Azure Kubernetes Service

Arpit Sironiya
11 min read · Sep 28, 2021

What is Azure Kubernetes Service?

Microsoft Azure is a world-renowned cloud platform serving everyone from SMBs to large enterprises, while Kubernetes is rapidly becoming the standard way to manage cloud-native applications in production. Azure Kubernetes Service (AKS) brings the two together, allowing customers to create fully managed Kubernetes clusters quickly and easily.

AKS is a fully managed container orchestration service, built on open-source Kubernetes, that became available in June 2018 on the Microsoft Azure public cloud. It can be used to deploy, scale, and manage Docker containers and container-based applications in a cluster environment.

Azure Kubernetes Service provisions, scales, and upgrades resources on demand without downtime in the Kubernetes cluster, and the best part is that you don't need deep expertise in container orchestration to manage it.

AKS is an ideal platform for developers building modern applications with Kubernetes on Azure, while Azure Container Instances (ACI) are a good fit for deploying individual containers to the public cloud. ACI reduces the burden on developers of deploying and running their applications on Kubernetes infrastructure.

Azure Kubernetes Service Benefits

Azure Kubernetes Service is currently competing with both Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE). It offers numerous features such as creating, managing, scaling, and monitoring Azure Kubernetes Clusters, which is attractive for users of Microsoft Azure. The following are some benefits offered by AKS:

  • Efficient resource utilization: The fully managed AKS makes it easy to deploy and manage containerized applications with efficient resource utilization, elastically provisioning additional resources without the headache of managing the Kubernetes infrastructure yourself.
  • Faster application development: Developers spend much of their time on bug fixing. AKS reduces debugging time by handling patching, auto-upgrades, and self-healing, and it simplifies container orchestration, saving time and letting developers focus on building their apps.
  • Security and compliance: Cybersecurity is one of the most important aspects of modern applications and businesses. AKS integrates with Azure Active Directory (AD) and offers on-demand access control to greatly reduce threats and risks. AKS also complies with standards and regulatory requirements such as System and Organization Controls (SOC), HIPAA, ISO, and PCI DSS.
  • Quicker development and integration: AKS supports auto-upgrades, monitoring, and scaling, helping to minimize infrastructure maintenance and enabling faster development and integration. It can also provision additional compute resources for serverless Kubernetes within seconds, without the need to manage the underlying infrastructure.

Azure Kubernetes Service Features

Azure Kubernetes Service simplifies managed Kubernetes cluster deployment in the public cloud and also handles the health and monitoring of the managed Kubernetes service. Customers can create AKS clusters using the Azure portal or the Azure CLI and can manage the agent nodes.
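
For example, a basic cluster can be created with a couple of Azure CLI commands. This is a minimal sketch; the resource group, cluster name, region, and node count are placeholder values you would adapt to your environment.

```
# Create a resource group and a three-node AKS cluster (illustrative names)
az group create --name myResourceGroup --location eastus

az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Fetch credentials so kubectl can talk to the new cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```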

Template-based deployment using Terraform or Azure Resource Manager (ARM) templates can also be used to deploy an AKS cluster; these templates handle the automatic configuration of the cluster's master and worker nodes. Additional features such as advanced networking, monitoring, and Azure AD integration can also be configured. Let's take a look at the features that Azure Kubernetes Service (AKS) offers:

Open-source environment with enterprise commitment

Over the last couple of years, Microsoft has hired a number of employees to make Kubernetes easier for businesses and developers to use and to participate in open-source projects, becoming the third-largest contributor to the project. The goal is to make Kubernetes more business-oriented, cloud-native, and accessible by bringing best practices and lessons learned from diverse customers and users back to the Kubernetes community.

Nodes and clusters

In AKS, apps and supporting services run on Kubernetes nodes, and an AKS cluster is made up of one or more nodes. These nodes run on Azure Virtual Machines. Nodes with the same configuration are grouped together into a node pool. Nodes in the cluster are scaled up and down according to the resources the cluster requires. Nodes, clusters, and node pools are therefore the most prominent components of your Azure Kubernetes environment.
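
As a rough sketch, node pools can be added and scaled with the Azure CLI; the pool names, VM size, and counts below are placeholders.

```
# Add a second node pool for a different workload profile (illustrative values)
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool \
  --node-count 2 \
  --node-vm-size Standard_DS3_v2

# Manually scale an existing node pool up or down
az aks scale \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --nodepool-name nodepool1 \
  --node-count 5
```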

Role-based access control (RBAC)

AKS integrates with Azure Active Directory (AD) to provide role-based access control, security, and monitoring of the Kubernetes architecture based on identity and group membership. You can also monitor the performance of your AKS cluster and its apps.
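
As a sketch, AKS-managed Azure AD integration can be switched on at cluster creation; the Azure AD group object ID below is a placeholder for the group that should receive cluster-admin access.

```
# Create a cluster with AKS-managed Azure AD integration (illustrative values)
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-aad \
  --aad-admin-group-object-ids <aad-group-object-id> \
  --generate-ssh-keys
```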

Integration of development tools

Another important feature of AKS is that development tools such as Helm and Draft integrate seamlessly with it, and Azure Dev Spaces provides developers with a faster, iterative Kubernetes development experience. Containers can be run and debugged directly in the Azure Kubernetes environment with less configuration effort.

AKS also supports the Docker image format and integrates with Azure Container Registry (ACR) to provide private storage for Docker images. In addition, compliance with industry standards such as System and Organization Controls (SOC), the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), and ISO makes AKS more trustworthy across various businesses.
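
A minimal sketch of wiring an existing ACR registry to a cluster so its nodes can pull private images (the registry and cluster names are placeholders):

```
# Grant the cluster pull access to an Azure Container Registry
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --attach-acr myContainerRegistry
```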

Running any workload in Azure Kubernetes Service

You can orchestrate any type of workload in the AKS environment: move .NET apps to Windows Server containers, modernize Java apps in Linux containers, or run microservices in Azure Kubernetes Service. Whatever the workload, AKS will run it in the cluster environment.

Removes complexities

AKS removes the complexities of implementation, installation, maintenance, and security in the Azure cloud architecture. It also reduces cost substantially, since no per-cluster management charges are imposed on you.

Azure Kubernetes Service Use Cases

We'll take a look at some of the scenarios where AKS can be used.

  • Migration of existing applications: You can easily migrate existing apps to containers and run them with Azure Kubernetes Service. You can also control access via Azure AD integration and consume SLA-backed Azure services such as Azure Database using Open Service Broker for Azure (OSBA).
  • Simplifying the configuration and management of microservices-based apps: You can simplify the development and management of microservices-based apps as well as streamline load balancing, horizontal scaling, self-healing, and secrets management with AKS.
  • Bringing DevOps and Kubernetes together: AKS is also a reliable way to bring Kubernetes and DevOps together for a secure DevOps implementation. Combining the two improves the security and speed of the development process through Continuous Integration and Continuous Delivery (CI/CD) with dynamic policy controls.
  • Ease of scaling: AKS also makes scaling easy when combined with Azure Container Instances (ACI). The AKS virtual node provisions pods inside ACI that start within seconds, giving AKS the resources it needs on demand. If your AKS cluster runs out of resources, it will scale out additional pods automatically, without any extra servers to manage in the Kubernetes environment (see the virtual-node sketch after this list).
  • Data streaming: AKS can also be used to ingest and process real-time data streams from sensors and other data points and perform quick analysis.
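
Here is the virtual-node sketch referenced above. It assumes the cluster uses Azure CNI networking and that a dedicated subnet (here called myVirtualNodeSubnet) already exists for ACI; both names are placeholders.

```
# Enable the virtual node add-on so pods can burst into Azure Container Instances
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons virtual-node \
  --subnet-name myVirtualNodeSubnet
```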

Azure Kubernetes Service Pricing

AKS is a free container service: nothing is charged for Kubernetes cluster management. You pay only for the cloud resources you consume, such as VMs, storage, and networking, which makes it one of the most cost-effective container orchestration services on the market. Microsoft Azure also provides a Container Services pricing calculator to estimate the cost of the resources you need.

To get started, all you need to do is create a free account, then deploy and manage your Kubernetes environment: build microservices apps, deploy the Kubernetes cluster, and monitor and manage the environment.

Conclusion

Businesses are moving from on-premises to the cloud very quickly while building and managing modern, cloud-native applications. Kubernetes is an open-source solution that supports building and deploying cloud-native apps with complete orchestration. Azure Kubernetes Service is a robust and cost-effective container orchestration service that helps you deploy and manage containerized applications in seconds, with additional resources assigned automatically and without the headache of managing extra servers.

AKS nodes scale out automatically as demand increases. AKS offers numerous benefits, such as security with role-based access, easy integration with development tools, and the ability to run any workload in the Kubernetes cluster environment. It also uses resources efficiently, removes complexity, scales out easily, and lets you migrate any existing workload to a containerized environment, with all containerized resources accessible via the Azure portal or the Azure CLI.

Azure Kubernetes Services

Azure Kubernetes Service (AKS) is a fully-managed service that allows you to run Kubernetes in Azure without having to manage your own Kubernetes clusters. Azure manages all the complex parts of running Kubernetes, and you can focus on your containers. Basic features include:

  • Pay only for the nodes (VMs)
  • Easier cluster upgrades
  • Integrated with various Azure and OSS tools and services
  • Kubernetes RBAC and Azure Active Directory Integration
  • Enforce rules defined in Azure Policy across multiple clusters
  • Kubernetes can scale your nodes using the cluster autoscaler (see the example after this list)
  • Expand your scale even further by scheduling your containers on Azure Container Instances
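
Here is the cluster autoscaler example referenced above, a sketch for an existing cluster's default node pool (the minimum and maximum counts are illustrative):

```
# Enable the cluster autoscaler on the default node pool
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```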

Azure Kubernetes Best Practices

Cluster Multi-Tenancy

  • Logically isolate teams and projects within shared clusters to minimize the number of physical AKS clusters you deploy
  • Namespaces let you create isolation boundaries inside a single Kubernetes cluster (see the sketch after this list)
  • The same hub-and-spoke best practices apply, but within the Kubernetes cluster itself.
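
Here is the namespace sketch referenced above; the team names and manifest file are hypothetical.

```
# Create one namespace per team or project inside a shared cluster
kubectl create namespace team-alpha
kubectl create namespace team-beta

# Deploy into a specific namespace rather than the cluster default
# (app.yaml is a placeholder manifest)
kubectl apply -f app.yaml --namespace team-alpha
```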

Scheduling and Resource Quotas

  • Enforce resource quotas — Plan out and apply resource quotas at the namespace level (see the sketch after this list)
  • Plan for availability
  • Define pod disruption budgets
  • Limit resource-intensive applications — Apply taints and tolerations to constrain resource-intensive applications to specific nodes
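
Here is the sketch referenced above: a namespace-level ResourceQuota plus a PodDisruptionBudget. All names and limits are illustrative, and the policy/v1 API shown requires Kubernetes 1.21 or later (older clusters use policy/v1beta1).

```
kubectl apply -f - <<'EOF'
# Cap the total CPU and memory a team's namespace can request (illustrative limits)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-alpha-quota
  namespace: team-alpha
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# Keep at least two replicas of the app available during voluntary disruptions
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
  namespace: team-alpha
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
EOF
```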

Cluster Security

Azure AD and Kubernetes RBAC integration

  • Bind your Kubernetes RBAC roles with Azure AD Users/Groups
  • Grant your Azure AD users or groups access to Kubernetes resources within a namespace or across a cluster
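
A minimal sketch of binding the built-in "edit" role to an Azure AD group inside a single namespace; the group object ID and namespace are placeholders.

```
kubectl apply -f - <<'EOF'
# Give an Azure AD group edit rights inside the team-alpha namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-editors
  namespace: team-alpha
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: "<azure-ad-group-object-id>"    # the AAD group's object ID
roleRef:
  kind: ClusterRole
  name: edit                            # built-in ClusterRole
  apiGroup: rbac.authorization.k8s.io
EOF
```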

Kubernetes Cluster Updates

  • Kubernetes releases updates at a quicker pace than more traditional infrastructure platforms. These updates usually include new features and bug or security fixes.
  • AKS supports a limited window of Kubernetes minor versions at any given time (four at the time of writing; check the AKS version support policy for the current window).
  • Upgrading an AKS cluster is as simple as executing an Azure CLI command (see the sketch after this list). AKS handles a graceful upgrade by safely cordoning and draining old nodes to minimize disruption to running applications. Once new nodes are up and containers are running, AKS deletes the old nodes.
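
Here is the upgrade sketch referenced above; the cluster name and target version are placeholders.

```
# List the Kubernetes versions the cluster can upgrade to
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade the control plane and node pools to a specific version
az aks upgrade \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --kubernetes-version 1.21.2
```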

Node Patching

Linux

AKS automatically checks for kernel and security updates nightly and, if any are available, installs them on Linux nodes. If a reboot is required, AKS will not automatically reboot the node. A best practice for patching Linux nodes is to deploy kured (the Kubernetes Reboot Daemon), which watches for the /var/run/reboot-required file (created when a reboot is required) and reboots the node automatically during a predefined maintenance window.
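
A hedged sketch of installing kured with Helm, assuming the chart published by the kubereboot project (the repository URL and chart values may have changed since this was written; check the kured documentation):

```
# Install the Kubernetes Reboot Daemon so Linux nodes reboot safely when required
helm repo add kubereboot https://kubereboot.github.io/charts
helm repo update
helm install kured kubereboot/kured --namespace kube-system
# Maintenance windows (start/end time, time zone) can be set via the chart's values.
```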

Windows

The process for patching Windows nodes is slightly different: patches aren't applied nightly as they are on Linux nodes. Instead, Windows nodes are updated by performing an AKS upgrade, which creates new nodes from the latest base Windows Server image and patch level.
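
A sketch of rolling a Windows node pool onto the latest patched node image without changing the Kubernetes version; the pool name is a placeholder, and the --node-image-only flag assumes a reasonably recent Azure CLI.

```
# Refresh the Windows node pool with the latest base image and patches
az aks nodepool upgrade \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name npwin \
  --node-image-only
```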

Pod Identities

If your containers require access to the Azure Resource Manager (ARM) API, there is no need to provide fixed credentials that must be rotated periodically. Azure's pod identity solution can be deployed to your cluster, allowing your containers to dynamically acquire access to Azure APIs and services through Managed Identities.
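
A rough sketch assuming the open-source aad-pod-identity add-on is installed in the cluster; the managed identity resource ID, client ID, and pod selector are placeholders.

```
kubectl apply -f - <<'EOF'
# Describe a user-assigned managed identity to the cluster
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: my-app-identity
spec:
  type: 0          # 0 = user-assigned managed identity
  resourceID: /subscriptions/<sub-id>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-app-identity
  clientID: <client-id>
---
# Pods labeled aadpodidbinding: my-app will receive this identity
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: my-app-identity-binding
spec:
  azureIdentity: my-app-identity
  selector: my-app
EOF
```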

Limit container access

Avoid creating applications and containers that require escalated privileges or root access.
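
A minimal sketch of a pod spec that enforces a non-root, non-privileged container; the image and names are placeholders.

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-nonroot
spec:
  containers:
  - name: web
    image: nginxinc/nginx-unprivileged:stable   # placeholder image that runs as a non-root user
    securityContext:
      runAsNonRoot: true                # refuse to start if the image tries to run as root
      allowPrivilegeEscalation: false   # block setuid/sudo-style privilege escalation
      capabilities:
        drop: ["ALL"]                   # drop all Linux capabilities
EOF
```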

Monitoring

As AKS is already integrated with other Azure services, you can use Azure Monitor to monitor containers in AKS.

  • Toggle-based implementation; it can be enabled after the fact or enforced via Azure Policy (see the sketch after this list)
  • Multi-cluster and cluster-specific views
  • Integrates with Log Analytics
  • Ability to query historic data
  • Analyze your Cluster, Nodes, Controllers, and Containers
  • Alert on Cluster & Container performance by writing customizable Log Analytics search queries
  • Integrate Application logging and exception handling with Application Insights
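
Here is the toggle referenced above: a sketch of enabling Azure Monitor for containers on an existing cluster (names are placeholders).

```
# Turn on the monitoring add-on (Container insights) after the fact
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons monitoring
```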

Real Life Example

Logicworks is a Microsoft Azure Gold Partner that helps companies migrate their applications to Azure. In the example below, one of our customers was looking to deploy and scale their public-facing web application on AKS in order to address the following business requirements:

  • Achieve portability across on-prem and public clouds
  • Accelerate containerized application development
  • Unify development and operational teams on a single platform
  • Take advantage of native integration into the Azure ecosystem to easily achieve:
    • Enterprise-grade security
      • Azure Active Directory integration
      • Track, validate, and enforce compliance across the Azure estate and AKS clusters
      • Hardened OS images for nodes
    • Operational excellence
      • Achieve high availability and fault tolerance through the use of availability zones
      • Elastically provision compute capacity without needing to automate and manage underlying infrastructure
      • Gain insight and visibility into your AKS environment through automatically configured control plane telemetry, log aggregation, and container health

The customer's architecture includes many of the common best practices that ensure we can meet the customer's business and operational requirements:

Cluster Multi-Tenancy

SDLC environments are split across two clusters, isolating production from lower-level SDLC environments such as dev and stage. Within each cluster, the use of namespaces provides the same operational benefits while saving cost and complexity by not deploying an AKS cluster per SDLC environment.

Scheduling and Resource Quotas

Since multiple SDLC environments and other applications share the same cluster, it's imperative that scheduling and resource quotas are established to ensure applications and the services they depend on get the resources they require to operate. Combined with the cluster autoscaler, this ensures that our applications get the resources they need and that the compute infrastructure scales with them.

Azure AD integration

Azure AD is leveraged to authenticate and authorize users to access and initiate CRUD (create, read, update, and delete) operations against AKS clusters. AAD integration makes it convenient to unify the layers of authentication (Azure and Kubernetes) and to give the right personnel the level of access they require to meet their responsibilities while adhering to the principle of least privilege.

Pod Identities

Instead of hardcoding static credentials within our containers, Pod Identity is deployed into the default namespace and dynamically assigns Managed Identities to the appropriate pods as determined by their labels. This gives our example application the ability to write to Cosmos DB and our CI/CD pipelines the ability to deploy containers to the production and stage clusters.

Ingress Controller

Ingress controllers bring traffic into the AKS cluster by creating ingress rules and routes, providing application services with reverse proxying, traffic routing/load balancing, and TLS termination. This allows us to evenly distribute traffic across our application services to ensure scalability and meet reliability requirements.
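
A hedged sketch of an ingress rule, assuming an NGINX ingress controller is already installed in the cluster and a Service named web-svc exists; the host, TLS secret, and service names are placeholders.

```
kubectl apply -f - <<'EOF'
# Route external HTTPS traffic for app.example.com to the web-svc Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # or set spec.ingressClassName on newer controllers
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: web-tls                  # TLS certificate stored as a Kubernetes Secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
EOF
```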

Monitoring

Naturally, monitoring the day-to-day performance and operations of our AKS clusters is key to maintaining uptime and proactively solving potential issues. Using AKS’ toggle-based implementation, application services hosted on the AKS cluster can easily be monitored and debugged using Azure Monitor.

Further Reading

https://thenewstack.io/kubernetes-deep-dive-and-use-cases/

