Review & Case Study on Azure Kubernetes Service (AKS)
In this article you will learn about Azure Kubernetes Service (AKS) and its use cases.
Welcome to all!
I am here again with a new article in the field of Kubernetes.
I am glad to tell you that you will definitely learn something new from this article. 🔥🔥
I am going to discuss AKS in depth, starting this article from scratch.
So let’s get started.
What is Azure Kubernetes Service?
Microsoft Azure is a world-renowned cloud platform serving everyone from SMBs to large-scale businesses, while Kubernetes is a modern approach that is rapidly becoming the standard way to manage cloud-native applications in a production environment. Azure Kubernetes Service (AKS) brings both together, allowing customers to create fully managed Kubernetes clusters quickly and easily.
AKS is a fully managed container orchestration service, built on the open-source Kubernetes system, that became generally available in June 2018 on the Microsoft Azure public cloud. It can be used to deploy, scale, and manage Docker containers and container-based applications in a cluster environment.
Azure Kubernetes Service offers provisioning, scaling, and upgrades of resources on demand without any downtime in the Kubernetes cluster. The best thing about AKS is that you don’t need deep knowledge of or expertise in container orchestration to manage it.
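As a quick sketch of what that looks like in practice, the Azure CLI commands below create, scale, and upgrade a cluster. The resource group and cluster names are placeholders I’ve chosen for illustration, so adjust them for your environment:

```shell
# Create a resource group and a managed AKS cluster (names are placeholders).
az group create --name myResourceGroup --location eastus

az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Scale the node pool on demand, without cluster downtime.
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 4

# Upgrade to a newer Kubernetes version (list the available versions first).
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version <new-version>
```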
AKS is certainly an ideal platform for developers building modern applications with Kubernetes on the Azure architecture, and Azure Container Instances are a natural companion for deploying containers on the public cloud. Azure Container Instances reduce the burden on developers of deploying and running their applications on Kubernetes infrastructure.
Azure Kubernetes Service Benefits
Azure Kubernetes Service is currently competing with both Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE). It offers numerous features such as creating, managing, scaling, and monitoring Azure Kubernetes Clusters, which is attractive for users of Microsoft Azure. The following are some benefits offered by AKS:
- Efficient resource utilization: The fully managed AKS offers easy deployment and management of containerized applications with efficient resource utilization that elastically provisions additional resources without the headache of managing the Kubernetes infrastructure.
- Faster application development: Developers spend much of their time on bug fixing. AKS reduces debugging time by handling patching, auto-upgrades, and self-healing, and it simplifies container orchestration. This saves a lot of time, so developers can focus on developing their apps and remain more productive.
- Security and compliance: Cybersecurity is one of the most important aspects of modern applications and businesses. AKS integrates with Azure Active Directory (AD) and offers on-demand access to the users to greatly reduce threats and risks. AKS is also completely compliant with the standards and regulatory requirements such as System and Organization Controls (SOC), HIPAA, ISO, and PCI DSS.
- Quicker development and integration: Azure Kubernetes Service (AKS) supports auto-upgrades, monitoring, and scaling and helps in minimizing the infrastructure maintenance that leads to comparatively faster development and integration. It also supports provisioning additional compute resources in Serverless Kubernetes within seconds without worrying about managing the Kubernetes infrastructure.
Accelerate containerised application development
Easily define, deploy, debug and upgrade even the most complex Kubernetes applications and automatically containerise your applications. Use modern application development to accelerate time to market.
Add a full CI/CD pipeline to your AKS clusters with automated routine tasks and set up a canary deployment strategy in just a few clicks. Detect failures early and optimise your pipelines with deep traceability into your deployments.
Gain visibility into your environment with the Kubernetes resources view, control-plane telemetry, log aggregation and container health, accessible in the Azure portal and automatically configured for AKS clusters.
Increased operational efficiency
Rely on built-in automated provisioning, repair, monitoring, and scaling. Get up and running quickly and minimize infrastructure maintenance.
- Easily provision fully managed clusters with Prometheus-based monitoring capabilities.
- Use Azure Advisor to optimize your Kubernetes deployments with real-time, personalized recommendations.
- Save on costs by using deeply discounted capacity with Azure Spot.
- Elastically add compute capacity with serverless Kubernetes, in seconds.
- Achieve higher availability and protect applications from datacenter failures using availability zones.
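The Azure Spot and elastic-scaling points above can be sketched with the Azure CLI. The pool and cluster names are placeholders, and a Spot pool can only be added as a secondary node pool:

```shell
# Add a deeply discounted Spot node pool with the cluster autoscaler enabled.
# --spot-max-price -1 means "pay up to the current on-demand price".
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name spotpool \
  --priority Spot \
  --eviction-policy Delete \
  --spot-max-price -1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```

Spot capacity can be evicted at any time, so it suits interruptible batch or burst workloads rather than stateful services.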
Build on an enterprise-grade, more secure foundation
- Dynamically enforce guardrails defined in Azure Policy at deployment or in CI/CD workflows. Deploy only validated images to your private container registry.
- Get fine-grained identity and access control to Kubernetes resources using Azure Active Directory.
- Enforce pod security context and configure across multiple clusters with Azure Policy. Track, validate, reconfigure, and get compliance reports easily.
- Achieve superior security with a hardened operating system image, automated patching, and more. Automate threat detection and remediation using Azure Security Center.
- Use Azure Private Link to limit Kubernetes API server access to your virtual network. Use network policy to secure your communication paths.
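Several of these controls are simply flags at cluster creation time. A minimal sketch, assuming hypothetical names and an existing Azure AD admin group:

```shell
# Private, Azure AD-integrated cluster with network policy enforcement.
az aks create \
  --resource-group myResourceGroup \
  --name mySecureCluster \
  --enable-aad \
  --aad-admin-group-object-ids <aad-group-object-id> \
  --enable-private-cluster \
  --network-plugin azure \
  --network-policy azure \
  --generate-ssh-keys
```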
Run any workload in the cloud, at the edge or as a hybrid
Orchestrate any type of workload running in the environment of your choice. Whether you want to move .NET applications to Windows Server containers, modernise Java applications in Linux containers or run microservices applications in the public cloud, at the edge or in hybrid environments, Azure has the solution for you.
Learn about the Kubernetes core concepts and apply best practices in production.
Common uses for Azure Kubernetes Service (AKS)
Migrate your existing application to the cloud, build a complex application that uses machine learning or take advantage of the agility offered by a microservices architecture.
Lift and shift to containers with AKS
Easily migrate existing applications to containers and run them within the Azure managed Kubernetes service (AKS). Control access via integration with Azure Active Directory and access SLA-backed Azure services such as Azure Database for MySQL using OSBA (Open Service Broker for Azure) for your data needs.
Data Flow
- User converts existing application to container(s) and publishes container image(s) to the Azure Container Registry
- Using Azure Portal or command line, user deploys containers to AKS cluster
- Azure Active Directory is used to control access to AKS resources
- Easily access SLA-backed Azure Services such as Azure Database for MySQL using OSBA (Open Service Broker for Azure)
- Optionally, AKS can be deployed with a VNET virtual network
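The flow above can be sketched end to end with the Azure CLI, Docker, and kubectl. The registry, cluster, image, and port values here are placeholders:

```shell
# 1. Containerize the existing app and publish the image to Azure Container Registry.
az acr create --resource-group myResourceGroup --name myregistry --sku Basic
az acr login --name myregistry
docker build -t myregistry.azurecr.io/myapp:v1 .
docker push myregistry.azurecr.io/myapp:v1

# 2. Fetch cluster credentials and deploy the container to AKS.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl create deployment myapp --image=myregistry.azurecr.io/myapp:v1
kubectl expose deployment myapp --type=LoadBalancer --port=80 --target-port=8080
```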
Microservices with AKS
Use AKS to simplify the deployment and management of microservices-based architectures. AKS streamlines horizontal scaling, self-healing, load balancing, and secret management.
Architecture
Data Flow
- Developer uses IDE such as Visual Studio to commit changes to GitHub
- GitHub triggers a new build on Azure DevOps
- Azure DevOps packages microservices as containers and pushes them to the Azure Container Registry
- Containers are deployed to AKS cluster
- Users access services via apps and website
- Azure Active Directory is used to secure access to the resources
- Microservices use databases to store and retrieve information
- Administrator accesses via a separate admin portal
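The horizontal scaling and secret management that AKS streamlines can be sketched with two kubectl commands; the service name and values are hypothetical:

```shell
# Autoscale a microservice between 2 and 10 replicas at ~50% average CPU.
kubectl autoscale deployment orders-service --cpu-percent=50 --min=2 --max=10

# Store a database connection string as a Kubernetes secret for services to mount.
kubectl create secret generic db-conn \
  --from-literal=connectionString='Server=mydb.mysql.database.azure.com;Database=orders'
```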
Secure DevOps for AKS
DevOps and Kubernetes are better together. By implementing secure DevOps with Kubernetes on Azure, you can achieve the balance between speed and security and deliver code faster at scale. Put guardrails around the development processes using CI/CD with dynamic policy controls, and accelerate the feedback loop with constant monitoring. Use Azure Pipelines to deliver fast while ensuring enforcement of critical policies with Azure Policy. Azure provides real-time observability for your build and release pipelines, and the ability to apply compliance audits and reconfigurations easily.
Architecture
Data Flow
- Developers rapidly iterate, test, and debug different parts of an application together in the same Kubernetes cluster.
- Code is merged into a GitHub repository, after which automated builds and tests are run by Azure Pipelines.
- Release pipeline automatically executes pre-defined deployment strategy with each code change.
- Kubernetes clusters are provisioned using tools like Helm charts that define the desired state of app resources and configurations.
- Container image is pushed to Azure Container Registry.
- Cluster operators define policies in Azure Policy to govern deployments to the AKS cluster.
- Azure Policy audits requests from the pipeline at the AKS control plane level.
- App telemetry, container health monitoring, and real-time log analytics are obtained using Azure Monitor.
- Insights used to address issues and fed into next sprint plans.
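Steps 4 and 6 above can be sketched as follows; the chart path, namespace, and image values are placeholders:

```shell
# Deploy the desired state of the app with a Helm chart.
helm upgrade --install myapp ./charts/myapp \
  --namespace production --create-namespace \
  --set image.repository=myregistry.azurecr.io/myapp \
  --set image.tag=v1

# Enable the Azure Policy add-on so deployments are audited at the control plane.
az aks enable-addons --addons azure-policy --resource-group myResourceGroup --name myAKSCluster
```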
Bursting from AKS with ACI
Use the AKS virtual node to provision pods inside ACI that start in seconds. This enables AKS to run with just enough capacity for your average workload. As you run out of capacity in your AKS cluster, scale out additional pods in ACI without any additional servers to manage.
Architecture
Data Flow
- User registers container in Azure Container Registry
- Container images are pulled from the Azure Container Registry
- AKS virtual node, a Virtual Kubelet implementation, provisions pods inside ACI from AKS when traffic spikes
- AKS and ACI containers write to shared data store
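Here is a minimal sketch of enabling the virtual node and scheduling burstable pods onto it, assuming an Azure CNI cluster with a dedicated subnet (all names are placeholders):

```shell
# Enable the virtual node add-on (requires Azure CNI networking).
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons virtual-node \
  --subnet-name myVirtualNodeSubnet

# Pods burst to ACI only if they select the virtual node and tolerate its taint.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: burst-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: burst-app
  template:
    metadata:
      labels:
        app: burst-app
    spec:
      containers:
      - name: burst-app
        image: myregistry.azurecr.io/myapp:v1
      nodeSelector:
        kubernetes.io/role: agent
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
EOF
```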
How customers are using Azure Kubernetes Service (AKS)
Hafslund Nett (Hafslund) — the power grid operator that serves 1.5 million Norwegians — determined that legacy systems for reading meter data needed higher capacity and that externally developed software was difficult to manage. To address the issue, Hafslund chose to develop its own meter-system software, using Microsoft Azure as its cloud platform, Azure Kubernetes Service (AKS) to manage software containers, and Azure Monitor for containers to optimize container performance. Hafslund IT staff will soon save time managing their improved systems, and customers will benefit from higher reliability.
“We wanted a platform to speed development and testing but do it safely, without losing control over security and performance. That’s why Azure and AKS are the perfect fit for us.”
— Ståle Heitmann: Chief Technology Officer
Hafslund Nett
Hafslund Nett (Hafslund) owns and operates the regional power grid for more than 40 Norwegian communities in the Oslo area. The company is one of the most advanced utilities in Europe, with state-of-the-art technology to manage, monitor, and optimize operations at a reasonable price. To keep the lights on for the 1.5 million people in its coverage territory, Hafslund continually looks for modern ways to keep its IT systems highly effective. “We evaluate IT platforms, determine whether to buy software or develop it in-house, and establish guidelines, best practices, and rules for administering solutions,” explains Ståle Heitmann, Chief Technology Officer at Hafslund Nett. For example, the company is deploying thousands of smart-meter Internet of Things (IoT) devices that measure power consumption. “Our systems will manage the massive amounts of data the IoT devices generate — all in addition to our traditional role as a utility provider.”
Apply containerization and Azure Kubernetes Service
Based on experience with large-scale projects in other corporate divisions, Hafslund recognized the value of containerized applications and the efficiencies of scale that they offer. Working with Microsoft Partner Network members Aurum and Computas, Hafslund turned to Microsoft Azure and Azure Kubernetes Service (AKS) to create its own smart-meter software powered by efficient, industry-recognized tools.
Hafslund had used Azure on previous projects, so Heitmann knew the platform offers high performance and reliability. For example, he says, “We established highly secure networking between on-premises IT resources and Azure, which was a prerequisite for workloads we want to run with Kubernetes.” Additionally, he notes that Azure Active Directory readily supports role-based access control on Kubernetes clusters, which is a security measure that aligns with the company’s critical best practices.
Regarding AKS, Hafslund is planning for projects in the future that will benefit from containerization. The company is building a platform that uses AKS to support not only meter-reading but also most internally developed software covering three broad areas:
- Data integration — that is, microservices to implement representational state transfer (REST) interfaces used by an internal central data integration tool
- APIs that expose data both internally and externally
- Complex systems consisting of several interoperating services that interact directly with users
Emphasizing the point about the company’s use of AKS, Heitmann says,
“We are building our own new applications using microservices, and AKS is our choice for orchestrating their workloads.”
As part of its digital transformation efforts, shipping giant A.P. Moller — Maersk needed to streamline IT operations and optimize the value of its IT resources. Maersk adopted Microsoft Azure, migrated key workloads to the cloud, and modernized its open-source software, which included the adoption of Kubernetes on Azure. Maersk software engineers now spend less time on container software management and more time on innovation and value-added projects. The resulting business value is savings on resource costs, faster solution delivery time, and the ability to attract expert IT talent.
The key question we ask is, ‘Where does the cloud stop and where does our work begin?’ For the Connected Vessel program, Azure made the most business sense, and it promotes agility.
Rasmus Hald: Head of Cloud Architecture
A.P. Moller — Maersk
Headquartered in Copenhagen, A.P. Moller — Maersk moves things — a lot of things to a lot of places. It’s the biggest container-shipping company in the world. Shipping is a physical activity, but the company decided to make its operations increasingly digital. As Rasmus Hald, Head of Cloud Architecture at A.P. Moller — Maersk, puts it, “The Maersk strategy is to overlay physical container-shipping with digital services that strengthen customer engagement.” He notes that achieving this level of transformation requires thorough analysis of huge amounts of data from cargo ships, ports, and the company’s everyday activities worldwide.
Implementing a container strategy
As part of its overall cloud migration strategy, Maersk chose Azure Kubernetes Service (AKS) to handle the automation and management of its containerized applications. (A containerized application is portable runtime software that is packaged with the dependencies and configuration files it needs in order to run, all in one place.) AKS fully supports the dynamic application environment in Maersk without requiring orchestration expertise.
The company uses AKS to help set up, upgrade, and scale resources as needed, without taking its critical applications offline. “We want to focus on using containers as a way to package and run our code in the cloud, not focus on the software required to construct and run the containers,” Hald says. “Using Kubernetes on Azure satisfies our objectives for efficient software development. It aligns well with our digital plans and our choice of open-source solutions for specific programming languages.”
Additionally, Maersk chose Azure over other cloud platforms because Azure offers a wider variety of available services and global scalability that supports the number and type of tasks the company wants to undertake. “The key question we ask is, ‘Where does the cloud stop and where does our work begin?’ For the Connected Vessel program, Azure made the most business sense, and it promotes agility,” says Hald. “Just the fact that we’re asking questions like this illustrates our paradigm shift to support digital transformation.”
Helping millions of patients benefit from better care? All in a day’s work for worldwide healthcare technology company Siemens Healthineers. Siemens Healthineers is leading the digitalization of healthcare with its Digital Ecosystem, which helps health providers and solution developers bring more value to the delivery of care, ultimately improving the quality of insights derived from healthcare data. Siemens Healthineers uses Microsoft Azure to make solutions more accessible, and it uses Azure Kubernetes Service (AKS) and other tools for a fast, efficient, and competitive development pipeline.
Using Azure Kubernetes Service puts us into a position to not only deploy our business logic in Docker containers, including the orchestration, but also … to easily manage the exposure and control and meter the access.
Thomas Gossler: Lead Architect, Digital Ecosystem Platform
Siemens Healthineers
Using Azure services to streamline development processes
With a solid, dependable cloud platform in place, Siemens Healthineers is focusing on speeding development and implementing a continuous delivery approach. The company not only provides its own software products, but it has also decided to encourage other developers to use its infrastructure to deliver solutions and services and bring even more value to customers. This requires rethinking the development processes.
“Stepping from the development of our own added-value services into becoming more of a platform provider makes it important for us to deconstruct into microservices,” says Thomas Friese, Vice President, Digital Ecosystem Platform, at Siemens Healthineers. “With a microservice-based architecture, internal and external developers can independently release microservices at any point in time, which makes development faster and enables a continuous delivery approach completely based on Azure. We have set an astonishing speed for product development.”
Siemens Healthineers has taken a containerized approach to application development, which means it uses virtualization at the application operating system level as opposed to launching virtual machines. The company deploys its distributed applications in Docker containers, orchestrates those containers using Kubernetes, and monitors and manages the environment with Azure Kubernetes Service (AKS). Siemens Healthineers chose AKS because developers can quickly and easily work with their applications with minimal operations and maintenance overhead — provisioning, upgrading, and scaling resources without taking applications offline. With AKS, Siemens Healthineers can comfortably scale out its Kubernetes environment and scale back again if it doesn’t need the compute power, creating very high-density deployments on a microservices level.
“Using Azure Kubernetes Service puts us into a position to not only deploy our business logic in Docker containers, including the orchestration,” says Gossler, “but also, through application gateway and API management, to easily manage the exposure and control and meter the access continuously.”
Managing a stable runtime environment with AKS helps Siemens Healthineers realize shorter release cycles and achieve its desired continuous delivery approach. Highly regulated environments like healthcare typically require many steps to go from development to public release, but implementing a continuous delivery pipeline has simplified the process and helped Siemens Healthineers achieve the speed it wants. And when rolling out new software, the company appreciates that it doesn’t have to worry about breaking its production environment, due to AKS upgrade and failure domains — new releases get deployed smoothly to customers with zero downtime. “With numerous competitors, big and small, entering the healthcare market, we need to accelerate delivery of improved functionality and new features to our customers to stay ahead of the competition,” says Gossler.
Siemens Healthineers relies on a serverless application model to expedite development, and as a result, developers have a very short path from coding to actual operation of their code. The Siemens Healthineers development team also adopted Azure Functions to make application management more efficient. “We see many workloads coming that run occasionally or need to be updated more often,” says Gossler. “We consider Azure Functions a very good mechanism to speed up those workloads and manage the functionality during our daily operations. We definitely plan to make a lot more use of Azure Functions over time.”
With offices in over 60 countries, 10,000 staff and revenue exceeding $2 billion, Finastra is a significant Fintech force. Already an established leader in financial software and cloud solutions, its first platform offering, FusionFabric.cloud, launched to public cloud in June 2018.
Azure is a key differentiator for Finastra. Microsoft combines first-class technology with world-class brand recognition to create instant impact for our customers.
Félix Grévy: Global Head of Product Management
Finastra
Embracing Azure Kubernetes Service
Kubernetes is at the heart of the FusionFabric.cloud platform, allowing the orchestration of Docker containers. Fintech applications can run and scale with ease on Azure Kubernetes Service (AKS), the next-generation service that builds on the Azure Container Service Engine (ACS). Currently on an ACS-engine, Finastra plans to migrate to AKS. AKS brings a fundamental benefit to the development team at Finastra, as Grévy explains, “AKS gives us a pure Kubernetes and Docker imaging environment that we don’t have to manage ourselves. Our team has regained the resources to accelerate deployment and maximize our PaaS offering.”
The team uses Azure Container Registry (ACR) to simplify container development, while geo-replication helps run disaster recovery procedures for different locations. The ACR can also audit whether data residency is running in the same jurisdiction as the banks. Inbuilt application auto scaling allows the team to manage cost burden and react quickly to meet spiked demands of partners and customers.
Technical Story
When Robert Bosch GmbH set out to solve the problem of drivers going the wrong way on highways, the goal was to save lives. Other services like this existed in Germany, but precision and speed cannot be compromised. Could Bosch get precise enough location data — in real time — to do this? The company knew it had to try.
The result is the wrong-way driver warning (WDW) service and software development kit (SDK). Designed for use by app developers and original equipment manufacturers (OEMs), the architecture pivots on an innovative map-matching algorithm and the scalability of Microsoft Azure Kubernetes Service (AKS) in tandem with Azure HDInsight tools that integrate with the Apache Kafka streaming platform.
This article dives into the solution architecture.
When we started our journey on Azure, we were a really small team — just one or two developers. Our partnership with Microsoft, the support from their advisory teams, the great AKS documentation and enterprise expertise — it all helped us very much to succeed.
Bernhard Rode: software engineer
Bosch
How the solution works
The wrong-way driver warning solution runs as a service on Azure and provides an SDK. Service providers, such as smartphone app developers and OEM partners, can install the WDW SDK to make use of the service within their products. The SDK maintains a list of hotspots within which GPS data is collected anonymously. These hotspots include specific locations, such as segments of divided highways and on-ramps. Every time a driver enters a hotspot, the client generates a new ID, so the service remains anonymous.
Today the solution ingests approximately 6 million requests per day from devices emitting GPS data or from a partner’s back-end system. Anyone can download the SDK and try it out. The APIs grant a free request quota for test accounts. For production use, service providers request permission and then use the WDW SDK to register themselves for their own API authentication keys via the Azure API Management developer portal. Within their application, they configure the service’s endpoints by authenticating with their key for ingress and push notifications. The WDW service on Azure does the rest.
When a driver using a WDW-configured app or in-car system enters a hotspot, the WDW SDK begins to collect GPS signals and sensor events, such as acceleration, rotational data, and heading information. These data points are packaged as observations and sent at a frequency of 1 hertz (Hz), that is, one event per second, via HTTP to the WDW service on Azure, either directly or to the service provider’s back end and then to Azure. The SDK supports both routes so that service providers stay in charge of the data that is sent to the WDW system.
If the WDW service determines that the driver is going the wrong way within a hotspot, it sends a notification to the originating device and to other drivers in the vicinity who are also running an app with the WDW SDK.
Additional Azure services
One goal of the project was to take advantage of Azure platform as a service (PaaS) tools whenever they would save time or costs. For example, Azure Cache for Redis provides fast, in-memory storage, while Azure Database for PostgreSQL delivers a highly available relational database that requires almost no administration. In addition, the team plans to migrate to Azure Data Explorer, a fast, fully managed data analytics service for real-time analysis on large volumes of streaming data.
The team also used the following services:
- Azure API Management provides the gateway to the back end. It pushes observations from client devices, currently serving about 6 million requests per day.
- Azure App Service was used to build and host multiple internal front ends used by the team for debugging and monitoring. For example, a real-time dashboard shows all the drivers currently passing a hotspot. App Service supports both Windows and Linux and works with the team’s automated deployment pipeline.
- Azure Content Delivery Network (CDN) uses the closest point of presence (POP) server to cache static objects locally, thus reducing load times, saving bandwidth, and speeding responsiveness of the WDW service.
- Azure Databricks is an Apache Spark–based analytics platform designed to support team collaboration. It enables Bosch data scientists, data engineers, and business analysts to make the most of the WDW service’s big data pipeline.
What we like about AKS is the simplified Kubernetes experience. It’s click and deploy, it’s click and scale. It’s infrastructure as code too, which is quite cool for us.
Christian Jeschke: product owner
Bosch
Conclusion
Businesses are moving from on-premises infrastructure to the cloud very quickly while building and managing modern, cloud-native applications. Kubernetes is an open-source solution that supports building and deploying cloud-native apps with complete orchestration. Azure Kubernetes Service is a robust and cost-effective container orchestration service that helps you deploy and manage containerized applications in seconds, with additional resources assigned automatically and without the headache of managing extra servers.
AKS nodes scale out automatically as demand increases. AKS has numerous benefits, such as security with role-based access control, easy integration with other development tools, and the ability to run any workload in the Kubernetes cluster environment. It also uses resources efficiently, removes complexity, scales out easily, and lets you migrate any existing workload to a containerized environment, and all containerized resources can be accessed via the AKS management portal or the AKS CLI.