At the moment one of the big trends in Continuous Delivery is to use containers to speed up value delivery. Containers enable you to run your application in an isolated environment that can be moved between machines while behaving exactly the same everywhere. Containers can therefore significantly speed up your delivery pipeline, enabling you to deliver features to your end users faster. Now that containers have become part of the Windows operating system, how can we leverage this to run our existing Windows based .NET workloads like ASP.NET without too many modifications? And how can you leverage the container innovations that are already available in the open source and Linux ecosystem: innovations that simplify on-demand scaling, fault tolerance and zero downtime deployments of new features? In this article, I will give you a glimpse of how you can deploy existing Windows based .NET workloads with Visual Studio Team Services to Azure Container Service, using Kubernetes as the cluster orchestrator.
Why Containers?
Isolation
When you create a container, its external dependencies are packed within the container image. Containers add a layer of isolation between the host and the container and between containerized processes, while sharing the kernel of the host OS. A container relies on the host OS for virtualized access to CPU, memory, network and registry.
Immutability
Containers are immutable. When you start a container based on a container image, you can make changes to the running environment, but the moment you stop the container and start a new instance from the same image, all those changes are discarded. If you want to capture a state change, you can save the state of a stopped container to a new container image; only containers created from that new image will show the changed state.
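With Docker, for example, you can capture the state of a stopped container in a new image with docker commit; mycontainer and myrepo/myimage are hypothetical names used for illustration:

docker stop mycontainer
docker commit mycontainer myrepo/myimage:v2
docker run -d myrepo/myimage:v2

Only containers started from myrepo/myimage:v2 will contain the captured changes; new containers started from the original image remain unchanged.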
Less Resource Intensive
Running a container requires far fewer resources than running a virtual machine, because the operating system is shared. When you start a virtual machine, you boot a whole new operating system on top of the running operating system and you only share the hardware. With containers you share the running operating system itself, including its memory, disk and CPU. This means the overhead of starting a container is very low, while it still provides strong isolation.
Fast Start-up Time
Since running a container requires only a few extra resources from the operating system, the start-up time of a container is very fast, comparable to starting a new process. The only extra work the OS needs to do is set up the isolation of the process so that it thinks it is running on its own on the machine. This isolation is done at the kernel level and is very fast.
Improve Server Density
When you own hardware, you want to utilize it as well as possible. With virtual machines we made a first step in this direction by sharing the hardware between multiple virtual machines. Containers take this one step further and enable us to utilize the memory, disk and CPU of the available hardware even better. Since we only consume the memory and CPU we actually need, we make better use of these resources. This means fewer idle servers and hence better utilization of the compute we have. Especially for cloud providers this is very important: the higher the server density (the number of things you can do with the hardware you have), the more cost-efficiently the data center runs. So it is not strange that containers are now getting a lot of attention and that a lot of new tooling is being built around managing and maintaining containerized solutions.
Why use Container Clusters?
When you want to run your application in production, you want to ensure your customers can keep using your services with as few outages as possible. Therefore, you need to build out an infrastructure that supports concepts like:
- Automatic recovery after an application crash
- Fault tolerance
- Zero downtime deployments
- Resource management cross machines
- Failover
Besides this, you want to manage all of this in a simple way. This is where container clusters come into play. The mainstream cluster orchestrators available today are Docker Swarm, DC/OS and Kubernetes. In this article, I will show how to use Kubernetes. Docker Swarm is not really a production grade solution and it seems that Docker is more focused on its Docker Datacenter solution. DC/OS and Kubernetes are the most used clusters in production, and Kubernetes already supports Windows agents; DC/OS will follow soon.
How to Create a cluster in Azure
The simplest way to create a container cluster is by using one of the public cloud providers. They all offer clusters that enable you to run your application at scale with a few clicks. Google Container Engine provides a managed cluster based on Kubernetes, Amazon uses DC/OS as its default, and on Azure you can select which cluster orchestration solution you want to use when you create a cluster. Let's have a closer look at how we can use Azure.
Portal, Command-line or ACS engine
In the portal, you can search for Azure Container Service (ACS) and you will find the option to create a cluster. You have to define the number of master nodes and agents and the agent operating system you want to use. Azure supports Windows agent nodes on both Docker Swarm and Kubernetes based clusters. This enables us to deploy our ASP.NET MVC application in Windows containers on a cluster.
From a continuous delivery perspective, we always prefer to create the infrastructure as part of our delivery pipeline. Creating the cluster via the command line is therefore the preferred approach, since it allows us to repeat the steps we have taken and to check them in as a provisioning script for setting up a new environment in the future.
Azure has a new command-line interface, the Azure CLI 2.0, that supports the creation of ACS clusters.
The following command can be used to create a cluster:
az acs create --orchestrator-type=kubernetes --resource-group=myresourcegroup --name=my-acs-cluster-name --dns-prefix=some-unique-value
The moment we create a cluster we will have a setup that looks as follows:
Master nodes
When we look at the cluster that is created for us, we will find that the master nodes are Linux based virtual machines. The masters are responsible for managing the cluster and for scheduling the containers based on the resource constraints we give in the deployment definitions. When you define a deployment, you send commands to the master, which in its turn will schedule the containers to be run on the agents. The way we communicate with the master is through a command line tool called kubectl. This tool issues commands against the API server running on the master nodes. The master nodes also run a set of containers that support the cluster, such as the cluster DNS service and the scheduler engine.
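Before kubectl can talk to the API server, you need the cluster credentials on your machine. A minimal sketch using the Azure CLI 2.0, assuming the resource group and cluster name used earlier, followed by two standard kubectl commands to verify the connection:

az acs kubernetes install-cli
az acs kubernetes get-credentials --resource-group=myresourcegroup --name=my-acs-cluster-name
kubectl cluster-info
kubectl get nodes

kubectl get nodes lists the virtual machines that make up the cluster.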
Agent nodes
The agents run the kubelet, which manages the communication and interaction with the master. The master communicates with the agents over the local cluster network, which is not exposed outside the cluster. If we want to expose a service (one or more containers) to the outside world, we can do this with a simple kubectl command, kubectl expose deployment <name of deployment> --port=<port#> --type=LoadBalancer, which in its turn will expose the deployment via the Azure load balancer to the outside world. The cluster will manage the configuration and creation of the required load balancer(s), the allocation of public IP addresses and the configuration of the load balancing rules.
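For example, to expose a hypothetical deployment called mywebapi on port 80 through the Azure load balancer:

kubectl expose deployment mywebapi --port=80 --type=LoadBalancer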
Deployments, Pods and Services
In a deployment, you describe the combination of Docker images you want to run in your cluster. This combination of images, including the shared storage options and run options, defines a pod. A pod is the logical unit in which container instances are managed. When container instances are created for the images in a pod, they will always run together on the same node. Let's say you have an application that consists of two parts, a web API and a local cache, that are only effective when running on the same node. To accomplish this, you can define a deployment that includes the two images, either from the command line or in a yaml file. The deployment also specifies how many instances of the pod will be started; by default, this is a single instance. For fault tolerance and scaling, you can increase the so-called replica count of your deployment to start multiple copies of the pod. The moment you start the deployment, the cluster is responsible for scheduling the pods on the various nodes and balancing the resources in the cluster.
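A minimal sketch of what such a deployment definition in a yaml file could look like, assuming the hypothetical image names mycontainerregistry/webapi and mycontainerregistry/localcache for the two parts of the application (the exact apiVersion and the Windows node selector label depend on the Kubernetes version of your cluster):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mywebapi
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: mywebapi
    spec:
      nodeSelector:
        beta.kubernetes.io/os: windows   # schedule the pod on the Windows agent nodes
      containers:
      - name: webapi
        image: mycontainerregistry/webapi:1.0
        ports:
        - containerPort: 80
      - name: localcache
        image: mycontainerregistry/localcache:1.0

You create the deployment with kubectl apply -f mywebapi.yaml; kubectl get pods will then show the pod instances being scheduled on the agent nodes.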
Every pod gets its own IP address in the local cluster network. Additionally, when you expose a deployment as a service, it gets an internal DNS record based on the service name; the cluster DNS service running on the master nodes takes care of the name resolution. Pods thereby become reachable from anywhere in the cluster, based on their DNS name and exposed ports. When you expose a deployment externally, the cluster will connect the pods to the external load balancer, so they are reachable from outside the cluster as well.
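As an illustration, from within the cluster the hypothetical mywebapi service can be resolved by its name from any pod, for example (provided the container image includes nslookup):

kubectl exec <name of a pod> -- nslookup mywebapi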
Kubernetes uses the notion of a Service as the abstraction that defines a logical set of pods and the policies to expose the endpoints we need. This is sometimes referred to as a micro-service. You can list which endpoints in your cluster are exposed to the outside world by running the command:
kubectl get services
This shows the list of services and their endpoint details, like the IP address and port on which the workloads are reachable.
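Instead of using kubectl expose, you can also describe the service in a yaml file, which fits nicely in source control. A minimal sketch for the hypothetical mywebapi deployment used earlier:

apiVersion: v1
kind: Service
metadata:
  name: mywebapi
spec:
  type: LoadBalancer   # ask the cluster to provision an external Azure load balancer
  ports:
  - port: 80
  selector:
    app: mywebapi

After kubectl apply -f mywebapi-service.yaml, the external IP address appears in the kubectl get services output as soon as Azure has provisioned the load balancer.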
Zero downtime replacement of deployments
When running applications at internet scale, you want to be able to deploy new features to your end users without any downtime of the application. A Kubernetes cluster makes this possible with the concept of a rolling update. A rolling update lets you push updated container images to your container registry and then instruct the cluster to replace the images of the running container instances. This can be done with a single command line: kubectl set image deployment <name of deployment> <name of container>=<repo name/new image name>
In this command, you specify exactly which container image you want to change, since a deployment can contain multiple containers, each with its own image.
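For example, to roll the hypothetical webapi container of the mywebapi deployment forward to a new image version and follow the progress of the rollout, you could run:

kubectl set image deployment mywebapi webapi=mycontainerregistry/webapi:1.1
kubectl rollout status deployment mywebapi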
The steps taken are the following:
- Spin up new pods on the various nodes
- Drain traffic from the old pods
- Route traffic to the new pods
In these steps the cluster ensures that the minimum number of pods always stays up and running, so we can guarantee that traffic keeps being handled while the deployment is in transit. The minimum number of replicas that always needs to be up can be specified in the deployment when it is created.
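In the deployment definition this is controlled by the rolling update strategy. A minimal sketch of the relevant part of the spec, extending the hypothetical mywebapi deployment used earlier:

spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod may be unavailable during the update
      maxSurge: 1         # at most one extra pod may be created above the desired count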
Deployments using Visual Studio Team Services (VSTS)
To ensure a robust, repeatable and reliable way of deploying your application to the cluster, you can use the build and release capabilities of VSTS. When you deploy a new feature to your application in the cluster, you will go through two primary phases. Phase 1, build, test and publish the container image. Phase 2, run the new image in the various test environments and finally deploy it to the cluster, using the zero-downtime deployment capability.
Phase 1, build the container image
In phase 1 we use the VSTS build infrastructure. Here we simply build our container image based on a docker file that we check into source control. In the build you define a step that builds the image using the docker file, picking up the build artifacts of your application (dll's, configuration, web content, etc.). You can see an example of such a build definition in the screenshot.
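Such a docker file could look roughly like this; a minimal sketch that assumes a classic ASP.NET application published to a folder called publish and the microsoft/aspnet Windows container base image:

# classic ASP.NET base image with IIS (Windows Server Core)
FROM microsoft/aspnet
# copy the published web application into the IIS default web site
COPY ./publish/ /inetpub/wwwroot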
Phase 2, test and deploy to production
In this phase we use the Release Management part of VSTS, which uses the same agent infrastructure as the build. You can define a set of environments where you first validate the new feature(s). The moment you have gathered enough evidence and confidence that the new application runs as expected, you can move to the production environment and deploy. In VSTS you specify the tasks that need to be run in each environment. Below you can see the series of steps you can use to test the image you just created in the first phase, by running Docker tasks that start the container. Next, you see a task for testing the running container, and the final step is to stop the container.
Deploying to production is done in the production environment with a set of tasks that execute the previously described command-line tool kubectl. For this to work you do need to install the kubectl binaries on the agent machine and add their location to the %PATH% system environment variable. From that moment on, you can issue any kubectl command against the cluster to create or update deployments. The flow for the deployment to production is shown in the next screenshot.
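A command-line task in the production environment could, for example, roll the deployment forward to the image that the build pushed, assuming the hypothetical names used earlier and that the build tagged the image with the built-in VSTS variable $(Build.BuildId):

kubectl set image deployment mywebapi webapi=mycontainerregistry/webapi:$(Build.BuildId)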
Improve speed of value delivery
In this article I introduced you to the concept of containerized delivery using containers and Azure Container Service. It is now possible to run your existing ASP.NET applications in containers, because Microsoft added container and Docker support to Windows Server 2016 and Windows 10. Open source cluster orchestrators like Kubernetes, DC/OS and Docker Swarm have also added support for Windows containers, unlocking containerized delivery in the Windows ecosystem as well. Because containers can easily be moved between different environments while guaranteeing exactly the same behavior, it becomes much simpler and faster to deploy to both your test and production environments. When you add the flexibility of scaling, fault tolerance and zero downtime deployments with clusters, you can really improve the speed of feature delivery to your customers.
Conclusion
Containerized delivery is now also possible for your existing Windows workloads like ASP.NET. You can use Azure Container Service clusters with a cluster orchestrator like Kubernetes to manage your workloads at scale, with far more ease of deployment than in the past. With containerized delivery we simplify the build, test and deployment pipelines and we significantly improve our delivery cycle time.
This article is part of XPRT. magazine.