In this post I will show how to configure Kubernetes to avoid downtime during pod deployments. In my example I will be using an ASP.NET Core microservice deployed to a Kubernetes cluster running in Azure, but the ideas discussed here are not specific to any particular technology or cloud provider.
Production Deployment
During deployments Kubernetes uses a strategy called rolling update by default. In short this means that Kubernetes will try to avoid downtime by synchronizing the termination of existing pods with the creation of new pods during the deployment window. The main goal is to avoid bringing down all existing pods before the new pods are operational. Rolling updates give you some control over the transition window by letting you specify how many pods you can afford to temporarily lose, if any, and to what degree you are willing to over-provision the pod pool while new pods spin up before the old ones are terminated.
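Those two knobs correspond to the maxUnavailable and maxSurge fields of a Deployment's rolling update strategy. A minimal sketch is shown below; the replica count and the two values are illustrative, not the exact settings from my cluster:

```yaml
# Deployment excerpt (illustrative values)
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # allow one extra pod while new pods spin up
```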
In the following sections I will discuss how to build on rolling updates to achieve zero downtime deployments. To verify that my cluster is operational during deployments I am running a load test during deployment. The idea is that any downtime should register as failed requests by the load test.
For load testing I am using a tool called Artillery. The load test script can be found below:
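An Artillery script along these lines keeps a steady stream of requests going while the deployment rolls out, so any dropped request shows up as an error in the report. The target URL, phase settings and endpoint path are placeholders, not the exact values from my setup:

```yaml
# load-test.yml (sketch; target, rates and path are placeholders)
config:
  target: "http://<public-ip-of-service>"
  phases:
    - duration: 300     # run long enough to cover the deployment window
      arrivalRate: 20   # 20 new virtual users per second
scenarios:
  - flow:
      - get:
          url: "/api/ping"
```

The test is then started with `artillery run load-test.yml` while the deployment is in progress.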
When is my new pod operational?
Rolling updates work well at the container level, but in practice the coordination has to be a bit more nuanced. It is not enough to ensure that a container is deployed; you also have to make sure whatever is running inside the container is operational. Any lag in the startup of your application is not accounted for by the default rolling update check. This means we may end up in a situation where Kubernetes assumes the new pod is up and running and decides to bring down the old pod too early. As a result, incoming requests may fail because they are routed to a new pod whose application is still spinning up inside the container.
I forced this condition, and failures in my load tests, by simulating a delay in the startup of my microservice. As you can see from the code below, I am just doing a sleep, but in a real scenario the server might be waiting to load some data on startup.
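Something along these lines reproduces the effect in an ASP.NET Core service. This is a sketch using the minimal hosting model; the 20 second sleep is an arbitrary value chosen to widen the window where the container is running but the application is not yet serving requests:

```csharp
using System;
using System.Threading;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

// Simulate slow application startup. In a real service this could be
// reference data loading or cache warm-up rather than a plain sleep.
Thread.Sleep(TimeSpan.FromSeconds(20));

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```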
How can we fix this?
Luckily we can add a secondary check in the form of a custom readinessProbe to enhance our definition of pod readiness. In my example I implemented the check as an HTTP request against a simple ping controller in the API hosted in the pod. There is some flexibility in how you configure the check, but the basic idea is that Kubernetes will wait for a successful HTTP status code from the endpoint before declaring the pod ready for traffic.
The readiness check can be found in the code listing below. I've also included the ping controller.
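A probe configuration along these lines goes into the container spec of the deployment; the path, port and timing values below are examples and should be tuned to your service:

```yaml
# Container spec excerpt (sketch; path, port and timings are examples)
readinessProbe:
  httpGet:
    path: /api/ping
    port: 80
  initialDelaySeconds: 5   # wait before the first probe
  periodSeconds: 5         # probe every 5 seconds
  failureThreshold: 3      # mark the pod not ready after 3 failed probes
```

The ping controller itself only needs to return a 200 once the application is able to handle requests; a minimal version could look like this:

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class PingController : ControllerBase
{
    // Responds with 200 OK once the application is up and serving requests.
    [HttpGet]
    public IActionResult Get() => Ok("pong");
}
```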
When is it safe to take down an existing pod?
The other scenario we have to consider is dropped requests caused by terminating a pod that is still handling traffic. In my test setup I increased the likelihood of this by simulating long running requests (up to 10 seconds). After adding the delay seen in the code below I started to see failures in my load tests. Specifically, the load test report showed several ECONNRESET errors, which is a good indication that requests were being dropped by the server.
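A controller along these lines simulates the long running requests; the route name and the random one to ten second delay are assumptions made for this sketch:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class SlowController : ControllerBase
{
    // Delay each response by a random 1-10 seconds so that requests are
    // likely to still be in flight when the pod is told to terminate.
    [HttpGet]
    public async Task<IActionResult> Get()
    {
        await Task.Delay(TimeSpan.FromSeconds(Random.Shared.Next(1, 11)));
        return Ok("done");
    }
}
```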
How can we fix this?
After reading this great article I learned that Kubernetes supports a preStop lifecycle hook that can be used to delay termination of pods. By adding a sleep in the preStop hook we give the old pod enough time to finish any requests already in flight before it terminates. By the time the delay elapses, the pod will also have been removed from the load balancer rotation, so no new traffic reaches it.
The preStop hook configuration is included below:
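A hook of this shape goes into the container spec; the 15 second sleep is an example and should cover both endpoint de-registration and the longest request you expect to serve:

```yaml
# Container spec excerpt (sketch; sleep length is an example)
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 15"]
```

Keep in mind that the pod's terminationGracePeriodSeconds (30 seconds by default) has to be long enough to cover the preStop sleep plus application shutdown, otherwise the pod is killed before it finishes.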
Full Example
After adding both the readinessProbe and the preStop hook I was able to reliably execute load tests during deployment without errors.
I have included the final Kubernetes deployment .yml below.
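The manifest below pulls the rolling update settings, the readinessProbe and the preStop hook together into one sketch; names, image, ports and timing values are placeholders for whatever your service actually uses:

```yaml
# deployment.yml (sketch; names, image, ports and timings are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-microservice
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: my-microservice
          image: myregistry.azurecr.io/my-microservice:latest
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /api/ping
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 3
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 15"]
```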