Canary deployment with Linkerd and Kubernetes

by Adam Świątkowski
6 mins read
Canary deployment

Every deployment of a new application version carries risk. To minimize this risk you can use the canary deployment technique, a practice of shifting a small portion of traffic (e.g. 10%) to the new version. Thanks to Linkerd, a service mesh for Kubernetes, this approach can be implemented easily.

What is canary deployment?

Prepare environment

The first thing we need to do is install the Linkerd CLI before we apply Linkerd to our Kubernetes cluster. Let’s start with the following command:

curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
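
The installer normally places the CLI under ~/.linkerd2/bin and prints the exact line to use; assuming the default install location, adding it to your PATH looks like this:

export PATH=$HOME/.linkerd2/bin:$PATH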

Don’t forget to add linkerd to your PATH as shown above. If you encounter any trouble at this step, feel free to check the official Linkerd guide here. If the above steps were done properly, the below commands should produce output like this:

➜  ~ linkerd version
Client version: stable-2.11.2
➜  ~ linkerd check --pre                       

Linkerd core checks
===================

kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

pre-kubernetes-setup
--------------------
√ control plane namespace does not already exist
√ can create non-namespaced resources
√ can create ServiceAccounts
√ can create Services
√ can create Deployments
√ can create CronJobs
√ can create ConfigMaps
√ can create Secrets
√ can read Secrets
√ can read extension-apiserver-authentication configmap
√ no clock skew detected

linkerd-version
---------------
√ can determine the latest version
√ cli is up-to-date

Status check results are √

Now it’s time to install Linkerd as a service mesh in our Kubernetes cluster.

linkerd install | kubectl apply -f -
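
Before moving on, it’s worth verifying that the control plane came up healthy:

linkerd check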

To proceed with canary deployment we also need to install the viz extension for Linkerd. Viz provides the metrics and visibility functionality.

linkerd viz install | kubectl apply -f -
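
You can verify the extension with its own check command:

linkerd viz check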

There’s also a need to install the Linkerd SMI extension, which will allow us to configure the TrafficSplit specification. Don’t forget to add the SMI CLI to your OS’s PATH variable as well.

curl --proto '=https' --tlsv1.2 -sSfL https://linkerd.github.io/linkerd-smi/install | sh
linkerd smi install | kubectl apply -f -
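
To confirm that the TrafficSplit CRD is now available in the cluster, you can query it directly (the name below comes from the SMI split.smi-spec.io API group):

kubectl get crd trafficsplits.split.smi-spec.io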

The last prerequisite is Flagger. Flagger is responsible for automating the process of creating new Kubernetes resources, watching metrics, and incrementally sending users to the new version of the application.

kubectl apply -k github.com/fluxcd/flagger/kustomize/linkerd
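
Flagger’s Linkerd kustomization installs it into the linkerd namespace, so you can wait for its deployment to become ready with:

kubectl -n linkerd rollout status deploy/flagger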

Creating Kubernetes namespace to automate Linkerd injection

First of all, let’s create a namespace for our application. Thanks to the linkerd.io/inject: enabled annotation, the Linkerd proxy will be injected automatically into resources created inside the namespace. Let’s create a namespace.yaml file with the below content:

apiVersion: v1
kind: Namespace
metadata:
  name: srv
  annotations:
    linkerd.io/inject: enabled

Apply this definition to our Kubernetes cluster:

kubectl apply -f namespace.yaml 
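
A quick way to double-check that the injection annotation landed on the namespace:

kubectl get namespace srv -o jsonpath='{.metadata.annotations}'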

The next things we need to create are a service and a deployment. Quick theory on what pods, deployments and services are in Kubernetes: pods are the smallest deployable units of computing that you can create and manage in Kubernetes; a deployment is responsible for keeping a set of pods running; a service is responsible for enabling network access to a set of pods.

What are the differences between pod, deployment and service?

Kubernetes service definition

As mentioned in the previous section, we need to create a service definition to allow communication with our test pods. Let’s create a file named service.yaml and fill it with the below content:

apiVersion: v1
kind: Service
metadata:
  namespace: srv
  labels:
    app: servicetest
  name: servicetest
spec:
  ports:
  - name: "http"
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: servicetest

Let’s apply the above service definition with the below command:

kubectl apply -f service.yaml 
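
You can confirm that the service exists and exposes port 8080:

kubectl -n srv get service servicetest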

Kubernetes deployment definition

The next step is to deploy the first version of our application. For this purpose I’ve created a dedicated Docker image whose source you can find here. Its purpose is simply to read and print the content of a selected environment variable. Let’s create a file named deploymentV1.yaml with the below content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: servicetest1
  namespace: srv
  labels:
    app: servicetest

spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicetest
      ver: "1"
  template:
    metadata:
      labels:
        app: servicetest
        ver: "1"
    spec:
      containers:
        - name: servicetest
          image: sz3jdii/service_test:latest
          ports:
            - containerPort: 8080
          env:
            - name: TEST_STRING
              value: service_test_A
          imagePullPolicy: Always

As in the previous steps, let’s apply the deployment definition with the below command:

kubectl apply -f deploymentV1.yaml 
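
Because the srv namespace carries the injection annotation, the pod should report two ready containers (the application plus the linkerd-proxy sidecar):

kubectl -n srv get pods -l app=servicetest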

Configure Flagger

A canary deployment is a deployment strategy that releases an application or service incrementally to a subset of users. All infrastructure in a target environment is updated in small phases (e.g. 2%, 25%, 75%, 100%). We have now deployed the first version of the application, so let’s prepare to deploy version 2 to our cluster in a canary fashion.

While Linkerd will be managing the actual traffic routing, Flagger automates the process of creating new Kubernetes resources, watching metrics and incrementally sending users over to the new version.

The first thing we need is a canary definition file which instructs Flagger to watch for changes in the selected resource (i.e. the deployment’s sources). The canaryDeployment.yaml source:

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: servicetest1
  namespace: srv
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: servicetest1
  service:
    port: 8080
  analysis:
    interval: 10s
    threshold: 5
    stepWeight: 10
    maxWeight: 100
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration
      thresholdRange:
        max: 500
      interval: 1m

As you can see in the above listing, we’ve specified which kind of Kubernetes resource (a Deployment in this example), as well as its name, Flagger has to watch. With stepWeight: 10, interval: 10s and maxWeight: 100, Flagger will shift traffic to the canary in 10% steps every 10 seconds until full promotion, as long as the request success rate stays above 99% and the request duration stays below 500 ms; after 5 failed checks (threshold: 5) the rollout is rolled back. Let’s apply this definition to the cluster.

kubectl apply -f canaryDeployment.yaml 
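
Once the Canary resource is applied, Flagger bootstraps the target: it creates a servicetest1-primary deployment and routes all traffic to it until a new version shows up. You can follow the canary status with:

kubectl -n srv get canary servicetest1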

To watch the changes in traffic, let’s create a debug pod in the cluster; from this point we’ll watch how the traffic shifts during the canary release. Let’s create a debug.yaml file with the below content:

apiVersion: v1
kind: Pod
metadata:
  name: tools
  namespace: srv
  labels:
    app: tools
spec:
  containers:
  - name: tools
    image: sz3jdii/tools
    command:
      - "sleep"
      - "604800"
    imagePullPolicy: IfNotPresent
    
  restartPolicy: Always

Apply this definition to our cluster:

kubectl apply -f debug.yaml

After creating this pod, let’s connect to it:

kubectl --namespace srv exec tools -it --container tools -- /bin/bash

Let’s check if we can properly connect to our V1 app from the debug pod:

for i in {1..10}; do curl servicetest1:8080 ; sleep 0.5; echo ""; done

You should see a response similar to this:

Version: service_test_A
Version: service_test_A
Version: service_test_A
Version: service_test_A
Version: service_test_A

Monitor the canary deployment from Linkerd

Flagger and Linkerd

Linkerd offers a great feature called the dashboard, which allows us to see graphically what’s going on in our mesh. To open it, please run the below command:

linkerd viz dashboard &

The command will automatically open your default web browser, and you should see a window similar to this:

Linkerd Dashboard
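
If you prefer the terminal, the viz extension can also show live stats for our namespace:

linkerd viz stat deploy -n srv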

Now it’s time to proceed with the deployment of V2. Compared to V1, only the TEST_STRING value changes; the deployment name and selector must stay the same so that Flagger treats this as an update of the watched deployment (a Deployment’s selector is immutable anyway). Let’s create a file called deploymentV2.yaml with the below content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: servicetest1
  namespace: srv
  labels:
    app: servicetest

spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicetest
      ver: "1"
  template:
    metadata:
      labels:
        app: servicetest
        ver: "1"
    spec:
      containers:
        - name: servicetest
          image: sz3jdii/service_test:latest
          ports:
            - containerPort: 8080
          env:
            - name: TEST_STRING
              value: service_test_B
          imagePullPolicy: Always

Apply this definition using the below command:

kubectl apply -f deploymentV2.yaml
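
You can follow the shifting weights from the command line as well as from the dashboard; Flagger keeps the Canary status and its generated TrafficSplit resource up to date:

kubectl -n srv get canary servicetest1
kubectl -n srv get trafficsplit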

The canary deployment now takes place automatically. First of all check the dashboard; you should see a similar view from the namespace perspective:

Linkerd traffic splits metrics

Take a close look at the Traffic Splits section; this is the most important one for us.

Now let’s generate some traffic to our service by running the below command from the debug pod. You should see responses from version B appearing more and more often.

for i in {1..10}; do curl servicetest1:8080 ; sleep 0.5; echo ""; done

Result:

Version: service_test_A
Version: service_test_B
Version: service_test_B
Version: service_test_B

Check how the dashboard view changes as well. Do you see how the weight (i.e. the percentage traffic split from V1 to V2) changes?

Linkerd traffic splits changes in canary deployment

Summary

As you can see, canary deployments with Linkerd are easy to set up and not overly complex. If you are interested in cloud topics, don’t forget to read my other articles.

I also encourage you to take a look at my Linkerd labs, which you can find here.