Wednesday 25 July 2018

Using Helm to deploy to a Kubernetes cluster, pulling images from a private container registry

Background


Kubernetes is a great platform for deploying containerized applications. A typical production system makes use of a number of Docker images, and running and monitoring these images manually is complicated. Kubernetes takes care of this task: it starts up a given number of containers, monitors the health of each one, replaces failing containers with new ones, and provides a mechanism for managing secrets and configuration maps that container images can use.

Deployment to a Kubernetes cluster can be done by creating each Kubernetes object manually with the kubectl command line. This is of course not a feasible option for any production deployment.

An alternative is to define all configuration in code by declaring the objects in one or more YAML files. This is a better approach; however, it still requires that all Kubernetes objects are created in order, and it doesn't scale well for larger numbers of Kubernetes objects.

Enter Helm. It is a package manager for Kubernetes and provides the functionality to define, install, update and delete all the Kubernetes objects in your application with a single command.

In this post, I will explain how to deploy a simple Kubernetes application. I will use Docker images that reside in a private Docker Hub repository. To authenticate with Docker Hub, my credentials will be stored in a Kubernetes secret.


Example Application Blueprint


We will deploy a simple Node.js web application called tiresias, stored in a private Docker Hub repository. Our deployment will include:

1) A Kubernetes Deployment with a replica set of 2 (two instances of our container will be running).

2) A Kubernetes Service to provide a single endpoint, and

3) An Ingress object to allow external traffic to reach the Service endpoint.

Our YAML file looks as below

apiVersion: v1
kind: Service
metadata:
  name: tiresias
  labels:
    app: tiresias
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: port-80
  selector:
    app: tiresias
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tiresias
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tiresias
    spec:
      containers:
        - name: private-reg-container
          image: tiresias/web:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: regcred
  selector:
    matchLabels:
      app: tiresias
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tiresius
spec:
  backend:
    serviceName: tiresias
    servicePort: 80

Before running the above YAML with the kubectl apply -f command, we need to create the secret named regcred. Now this is a rather simple application, but the dependency on a secret is an indication of how dependencies between Kubernetes objects make working only with "kubectl apply -f" a nightmare.
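For reference, a secret of this kind can be created ahead of time with a one-off kubectl command along the following lines (all the values are placeholders to substitute with your own):

kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=yourusername --docker-password=yourpassword --docker-email=youremail

Later in this post, we will let Helm create an equivalent secret from a template instead.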


Creating our Helm Chart

We create a new Helm chart called "auto-deploy" by executing 

helm create auto-deploy

This creates a new "auto-deploy" folder with the following contents
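For a chart scaffolded with Helm 2 (current at the time of writing), the layout looks roughly like this:

auto-deploy/
    Chart.yaml
    charts/
    templates/
        NOTES.txt
        _helpers.tpl
        deployment.yaml
        ingress.yaml
        service.yaml
    values.yaml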



Let's discuss each file and folder
  • Chart.yaml - Contains information about the Helm chart such as the API version, name, description, etc.
  • values.yaml - Contains the values for each of the variables that are used in the templates
  • charts/ - Contains any charts on which the current chart depends
  • templates/ - Contains the templates which, along with the values.yaml file, generate the Kubernetes configuration
We delete the contents of the templates directory to create our objects from scratch.

Chart.yaml

Before we start creating templates, let's set the information of our chart by setting its content to the following text

apiVersion: v1
appVersion: "1.0"
description: A Helm Chart for Tiresias
name: auto-deploy
version: 0.1.0

values.yaml

For our simple application, the only variables we need are related to authentication with our private repository on Docker Hub. The contents of our values.yaml file are shown below

imageCredentials:
  name: dockerhub
  registry: https://index.docker.io/v1/
  username: TODO
  password: TODO

We have four items in the file: the secret name, the image registry URL, the user name and the password. We do not want to put our username and password in a text file, so those two variables are just placeholders. We will pass the real values as command line arguments.


Working with the Secret


To be able to download images from our private image repo, we need to set up a secret of type kubernetes.io/dockerconfigjson.

Kubernetes stores secrets as base64-encoded strings, so we need to base64-encode our username and password. We will write a template function for this.

We create two files in the templates directory

1) secret.yaml
2) imagePullSecret.yaml


secret.yaml

The contents of secret.yaml are as below

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.imageCredentials.name }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecret" . }}

Let's dissect the contents a bit. We are creating a simple Secret object. The name is set to the value of the variable imageCredentials.name. The data is set using the template imagePullSecret, which is detailed below.

imagePullSecret.yaml

In the template file imagePullSecret.yaml, we define a function that emits the base64-encoded Docker config for our username and password. The contents of the file are the text below

{{- define "imagePullSecret" }}
{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" 
.Values.imageCredentials.registry (printf "%s:%s" 
.Values.imageCredentials.username 
.Values.imageCredentials.password | b64enc) | b64enc }}
{{- end }}

The template takes the values of imageCredentials.registry, imageCredentials.username and imageCredentials.password, uses the b64enc function to encode the username:password pair, embeds the result in the Docker config JSON, and base64-encodes that whole document before printing it out.
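To make the encoding concrete, suppose the hypothetical credentials jane / secret against the default registry URL. The inner printf produces jane:secret, the first b64enc turns that into amFuZTpzZWNyZXQ=, and the outer b64enc encodes the resulting JSON document, which decoded looks like this:

{"auths": {"https://index.docker.io/v1/": {"auth": "amFuZTpzZWNyZXQ="}}}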


Deployment, Service and Ingress


Now that the secret is configured correctly, the next task is to set up our other three Kubernetes objects, i.e. the Deployment, Service and Ingress. The definitions are the same as in the YAML file above, except that we break them into three separate YAML files.

deployment.yaml

The content of the deployment.yaml file is below. Note the use of a variable to reference the name of the image pull secret.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tiresias
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tiresias
  template:
    metadata:
      labels:
        app: tiresias
    spec:
      containers:
        - name: tiresias-dockerhub
          image: tiresias/web:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: {{ .Values.imageCredentials.name }}

service.yaml

The content of the service.yaml file is as below.
apiVersion: v1
kind: Service
metadata:
  name: tiresias-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: port-80
  selector:
    app: tiresias

ingress.yaml

The content of the ingress.yaml file is as below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tiresius
  annotations:
    http.port: "443"
spec:
  backend:
    serviceName: tiresias-svc
    servicePort: 80
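Before installing anything, it is worth sanity-checking the rendered manifests. With Helm 2, a dry run prints the generated YAML without touching the cluster (the credentials here are placeholders):

helm install ./auto-deploy --dry-run --debug --set imageCredentials.username=jane,imageCredentials.password=secret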


Executing our Helm chart

Using Helm to deploy Kubernetes objects simply involves executing the helm install command. Since we have a couple of variables whose values need to be provided as command line arguments, our command to execute our chart looks like below
helm install ./auto-deploy --set imageCredentials.username=youruser,imageCredentials.password=yourpassword
As you can see, it is a breeze to work with. The output looks something like the following
NAME:   veering-panther
LAST DEPLOYED: Wed Jul 25 12:32:22 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME       TYPE                            DATA  AGE
dockerhub  kubernetes.io/dockerconfigjson  1     0s

==> v1/Service
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)  AGE
tiresias-svc  ClusterIP  10.106.163.221  <none>       80/TCP   0s

==> v1/Deployment
NAME      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
tiresias  2        0        0           0          0s

==> v1beta1/Ingress
NAME      HOSTS  ADDRESS  PORTS  AGE
tiresius  *      80       0s

==> v1/Pod(related)
NAME                       READY  STATUS   RESTARTS  AGE
tiresias-646dbdc7b9-b9p7z  0/1    Pending  0         0s
tiresias-646dbdc7b9-h8w2k  0/1    Pending  0         0s


You can see that a couple of Kubernetes pods are set up and are in the Pending state right now. If you want to see your current installations, you can run the following command

helm list
The output looks like the following

NAME             REVISION  UPDATED                   STATUS    CHART              NAMESPACE
veering-panther  1         Wed Jul 25 12:32:22 2018  DEPLOYED  auto-deploy-0.1.0  default

Note the name given by Helm to your installation. If you want to delete all the Kubernetes objects, the simple Helm command to do it is

helm delete veering-panther
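Similarly, after changing the chart or its values, upgrading the existing release is a single command (again with placeholder credentials):

helm upgrade veering-panther ./auto-deploy --set imageCredentials.username=jane,imageCredentials.password=secret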

Conclusion


In this post, I explained how to use Helm to deploy a simple containerized application. For a production application of any size, running a single Helm command is much simpler and easier than running a series of kubectl commands. I hope you find this post useful. Please feel free to leave your comments and feedback.

Tuesday 17 July 2018

Using VSTS Release Management to create a Continuous Delivery pipeline for a Kubernetes service

Introduction


My last blog post was about deploying my dockerized web application to Azure Kubernetes Service. In that post, I explained how to manually deploy docker images to a Kubernetes cluster on AKS. Though deploying manually might be a fulfilling experience, it is hardly something that we would do for an actual production system. Rather, we would want to create a deployment pipeline that automates the entire process.

In this blog post, we will use VSTS Release Management to create a release and a deployment pipeline for deploying to a Kubernetes cluster on AKS.


Creating a Release


The first step is to create a new release. From the "Build and release" menu on your VSTS project, click on the tab "Releases". Click the + button and select "Create release definition".



We are taken to the release definition page. From the template, select "Deploy to a Kubernetes Cluster".


For the names, we named our environment "dev" and gave our release the name "DevRelease". Our deployment pipeline at this stage looks like a blank slate, as shown below.


Selecting Artifacts


The inputs to our deployment pipeline are two artifacts:

1) The docker image of our web application. 

This is produced by our VSTS builds, as described in the post "Creating CI and Deployment builds for docker images". Our build pushed our images out to Docker Hub, so we will set up our pipeline to download them from there.

2) A YAML file defining our Kubernetes service. 

Our YAML file looks like the following and was already described in the post "Deploying your dockerized application to Azure Kubernetes Service". We saved it in a file named oneservice.yaml and committed it to our source Git repository.

apiVersion: v1
kind: Service
metadata:
  name: aspnetvuejs
  labels:
    app: aspnetvuejs
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: port-80
  selector:
    app: aspnetvuejs
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aspnetvuejs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: aspnetvuejs
    spec:
      containers:
        - name: private-reg-container
          image: aspnetvuejs/web:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: dockerhubcred


To select the docker image artifact, click on the "+ Add" button in the Artifacts section. From the list, click option "Docker Hub".




From the list of options, select the service endpoint for your Docker Hub repository. If it's not already set up, click on the Manage button to set it up. Select the Namespace and Repository, and give your source an alias. Click the Add button to add your artifact.

Your application is now shown in the list of Artifacts.

Now we need to add the YAML file containing the definition of our service. To do this, click the "+ Add" button in the Artifacts section again. This time, select the option "Git". From the options displayed, select your team project and the Git repository. For the branch, we selected "master", and for the default version we selected the latest. Our final selection for Git looks as following



Click on the Add button to add the artifact. Now both input artifacts to our deployment pipeline are set up.


Setting Deployment


The next step is to configure our deployment process. Since we selected the template "Deploy to a Kubernetes cluster", we already have a task set up in our deployment process, as shown by the "1 phase, 1 task" link. Click on the link to view the details of the deployment process. We have one task in place to run kubectl apply.



For the task arguments, we need to provide a Kubernetes Service connection. To do this, we click on the + New button.



1) We give our connection the name "dev".

2) Set the Server URL. You can find it by visiting the properties page of your Kubernetes service on the Azure portal.


3) To get your KubeConfig details, type in the following in your Cloud Shell session

az aks get-credentials --resource-group kubedemo --name kubedemo
Your configuration is copied to the .kube/config file. Copy the contents of the file and paste them into the KubeConfig field of the connection dialog.

4) Select option "Accept Untrusted Certificate".
5) Click on Verify Connection to verify your connection. The deployment task is now ready to deploy to our Kubernetes cluster.


We will keep our kubectl command as apply and select the "Use configuration files" checkbox so that we can pass the YAML file from our Git repository.

For the Configuration file option, click on the ellipsis button. From the Linked Artifacts, select the Git artifact and select the YAML file we pushed out to our Git source code repository earlier. Click OK and our deployment process is ready for action.

The final and most important step is to save our release definition by clicking on the Save button.


Running Deployment


At this stage, we have set up a release definition, but nothing has been deployed to an environment yet. To start a deployment, go back to the release definitions view by clicking on the Releases link. Our new release definition is shown in the list. To deploy, we click on the ellipsis next to it and click "+ Release". A dialog appears asking for the target environment and the version of our artifacts. Select the environment "dev" and the latest commit on the Git repository. Click on the Create button to deploy to our Kubernetes Service.




Conclusion

This is the final post of my series about creating Docker images and deploying them to a Kubernetes cluster on Azure Kubernetes Service. Across the series, I explained how to build optimized Docker images, set up CI and deployment builds for them, deploy them manually to AKS, and automate that deployment with VSTS Release Management.

Monday 16 July 2018

Deploying your dockerized application to Azure Kubernetes Service

Introduction


My last two blog posts were about creating Docker images for an ASP.NET/JavaScript web application. In the first post, I described considerations for producing an optimized image. The second post was about creating a CI/CD process for producing Docker images.

Though producing nice lean Docker images is good karma, they need to run on a container orchestration system to be used in a production environment.

The three most popular container orchestration systems are Kubernetes, Mesosphere and Docker Swarm. Of these three, Kubernetes is arguably the most popular, and we are going to use it to run our containers. Running your own Kubernetes cluster in a production environment is a considerable undertaking. Luckily, all major cloud vendors provide Kubernetes as a service, e.g. Google Cloud has Google Kubernetes Engine, Amazon AWS provides the Elastic Container Service for Kubernetes (Amazon EKS) and Microsoft Azure has the Azure Kubernetes Service (AKS). We are going to use Azure Kubernetes Service to run our containers.

Provisioning an Azure Kubernetes Service


1) Log on to the Azure portal at https://portal.azure.com. In the search text box, type in "Kubernetes". From the results listed, click on Kubernetes Services to see the list of Kubernetes services.


2) Click on the Add button to create a new Kubernetes service. We are just going to use the default options. Fill in the details of your Kubernetes cluster and click the Review + create button.


Review the settings and click the Create button to create your AKS cluster. It takes a few minutes for a fully configured cluster to be set up.

Once the cluster is set up, you can view the Kubernetes dashboard by clicking on the "View Kubernetes dashboard" link and following the steps on the page displayed. If you are not familiar with the Kubernetes dashboard, it is a web-based interface that displays all Kubernetes resources.
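At the time of writing, the steps on that page boil down to a single Azure CLI command of roughly this shape (assuming the resource group and cluster are both named kubedemo, as elsewhere in this series):

az aks browse --resource-group kubedemo --name kubedemo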


Create a Kubernetes Secret


To enable our Kubernetes cluster to download images from a private Docker Hub repository, we will set up a Kubernetes secret containing the credentials for our Docker Hub repo. To do this, click on the ">_" button to open a Cloud Shell session.

To create the secret, run the following command


kubectl create secret docker-registry dockerhubcred --docker-server=https://index.docker.io/v1/ --docker-username=yourusername --docker-password=yourpassword --docker-email=youremailaddress


We can check that the new secret exists by executing

kubectl get secret
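The output should list the new secret with the expected type, along these lines:

NAME            TYPE                             DATA   AGE
dockerhubcred   kubernetes.io/dockerconfigjson   1      10s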

Create a Kubernetes Service


In Kubernetes, containers are "housed" in pods. However, pods are mortal and can be recreated at any time. Therefore, the endpoints of containers are exposed through an abstraction called a Kubernetes Service.

Creating a Kubernetes Service is a two-stage process:

1) Create a Kubernetes Deployment: a Deployment specifies a template, which includes details of the Docker image, ports, etc., the replication details, i.e. how many pods should be deployed, as well as metadata about how pods are selected.

2) Create a Kubernetes Service: a Service is deployed by specifying the selection criteria, the port to be exposed and the service type.

To perform the above-mentioned steps, we create the following YAML file.



apiVersion: v1
kind: Service
metadata:
  name: aspnetvuejs
  labels:
    app: aspnetvuejs
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: port-80
  selector:
    app: aspnetvuejs
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aspnetvuejs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: aspnetvuejs
    spec:
      containers:
        - name: private-reg-container
          image: aspnetvuejs/web:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: dockerhubcred


To run the YAML file, execute the following series of commands in your Cloud Shell session:

1) Create the YAML file by executing
vim oneservice.yaml
Copy the YAML content above and save the file.

2) Create the service and deployment by running
kubectl apply -f oneservice.yaml

The service and deployment are now created. To view your service, type in the following
kubectl get service

The public IP address of the service is displayed in the list of services shown in the result. The deployed web application can be viewed by typing the public IP address into a browser.
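You can also watch the pods themselves come up; for instance, using the label from the YAML above:

kubectl get pods -l app=aspnetvuejs

Both replicas should move from Pending to Running once the image has been pulled from Docker Hub using the secret.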


Conclusion and next steps


In this post, we created a Kubernetes service by declaring Kubernetes objects in a YAML file and applying them using Cloud Shell. This is useful for explaining the steps and understanding what is involved. In reality, however, we would want to do this as part of a well-defined deployment process. In my next post, I will explain how to deploy to a Kubernetes service using VSTS Release Management.

Friday 13 July 2018

Creating CI and Deployment builds for docker images

In my last blog post, I wrote about the steps to create a Docker container for running ASP.NET/JavaScript services. In that post, we created a Dockerfile that produces a Docker image with a website built on Vue.js and ASP.NET Core.

In real life, we would like our Docker images to be pushed out to a container registry through a team build, and we would want a CI build in place so that every commit is vetted. This blog post details setting up the CI and deployment builds for producing Docker images.

Dockerfile

To start with, let's review the Dockerfile we created in the last post. I have modified it slightly to parameterize the exposed port.

# Stage 1 - Restoring & Compiling
FROM microsoft/dotnet:2.1-sdk-alpine3.7 as builder
WORKDIR /source
RUN apk add --update nodejs nodejs-npm
COPY *.csproj .
RUN dotnet restore
COPY package.json .
RUN npm install
COPY . .
RUN dotnet publish -c Release -o /app/

# Stage 2 - Creating Image for compiled app
FROM microsoft/dotnet:2.1.1-aspnetcore-runtime-alpine3.7 as baseimage
ARG port=8080
RUN apk add --update nodejs nodejs-npm
WORKDIR /app
COPY --from=builder /app .
ENV ASPNETCORE_URLS=http://+:${port}

EXPOSE ${port}
CMD ["dotnet", "vue2spa.dll", "--server.urls", "http://+:${port}"]

The Dockerfile creates a Docker image that can be pulled and deployed in any environment.
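If you want to verify the image locally before wiring it into a build, a quick smoke test looks something like this (the image name and port are just examples):

docker build --build-arg port=8080 -t sampleaspjs/web .
docker run --rm -p 8080:8080 sampleaspjs/web

Browsing to http://localhost:8080 should then show the site.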

Setting up the CI Build


We will use the new YAML build feature in VSTS to set up our CI and deployment builds. Our YAML CI build file has a single step that builds the docker image. The publishing of the website is performed within the docker build process, as described in the Dockerfile above.

Our very simple YAML file looks like the following

name: $(BuildDefinitionName)_$(Date:yyyyMMdd)$(Rev:.rr)
steps:
  - script: |
      docker build --build-arg port=8080 --rm --compress -t sampleaspjs/web .
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Build Docker Image

We name the YAML file .vsts-ci.yml, commit it and push it out to our Git repository.

Now that the YAML file is in our code repo, let's go and set up the CI build.
  1. From the Build Definitions page, click on the "+ New" button to create a build definition.
  2. Set the source to VSTS Git, and select your team project, the repository and the default branch of master.
  3. From the list of templates, select the option YAML under the Configuration as Code classification and click Apply.
  4. In the build definition, type in the build name, select the "Hosted Linux Preview" queue and select the YAML path. Make sure Continuous Integration is enabled. Save the build definition.

Now that we have the CI build, let's turn our attention to creating the deployment build.

Setting up the product build


The steps to create our product build are similar to those of the CI build, except that we will have a different YAML file. Our build process goes one step further: in addition to creating the docker image, it also pushes the docker image out to Docker Hub.

Our YAML file looks like the following

name: $(BuildDefinitionName)_$(Date:yyyyMMdd)$(Rev:.rr)
steps:
  - script: |
      docker build --build-arg port=8080 --rm --compress -t tiresias/web .
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Build Docker Image

  - script: |
        docker tag tiresias/web tiresias/web:$(Build.BuildNumber)
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Tag docker image with build version

  - script: |
        docker login --username your-username --password your-password
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Docker Login

  - script: |
        docker push tiresias/web
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Docker Push
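
One caveat: the login step above embeds credentials in plain text. VSTS supports secret variables on a build definition, so a safer sketch of the same step would reference those instead (dockerId and dockerPassword are assumed variable names that you would define yourself):

  - script: |
        docker login --username $(dockerId) --password $(dockerPassword)
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Docker Login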

We name the YAML file .vsts-build.yml, commit it and push it out to our Git repository.

The process to set up the deployment build is the same as above.
  1. From the Build Definitions page, click on the "+ New" button to create a build definition.
  2. Set the source to VSTS Git, and select your team project, the repository and the default branch of master.
  3. From the list of templates, select the option YAML under the Configuration as Code classification and click Apply.
  4. In the build definition, type in the build name, select the "Hosted Linux Preview" queue and select the YAML path. Make sure Continuous Integration is enabled. Save the build definition.
With the product build in place, we have a mechanism for creating docker images and pushing them out to a container registry.

Next steps

In this post, I described the steps to create CI and build processes for verifying, creating and distributing docker images. In my next post, I will deploy the docker images to a Kubernetes cluster created on Azure Kubernetes Service.