Background
Kubernetes is a great platform for deploying containerized applications. A typical production system makes use of a number of Docker images, and running and monitoring these manually is complicated. Kubernetes takes care of this task: it starts a given number of containers, monitors the health of each one, replaces failing containers with new ones, and provides a mechanism for managing secrets and configuration maps that container images can use.
Deployment to a Kubernetes cluster can be done by creating each Kubernetes object manually with the kubectl command line. This is, of course, not a feasible option for any production deployment.
An alternative is to define all configuration in code by declaring it in one or more YAML files. This is a better approach; however, it still requires that all Kubernetes objects are created in the right order, and it doesn't scale well to larger numbers of Kubernetes objects.
Enter Helm. It is a package manager for Kubernetes that provides functionality to define, install, update and delete all the Kubernetes objects in your application with a single command.
In this post, I will explain how to deploy a simple Kubernetes application. I will use Docker images that reside in a private Docker Hub repository. To authenticate with Docker Hub, my credentials will be stored in a Kubernetes secret.
Example Application Blueprint
We will deploy a simple Node.js web application called tiresias, stored in a private Docker Hub repository. Our deployment will include:
1) A Kubernetes Deployment with a replica set of 2 (two instances of our container will be running),
2) a Kubernetes Service to provide a single endpoint, and
3) an Ingress object to allow external traffic to reach the Service endpoint.
Our YAML file looks as below
apiVersion: v1
kind: Service
metadata:
  name: tiresias
  labels:
    app: tiresias
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: port-80
  selector:
    app: tiresias
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tiresias
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tiresias
  template:
    metadata:
      labels:
        app: tiresias
    spec:
      containers:
        - name: private-reg-container
          image: tiresias/web:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: regcred
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tiresius
spec:
  backend:
    serviceName: tiresias
    servicePort: 80
Before running the above YAML with the kubectl apply -f command, we need to create the secret named regcred. This is a rather simple application, but the dependency on the secret is an indication of how dependencies between Kubernetes objects make working only with "kubectl apply -f" a nightmare.
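For reference, creating that secret manually would look something like the following, using kubectl's built-in docker-registry secret type (the credentials shown are placeholders):
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=youruser \
  --docker-password=yourpassword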
Creating our Helm Chart
We create a new Helm chart called "auto-deploy" by executing
helm create auto-deploy
This creates a new "auto-deploy" folder with the following contents
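On Helm 2 the scaffolded chart typically looks like this (the exact contents may vary with your Helm version):
auto-deploy/
  Chart.yaml
  values.yaml
  charts/
  templates/
    deployment.yaml
    service.yaml
    ingress.yaml
    _helpers.tpl
    NOTES.txt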
Let's discuss each file and folder
- Chart.yaml - Contains information about the Helm chart such as API version, name, description, etc.
- values.yaml - Contains the values for each of the variables used in the templates.
- charts/ - Contains charts that the current chart depends on.
- templates/ - Contains the templates for deployment, which, along with the values.yaml file, generate the Kubernetes configuration.
We delete the contents of the templates directory so we can create our objects from scratch.
Chart.yaml
Before we start creating templates, let's set the information of our chart by setting its content to the following
apiVersion: v1
appVersion: "1.0"
description: A Helm Chart for Tiresias
name: auto-deploy
version: 0.1.0
values.yaml
For our simple application, the only variables we need relate to authentication with our private repository on Docker Hub. The contents of our values.yaml file are shown below
imageCredentials:
  name: dockerhub
  registry: https://index.docker.io/v1/
  username: TODO
  password: TODO
We have four items in the file: the name, the image registry URL, the username and the password. We do not want to put our username and password in a text file, so those variables are just placeholders; we will pass the real values as command line arguments.
Working with the Secret
To be able to download images from our private image repository, we need to set up a secret of type kubernetes.io/dockerconfigjson.
Kubernetes stores secrets as base64-encoded strings, so we need to base64-encode our username and password. We will write a template function for this.
We create two files in the templates directory:
1) secret.yaml
2) imagePullSecret.yaml
secret.yaml
The contents of secret.yaml are as below
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.imageCredentials.name }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecret" . }}
Let's dissect the contents a bit. We are creating a simple Secret object. The name is set to the value of the variable imageCredentials.name. The data is set using the template imagePullSecret, which is detailed below.
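Once the chart is installed (we will get to that shortly), you can inspect the rendered secret with kubectl, using the documented jsonpath escape for the dotted key (the secret name dockerhub comes from our values.yaml):
kubectl get secret dockerhub --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode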
imagePullSecret.yaml
In the template file imagePullSecret.yaml, we define a function that spits out the base64-encoded string for the username and password. The contents of the file are the text below
{{- define "imagePullSecret" }}
{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" .Values.imageCredentials.registry (printf "%s:%s" .Values.imageCredentials.username .Values.imageCredentials.password | b64enc) | b64enc }}
{{- end }}
The function takes the values of the imageCredentials.registry, imageCredentials.username and imageCredentials.password variables, base64-encodes the username:password pair with the b64enc function, embeds it in a Docker config JSON document, and then base64-encodes that whole document again (note that the encoding happens twice) before printing it out.
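To make the double encoding concrete, here is what the intermediate values would look like for the placeholder credentials youruser / yourpassword:
printf 'youruser:yourpassword' | base64
# eW91cnVzZXI6eW91cnBhc3N3b3Jk
# The inner printf then produces this JSON, which is base64-encoded once more
# to become the value of .dockerconfigjson:
# {"auths": {"https://index.docker.io/v1/": {"auth": "eW91cnVzZXI6eW91cnBhc3N3b3Jk"}}}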
Deployment, Service and Ingress
Now that the secret is configured correctly, the next task is to set up our other three Kubernetes objects, i.e. the Deployment, Service and Ingress. The definition of the objects is the same as in the YAML file above, except that we break it down into three different YAML files.
deployment.yaml
The content of the deployment.yaml file is below. Note the use of a variable to reference the name of the image pull secret.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tiresias
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tiresias
  template:
    metadata:
      labels:
        app: tiresias
    spec:
      containers:
        - name: tiresias-dockerhub
          image: tiresias/web:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: {{ .Values.imageCredentials.name }}
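Once the chart is installed (covered below), a quick way to confirm that the image pull secret worked and that the Deployment came up is:
kubectl rollout status deployment/tiresias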
service.yaml
The content of the service.yaml file is as below.
apiVersion: v1
kind: Service
metadata:
  name: tiresias-svc
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: port-80
  selector:
    app: tiresias
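The Service finds its pods via the app: tiresias selector; after installation you can confirm that it actually matched the Deployment's pods by listing its endpoints:
kubectl get endpoints tiresias-svc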
ingress.yaml
The content of the ingress.yaml file is as below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tiresius
  annotations:
    http.port: "443"
spec:
  backend:
    serviceName: tiresias-svc
    servicePort: 80
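With all the templates in place, it is worth rendering the chart locally before installing it. On Helm 2, a dry run with debug output prints the generated manifests without touching the cluster:
helm install ./auto-deploy --dry-run --debug --set imageCredentials.username=youruser,imageCredentials.password=yourpassword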
Executing our Helm chart
Using Helm to deploy the Kubernetes objects simply involves executing the helm install command. Since we have a couple of variables whose values need to be provided as command line arguments, our command to execute our chart looks like below
helm install ./auto-deploy --set imageCredentials.username=youruser,imageCredentials.password=yourpassword
As you can see, it is a breeze to work with. The output looks something like the following
NAME: veering-panther
LAST DEPLOYED: Wed Jul 25 12:32:22 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME TYPE DATA AGE
dockerhub kubernetes.io/dockerconfigjson 1 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tiresias-svc ClusterIP 10.106.163.221 <none> 80/TCP 0s
==> v1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
tiresias 2 0 0 0 0s
==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
tiresius * 80 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
tiresias-646dbdc7b9-b9p7z 0/1 Pending 0 0s
tiresias-646dbdc7b9-h8w2k 0/1 Pending 0 0s
You can see that a couple of Kubernetes pods have been created and are in the Pending state right now. If you want to see the current installation, you can run the following command
helm list
NAME             REVISION  UPDATED                   STATUS    CHART              NAMESPACE
veering-panther  1         Wed Jul 25 12:32:22 2018  DEPLOYED  auto-deploy-0.1.0  default
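The pods should move from Pending to Running once the image pull through our secret succeeds; you can watch the transition with:
kubectl get pods --watch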
Note the name given by Helm to your installation. If you want to delete all the Kubernetes objects, the simple Helm command to do it is
helm delete veering-panther
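Note that on Helm 2, helm delete keeps the release history around so it can be rolled back; to remove everything, including the history, add the --purge flag:
helm delete --purge veering-panther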
Conclusion
In this post, I explained how to use Helm to deploy a simple containerized application. For a production application of any size, a single Helm command is much simpler than running a series of kubectl commands. I hope you find this post useful. Please feel free to leave your comments and feedback.