Wednesday 8 August 2018

The Curious Case of Shared Libraries in Microservice Architecture

Unless you have been living under a rock for the last few years, you will have heard of Microservices. If you don't quite know what they are: Microservices are autonomous services with a bounded context; they are fully self-contained and have their own release cadence.
In other words, Microservices can be created, deployed and released independently. Their independence enables quick turnaround. Teams can release one version after another of an ever-improving service without depending upon or worrying about anybody else. A microservice essentially owns its own data store, CI/CD process, deployment and everything that goes with it. A team can choose its own technology stack and build something independent of other builds.

What about "code reuse"

Reusability has always been a fundamental principle in software engineering, and one way to achieve it is code reuse. Most often this is done using code libraries. In the last 20 years, I have seen an evolution of technology for creating and consuming code libraries: from C++ libraries to COM and ActiveX objects to JAR files and .NET libraries to NuGet packages and Node modules. Regardless of technology, common libraries have always been around and using them has been a desirable thing.

However, the general default advice in a Microservice architecture is
"Don’t share common libraries between Microservices." 
Why? Because common libraries create coupling.

As an example, consider a team, let's say "Team A", that created a brilliant library to make web requests while building "Microservice A". Another team, "Team B", started creating "Microservice B" and decided to use Team A's library to make web requests. All good so far. In the second iteration, Team B realized that they needed to enhance the web request library. Now, they would either need Team A to update it for them or have to change it themselves. If they change the library, it would have an impact on "Microservice A" as well. This is undesirable.

Are Shared Libraries between Microservices a definite No-no?

Well… there are differing opinions on this. There is an apparent conflict between the Don't Repeat Yourself (DRY) and Loose Coupling principles. Some pundits argue overwhelmingly against using any shared libraries, while others consider that bad advice. There is plenty of discussion on the topic to sway you one way or another.

Here is my take on it 

Using shared libraries between Microservices is fine as long as the following conditions are met:

1. Not Domain Specific:
The common library should NOT contain any domain-specific code. In other words, it is OK to use code libraries to reuse technical code, e.g. common code to make HTTP requests. If you think about it, we do this already by using platform libraries for things like database connectivity, file I/O, etc. However, if your shared library contains business logic, then it will cause problems. Remember, microservices use a bounded context.

2. Use Package Manager:
The shared libraries must be published and consumed using a package manager. Package managers such as NPM or NuGet ensure that consumers depend on a particular version of a library. It means that when a newer version of a library is published, a consuming service doesn't need to update unless it wants to use functionality in the newer version, as the sketch below shows.
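As a quick sketch of what this looks like with NPM (common-http-lib is a hypothetical package name), the consuming service pins itself to an exact version and upgrades only when it chooses to:

# Pin the service to an exact version of the shared library
# (common-http-lib is a hypothetical package name)
npm install common-http-lib@1.2.3 --save-exact

# Later, upgrade deliberately, on the service's own schedule
npm install common-http-lib@2.0.0 --save-exact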

3. Small library with Single Responsibility:
Over the years, I have seen the menace of "core" or "common" libraries, where any reusable code segment is put into a single common library. The problem with that is that the common library touches far too many points and always needs updating. Shared libraries between microservices must be small and do a single job.

4. Avoid Transitive Dependencies:
If your common library depends on another library, which in turn depends on yet another library, you end up with a chain of dependencies that is difficult to map and maintain. In this case, I would say it's better to duplicate the code than to create a dependency hell of inter-dependent libraries.

Ownership Dilemma

As I mentioned, microservices are fully autonomous. The team who creates them owns them and is responsible for everything about them, from the choice of technology to the deployment and release process. What about shared libraries? If "Team A" creates a shared library that is used by "Team B", which team owns the library? This is indeed a dilemma, especially in cases where the shared library needs to be modified. If Team B needs a change to be made, should they make it themselves or ask Team A to make it? In either case, there is a dependency between teams.

From experience, my stance is that the team who writes the shared library owns it. If they require a change in the library for their own service, they should make the change. However, if other teams want changes in the shared library, they should instead drop the dependency, duplicate the code and do it within their service (or create another library). I agree that this causes effort duplication, but in my experience it works much better than modifying a library for a purpose it wasn't written for in the first place.

Then there is the Composite

Microservices do not have dependencies on other microservices. To provide business value, there has to be something that brings the services together. It could be a web page for a web application, a form for a Windows app or an API gateway for public web APIs. Regardless of its type, it acts as a composite. How are shared libraries resolved in a composite? As an example, let's say that services A and B both use CommonLib, but each depends on a different version of it. The answer to this question is within the definition of a microservice itself. Microservices are self-contained and bring in and deploy everything they need to run. Even if Service A and B are merely separate processes on the same machine, they load different versions of the common library into their own process spaces. The composite does not need to worry about the internal intricacies of services.

And finally

As a disclaimer, this post is purely my own opinion, based on my experience of working with microservices and decomposing monoliths. If you follow this general guidance, you will find yourself with a large number of very small libraries, which are not changed very often. Both of these traits are good things. Also, from my experience, it is easier to follow these rules if you are starting fresh. If you are starting with a huge monolith, you will most certainly start with a huge common library. Getting clarity on dependencies while working backwards needs finer skills.

Wednesday 25 July 2018

Using Helm to deploy to a kubernetes cluster pulling images from a private container registry

Background


Kubernetes is a great platform for deploying containerized applications. A typical production system makes use of a number of docker images, and running and monitoring these images manually is complicated. Kubernetes takes care of this task. It starts up a given number of containers, monitors the health of each container, creates new containers for each failing container and has a mechanism for managing secrets and configuration maps that can be used by container images.

Deployment to a Kubernetes cluster can be done by creating each k8s object manually using the kubectl command line. This is of course not a feasible option for any production deployment.
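As an illustration of why, here is a sketch of what deploying even a single web application manually looks like (my-app and myregistry/my-app are placeholder names):

# Create a deployment from a docker image
kubectl create deployment my-app --image=myregistry/my-app:1.0

# Expose the deployment as a service
kubectl expose deployment my-app --port=80 --target-port=8080

# Scale to the desired number of replicas
kubectl scale deployment my-app --replicas=2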

An alternative is to define all configuration in code by declaring it in one or more YAML files. This is a better approach; however, it still requires that all kubernetes objects are created in order, and it doesn't scale well for larger numbers of kubernetes objects.

Enter Helm. It is a package manager for kubernetes and provides functionality to define, install, update and delete all kubernetes objects in your application with a single command.

In this post, I will explain how to deploy a simple kubernetes application. I will use docker images that reside in a private Dockerhub repository. To authenticate with Dockerhub, my credentials will be stored in a kubernetes secret.


Example Application Blueprint


We will deploy a simple Node.js web application called tiresias. It's stored in a private Dockerhub repository. Our deployment will include

1) A Kubernetes deployment with a replica-set of 2 (two instances of our containers will be running). 

2) A Kubernetes Service to provide a single endpoint, and

3) An Ingress object to allow external traffic to hit the Service endpoint. 

Our YAML file looks as below

apiVersion: v1
kind: Service
metadata:
  name: tiresias
  labels:
    app: tiresias
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: port-80
  selector:
    app: tiresias
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tiresias
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tiresias
    spec:
      containers:
        - name: private-reg-container
          image: tiresias/web:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: regcred
  selector:
    matchLabels:
      app: tiresias
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tiresius
spec:
  backend:
    serviceName: tiresias
    servicePort: 80

Before running the above yaml with the kubectl apply -f command, we need to create the secret named regcred. Now this is a rather simple application, but the dependency on the secret is an indication of how dependencies between kubernetes objects make working only with "kubectl apply -f" a nightmare.
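For reference, the secret can be created with a single kubectl command like the following (username, password and email are placeholders):

kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=yourusername --docker-password=yourpassword --docker-email=youremailaddress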


Creating our Helm Chart

We create a new Helm chart called "auto-deploy" by executing 

helm create auto-deploy

This creates a new "auto-deploy" folder with the following contents



Let's discuss each file and folder
  • Chart.yaml - Contains information about the Helm chart, such as the API version, name, description, etc.
  • values.yaml - Contains values for each of the variables that are used in the templates
  • charts/ - Contains charts on which the current chart depends
  • templates/ - Contains templates for deployment which, along with the values.yaml file, generate the Kubernetes configuration
We delete the contents of the templates directory to create our objects from scratch.
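A quick way to do this from the shell, assuming we are in the folder where the chart was created:

rm -rf auto-deploy/templates/*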

Chart.yaml

Before we start creating templates, let's set the information of our chart by setting the content of Chart.yaml to the following

apiVersion: v1
appVersion: "1.0"
description: A Helm Chart for Tiresias
name: auto-deploy
version: 0.1.0

values.yaml

For our simple application, the only variables we need are related to authentication with our private repository on Dockerhub. The contents of our values.yaml file are shown below

imageCredentials:
  name: dockerhub
  registry: https://index.docker.io/v1/
  username: TODO
  password: TODO

We have four items in the file: the name, the image registry URL, the username and the password. We do not want to put our username / password in the text file, so those variables are just placeholders. We will pass the values as command line arguments.


Working with Secret


To be able to download images from our private image repo, we need to set up a secret of type kubernetes.io/dockerconfigjson.

Kubernetes stores secrets as base64 encoded strings, so we need to base64 encode our username / password. We will write a template function for it.
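To illustrate what the function needs to produce, here is the same encoding done by hand in a shell, with placeholder credentials:

# Inner value: base64 of "username:password"
AUTH=$(echo -n 'yourusername:yourpassword' | base64)

# Outer value: base64 of the whole dockerconfigjson document
echo -n "{\"auths\": {\"https://index.docker.io/v1/\": {\"auth\": \"$AUTH\"}}}" | base64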

We create two files in the templates directory

1) secret.yaml
2) imagePullSecret.yaml


secret.yaml

The contents of secret.yaml are as below

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.imageCredentials.name }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecret" . }}

Let's dissect the contents a bit. We are creating a simple Secret object. The name is set to the value of the variable imageCredentials.name. The data is set using the template imagePullSecret, which is detailed below.

imagePullSecret.yaml

In the template file imagePullSecret.yaml, we define a function that spits out the base64 encoded string for the username and password. The content of the file is the text below

{{- define "imagePullSecret" }}
{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" 
.Values.imageCredentials.registry (printf "%s:%s" 
.Values.imageCredentials.username 
.Values.imageCredentials.password | b64enc) | b64enc }}
{{- end }}

The function takes the values of the variables imageCredentials.registry, imageCredentials.username and imageCredentials.password, uses the b64enc function to generate base64 encoded strings and prints the result out.


Deployment, Service and Ingress


Now that the secret is configured correctly, the next task is to set up our other three kubernetes objects, i.e. Deployment, Service and Ingress. The definition of our objects is the same as in the YAML file above, except that we break it down into three different yaml files

deployment.yaml

The content of the deployment.yaml file is below. Note the use of a variable to reference the name of the image pull secret.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tiresias
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tiresias
  template:
    metadata:
      labels:
        app: tiresias
    spec:
      containers:
        - name: tiresias-dockerhub
          image: tiresias/web:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: {{ .Values.imageCredentials.name }}

service.yaml

The content of service.yaml file is as below.
apiVersion: v1
kind: Service
metadata:
  name: tiresias-svc
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: port-80
  selector:
    app: tiresias

ingress.yaml

The content of ingress.yaml file is as below. 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tiresius
  annotations:
    http.port: "443"
spec:
  backend:
    serviceName: tiresias-svc
    servicePort: 80


Executing our Helm chart

Using helm to deploy kubernetes objects simply involves executing the helm install command. Since we have a couple of variables whose values need to be provided as command line arguments, our command to execute our chart looks like below
helm install ./auto-deploy --set imageCredentials.username=youruser,imageCredentials.password=yourpassword
As you can see, it is a breeze to work with. The output looks something like the following
NAME:   veering-panther
LAST DEPLOYED: Wed Jul 25 12:32:22 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME       TYPE                            DATA  AGE
dockerhub  kubernetes.io/dockerconfigjson  1     0s

==> v1/Service
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)  AGE
tiresias-svc  ClusterIP  10.106.163.221  <none>       80/TCP   0s

==> v1/Deployment
NAME      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
tiresias  2        0        0           0          0s

==> v1beta1/Ingress
NAME      HOSTS  ADDRESS  PORTS  AGE
tiresius  *      80       0s

==> v1/Pod(related)
NAME                       READY  STATUS   RESTARTS  AGE
tiresias-646dbdc7b9-b9p7z  0/1    Pending  0         0s
tiresias-646dbdc7b9-h8w2k  0/1    Pending  0         0s


You can see that a couple of kubernetes pods are set up and are in the Pending state right now. If you want to see the current installation, you can run the following command

helm list
The output looks like the following

NAME             REVISION  UPDATED                   STATUS    CHART              NAMESPACE
veering-panther  1         Wed Jul 25 12:32:22 2018  DEPLOYED  auto-deploy-0.1.0  default

Note the name given by helm to your installation. If you want to delete all kubernetes objects, the simple helm command to do it is 

helm delete veering-panther
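And if you later change the chart or its values, the same release can be updated in place with helm upgrade (a sketch, using the release name from above):

helm upgrade veering-panther ./auto-deploy --set imageCredentials.username=youruser,imageCredentials.password=yourpassword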

Conclusion


In this post, I explained how to use Helm to deploy a simple containerized application. For a production application of any size, running a single helm command is much simpler and easier than running a series of kubectl commands. I hope you find this post useful. Please feel free to leave your comments / feedback.

Tuesday 17 July 2018

Using VSTS Release Management to create a Continuous Delivery pipeline for a Kubernetes service

Introduction


My last blog post was about deploying my dockerized web application to Azure Kubernetes Service. In that post, I explained how to manually deploy docker images to a Kubernetes cluster on AKS. Though deploying manually might be a fulfilling experience, it is hardly something that we would do for an actual production system. Rather, we would want to create a deployment pipeline that automates the entire process.

In this blog post, we will create a release using VSTS Release Management and a deployment pipeline for deploying to a Kubernetes cluster on AKS.


Creating a Release


The first step is to create a new release. From the "Build and release" menu on your VSTS project, click on the tab "Releases". Click the + button and select "Create release definition".



We are taken to the release definition page. From the template, select "Deploy to a Kubernetes Cluster".


For the names, we named our environment "dev" and gave our release the name of "DevRelease". Our deployment pipeline at this stage looks like a blank slate as shown below


Selecting Artifacts


The inputs to our deployment pipeline are two artifacts

1) The docker image of our web application. 

This is produced by our VSTS builds, as described in the post "Creating CI and Deployment builds for docker images". Our build pushes our images out to Docker Hub, so we will set up our pipeline to download them from there.

2) A YAML file defining our Kubernetes service. 

Our YAML file looks like the following and was already described in the post "Deploying your dockerized application to Azure Kubernetes Service". We saved it in a file named oneservice.yaml and committed it to our source git repository.

apiVersion: v1
kind: Service
metadata:
  name: aspnetvuejs
  labels:
    app: aspnetvuejs
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: port-80
  selector:
    app: aspnetvuejs
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aspnetvuejs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: aspnetvuejs
    spec:
      containers:
        - name: private-reg-container
          image: aspnetvuejs/web:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: dockerhubcred


To select the docker image artifact, click on the "+ Add" button in the Artifacts section. From the list, click option "Docker Hub".




From the list of options, select the service endpoint for your Docker Hub repository. If it's not already set up, click on the Manage button to set it up. Select the Namespace and Repository and give your source an alias. Click the Add button to add your artifact.

Your application is now shown in the list of Artifacts.

Now, we need to add the YAML file containing the definition of our service. To do this, click the "+ Add" button in the Artifacts section again. This time, select the option "Git". From the options displayed, select your team project and the Git repository. For the branch, we selected "master", and for the default version we selected the latest. Our final selection for Git looks as follows



Click on the Add button to add the artifact. Now both input artifacts to our deployment pipeline are set up.


Setting Deployment


The next step is to configure our deployment process. Since we selected the template "Deploy to a Kubernetes cluster", we already have a task set up in our deployment process, as shown by the "1 phase, 1 task" link. Click on the link to view the details of the deployment process. We have one task in place to run kubectl apply.



For the task arguments, we need to provide a Kubernetes Service connection. To do this, we clicked on the + New button



1) We give our connection the name "dev".

2) Set the Server URL. You can find it on the properties page of your Kubernetes service in the Azure portal.


3) To get your KubeConfig details, type the following in your Cloud Shell session

az aks get-credentials --resource-group kubedemo --name kubedemo
Your configuration is copied to the .kube/config file. Copy the contents of the file and paste it into the KubeConfig field of your connection.
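For example, to print the file in the same Cloud Shell session so its contents can be copied:

cat ~/.kube/config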

4) Select option "Accept Untrusted Certificate".
5) Click on Verify Connection to verify your connection. The deployment task is now ready to deploy to our Kubernetes cluster.


We will keep our kubectl command as apply and select the "Use configuration files" checkbox so that we can pass the YAML file from our Git repository.

For the Configuration file option, click on the ellipsis button. From the Linked Artifacts, select the Git artifact and select the YAML file we pushed out to our Git source code repository earlier. Click OK and our deployment process is ready for action.

The final and most important step is to save our release definition by clicking on the save button.


Running Deployment


At this stage, we have set up a release definition, but it hasn't been deployed to an environment yet. To start a deployment, go back to the release definitions view by clicking on the Release link. Our new release definition is shown in the list. To deploy, we click on the ellipsis next to the release and click "+ Release". A dialog appears, asking for the target environment and the versions of our artifacts. We selected the environment "dev" and the latest commit on our git repository. Click on the Create button to deploy to our Kubernetes Service.




Conclusion

This is the final post of my series about creating docker images and deploying them to a Kubernetes cluster on Azure Kubernetes Service. In this series, I explained how to run an aspnet/JavaScriptServices application on Docker, how to create CI and deployment builds for docker images, how to deploy a dockerized application to Azure Kubernetes Service, and finally how to create a continuous delivery pipeline with VSTS Release Management.

Monday 16 July 2018

Deploying your dockerized application to Azure Kubernetes Service

Introduction


My last two blog posts were about creating docker images for an ASPNet/Javascript web application. In the first post, I described considerations for producing an optimized image. The second post was about creating a CI/CD process for producing docker images.

Though producing nice lean docker images is good karma, they need to be deployed to a container orchestration system to run in a production environment.

The three most popular container orchestration systems are Kubernetes, Mesosphere and Docker Swarm. Of these three, Kubernetes is arguably the most popular, and we are going to use it to run our containers. Running your own kubernetes cluster in a production environment is no small undertaking. Luckily, all major cloud vendors provide kubernetes as a service, e.g. Google Cloud has Google Kubernetes Engine, Amazon AWS provides the Elastic Container Service for Kubernetes (Amazon EKS) and Microsoft Azure has the Azure Kubernetes Service (AKS). We are going to use Azure Kubernetes Service to run our containers.

Provisioning an Azure Kubernetes Service


1) Log on to the azure portal https://portal.azure.com. In the search text box, type in "Kubernetes". From the results listed, click on Kubernetes Services to see the list of kubernetes services.


2) Click on the Add button to create a new kubernetes service. We are just going to use the default options. Fill in the details of your kubernetes cluster and click the Review + create button.


Review the settings and click the Create button to create your AKS cluster. It takes a few minutes for a fully configured AKS cluster to be set up.

Once the cluster is set up, you can view the kubernetes dashboard by clicking on the "View Kubernetes dashboard" link and following the steps on the page displayed. If you are not familiar with the Kubernetes dashboard, it is a web based interface that displays all kubernetes resources.


Create a Kubernetes Secret


To enable our Kubernetes cluster to download images from a private DockerHub repository, we will set up a Kubernetes secret containing credentials for our DockerHub repo. To do this, click on the ">_" button to open a Cloud Shell session.

To create the secret, run the following command


kubectl create secret docker-registry dockerhubcred --docker-server=https://index.docker.io/v1/ --docker-username=yourusername --docker-password=yourpassword --docker-email=youremailaddress


We can check the existence of the new secret by executing 

kubectl get secret

Create a Kubernetes Service


In Kubernetes, containers are "housed" in Pods. However, pods are mortal and can be recreated at any time. Therefore, the endpoints of containers are exposed through an abstraction called a "Kubernetes Service".

Creating a kubernetes Service is a two-stage process

1) Create a Kubernetes Deployment: A deployment specifies a template, which includes details of the docker image, port, etc., replication details, i.e. how many pods will be deployed, as well as metadata that contains information about how pods are selected.

2) Create a Kubernetes Service: A service is deployed by specifying the selection criteria, the port to be exposed and the service type.

To perform the above-mentioned steps, we created the following yaml file. 



apiVersion: v1
kind: Service
metadata:
  name: aspnetvuejs
  labels:
    app: aspnetvuejs
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: port-80
  selector:
    app: aspnetvuejs
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aspnetvuejs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: aspnetvuejs
    spec:
      containers:
        - name: private-reg-container
          image: aspnetvuejs/web:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: dockerhubcred


To run the yaml file, I executed the following series of commands in my Cloud Shell session

1) Create the yaml file by executing 
vim oneservice.yaml
Copy the yaml content above and save the file.

2) Create the service and deployment by running
kubectl apply -f oneservice.yaml

The service and deployment are now created. To view the service, type in the following
kubectl get service

The public IP address of the service is displayed in the list of services shown in the result. The deployed web application can be viewed by browsing to the public IP address.
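Note that it can take a couple of minutes for the LoadBalancer to be provisioned, during which the external IP shows as pending. You can watch for it with, for example:

kubectl get service aspnetvuejs --watch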


Conclusion and next steps


In this post, we created a kubernetes service by declaring kubernetes objects in a yaml file and applying them using Cloud Shell. This is useful for explaining the steps and understanding what is involved. However, in reality we would want to do it in a well-defined deployment process. In my next post, I will explain how to deploy to a kubernetes service using VSTS release management.

Friday 13 July 2018

Creating CI and Deployment builds for docker images

In my last blog post, I wrote about the steps to create a Docker container for running aspnet/JavaScript Services. In that post, we created a Dockerfile to build a Docker image with a website created on VueJs and Asp.Net Core.

In real life, we would like our docker images to be pushed out to a container registry through a team build. We would also want to have a CI build in place so every commit is vetted. This blog post details setting up the CI and deployment builds for producing docker images.

Dockerfile

To start with, let's review the docker file we created in the last post. I have modified it slightly to parameterize the exposed port

# Stage 1 - Restoring & Compiling
FROM microsoft/dotnet:2.1-sdk-alpine3.7 as builder
WORKDIR /source
RUN apk add --update nodejs nodejs-npm
COPY *.csproj .
RUN dotnet restore
COPY package.json .
RUN npm install
COPY . .
RUN dotnet publish -c Release -o /app/

# Stage 2 - Creating Image for compiled app
FROM microsoft/dotnet:2.1.1-aspnetcore-runtime-alpine3.7 as baseimage
ARG port=8080
RUN apk add --update nodejs nodejs-npm
WORKDIR /app
COPY --from=builder /app .
ENV ASPNETCORE_URLS=http://+:${port}

EXPOSE ${port}
CMD ["dotnet", "vue2spa.dll", "--server.urls", "http://+:${port}"]

The docker file produces a docker image that can be pulled and deployed anywhere.
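Before wiring the Dockerfile into a team build, it can be built and smoke-tested locally; a quick sketch (the image tag is just an example):

# Build the image, passing the port build argument
docker build --build-arg port=8080 -t tiresias/web .

# Run it locally, mapping the exposed port
docker run --rm -p 8080:8080 tiresias/web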

Setting up the CI Build


We will use the new YAML build feature in VSTS to set up our CI build. Our YAML CI build file has a single step that attempts to build the docker image. The publishing of the website is performed within the docker build process, as described in the Dockerfile above.

Our very simple YAML file looks like the following

name: $(BuildDefinitionName)_$(Date:yyyyMMdd)$(Rev:.rr)
steps:
  - script: |
      docker build --build-arg port=8080 --rm --compress -t sampleaspjs/web .
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Build Docker Image

We named the yaml file .vsts-ci.yml, committed it and pushed it out to our git repository.

Now that the yaml file is in our code repo, let's go and set up the CI build.
  1. From the Build Definitions page, click on the "+ New" button to create a build definition.
  2. Set the source to VSTS Git, and select your Team Project, repository and the default branch of master.
  3. From the list of templates, select the option YAML in the Configuration as Code classification and click Apply.
  4. In the build definition, type in the build name, select the "Hosted Linux Preview" queue and select the YAML path. Make sure Continuous Integration is enabled. Save the build definition.

Now that we have the CI build, let's turn our attention to creating the deployment build.

Setting up the product build


The steps to create our product build are similar to those for the CI build, except that we will have a different YAML file. Our build process goes one step further: in addition to creating the docker image, it also pushes the docker image out to dockerhub.

Our YAML file looks like the following

name: $(BuildDefinitionName)_$(Date:yyyyMMdd)$(Rev:.rr)
steps:
  - script: |
      docker build --build-arg port=8080 --rm --compress -t tiresias/web .
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Build Docker Image

  - script: |
        docker tag tiresias/web tiresias/web:$(Build.BuildNumber)
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Tag docker image with build version

  - script: |
        docker login --username your-username --password your-password
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Docker Login

  - script: |
        docker push tiresias/web
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Docker Push

We named the yaml file .vsts-build.yml, committed it and pushed it out to our git repository.

The process to set up the deployment build is the same as above.
  1. From the Build Definitions page, click on the "+ New" button to create a build definition.
  2. Set the source to VSTS Git, and select your Team Project, repository and the default branch of master.
  3. From the list of templates, select the option YAML in the Configuration as Code classification and click Apply.
  4. In the build definition, type in the build name, select the "Hosted Linux Preview" queue and select the YAML path. Make sure Continuous Integration is enabled. Save the build definition.
With the product build in place, we have a mechanism for creating docker images and pushing them out to a container registry.

Next steps

In this post, I described the steps to create CI and build processes for verifying, creating and distributing docker images. In my next post, I will deploy the docker images to a Kubernetes cluster created on Azure Kubernetes Service.


Saturday 23 June 2018

Running aspnet/JavaScriptServices on Docker

The aspnet/JavaScriptServices project is part of ASP.NET Core and allows you to use modern web app frameworks such as Angular, React, etc. for creating the web user interface of your application, alongside using ASP.NET Core for your web services. If you are using VueJS, the aspnetcore-Vue-starter project does the same thing.

Since it uses both node.js and .net core, any server hosting the website needs to have both technologies installed on it. In addition to that, your application might be using node modules and nuget packages, which would also need to be deployed onto the server.

Creating Docker Image

When deploying your application in a docker container, typically you start with a base image. For instance, when deploying your .Net core application, your docker file would start with an instruction like 
FROM microsoft/aspnetcore

But if you are creating a node-based application, your docker file would start with an instruction like
FROM node

So, what image should be used for projects that use both node.js and .net core? The answer is to take one image and install the other runtime on it. We start by looking at the sizes of the images: microsoft/aspnetcore is 345 MB and node:latest is 674 MB.


Both are pretty huge, right? So, the first step is to find slimmer versions of the images that can be used for production. There is an alpine image of node, which is about 70 MB, that we could use as a baseline and install .net core on top of.


We could do that, but installing .net core takes far too many commands compared to installing node.js. So an alternative is to find an alpine image for dotnet core and install node on it. I looked, and yes, there is certainly an alpine image for dotnet core, which at about 162 MB in size is just right for us.



This would do the trick. So our dockerfile to install node on top of dotnet would look like this


FROM microsoft/dotnet:2.1.1-aspnetcore-runtime-alpine3.7 as baseimage
RUN apk add --update nodejs nodejs-npm

I actually added a multi-stage build process to our docker file, so that the application is compiled in it as well and an image is produced at the end of the compilation process. The entire Dockerfile for the project produced by the aspnetcore-Vue-starter project template looks as follows


# Stage 1 - Restoring & Compiling
FROM microsoft/dotnet:2.1-sdk-alpine3.7 as builder
WORKDIR /source
RUN apk add --update nodejs nodejs-npm
COPY *.csproj .
RUN dotnet restore
COPY package.json .
RUN npm install
COPY . .
RUN dotnet publish -c Release -o /app/

# Stage 2 - Creating Image for compiled app
FROM microsoft/dotnet:2.1.1-aspnetcore-runtime-alpine3.7 as baseimage
ARG port=8080
RUN apk add --update nodejs nodejs-npm
WORKDIR /app
COPY --from=builder /app .
ENV ASPNETCORE_URLS=http://+:$port

# Run the application. REPLACE the name of dll with the name of the dll produced by your application
EXPOSE $port
CMD ["dotnet", "vue2spa.dll"]

PS: In case you want to use a base image with .net core 2.1 and node already on it, you can use the base image https://hub.docker.com/r/hamidshahid/aspnetcore-plus-node/. Simply pull it by calling


docker pull hamidshahid/aspnetcore-plus-node
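With that image, the second stage of the Dockerfile above no longer needs the apk add step; a sketch of how it would start:

# Stage 2 using the pre-built base image (no apk add required)
FROM hamidshahid/aspnetcore-plus-node as baseimage
WORKDIR /app
COPY --from=builder /app .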

Thursday 29 March 2018

VSTS / TFS 2018 Viewing test run history for a given test case

UPDATE:
Divya Vaishyani from the Visual Studio Team Services team has rightly pointed out that it is possible to view test result history for a test case using the Test results pane, as documented here. However, those test results come from across Test Plans, which is quite confusing and different from the test history that MTM showed. The workaround in this post allows you to view history for the same test plan.


With the release of TFS 2018, running automated tests from Microsoft Test Manager (MTM) isn't supported any more (see the TFS 2018 release notes). This was announced in the VSTS and TFS road map about two years ago.

The test planning and management features in TFS / VSTS are pretty cool. However, there is one feature that I feel is rather missing, and that is the ability to view the history of a particular test case. In MTM, you could just click on the "View Results" link on a Test Case and view previous results. However, in VSTS, it is not possible to view a test case's run history. There is a feature request in the user voice for it. Do remember to add your vote for it!!

1) In TFS, click on "Test" from the top menu and select the test suite where your test case is. Select the test case that you are interested in. Then Click pass or fail button. This will generate a manual test run for the given test.

Trigger a Manual Run for your Test Case

2) Go to MTM --> Test --> Analyze Test Runs. Select option "Manual Runs" in the View option.


Find the Manual Test Run in MTM

3) Open the test run. Right-click on the test and click "View Results".

View Test Run in MTM

4) The list of results will show you the manual run as well as automated runs, which is what you are really looking for.

View Test Results

It's still a workaround and you still need MTM, but you can see the history of test cases this way. I hope you find this post useful.