
Monday, 16 July 2018

Deploying your dockerized application to Azure Kubernetes Service

Introduction


My last two blog posts were about creating docker images for an ASPNet/Javascript web application. In the first post, I described considerations to produce an optimized image. The second post was about creating a CI/CD process for producing docker images.

Though producing nice lean docker images is good karma, they need to be deployed to a container orchestration system to run in a productionized environment.

The three most popular container orchestration systems are Kubernetes, Mesosphere and Docker Swarm. Of these three, Kubernetes is arguably the most popular and we are going to use it to run our container. Running your own kubernetes cluster in a production environment is a considerable undertaking. Luckily, all major cloud vendors provide kubernetes as a service e.g. Google Cloud has the Google Kubernetes Engine, Amazon AWS provides the Elastic Container Service for Kubernetes (Amazon EKS) and Microsoft Azure has the Azure Kubernetes Service (AKS). We are going to use Azure Kubernetes Service to run our containers.

Provisioning an Azure Kubernetes Service


1) Log on to the azure portal https://portal.azure.com. In the search text box, type in "Kubernetes". From the results listed, click on Kubernetes Services to see the list of kubernetes services.


2) Click on the Add button to create a new kubernetes service. We are just going to use the default options. Fill in the details of your kubernetes cluster and click on the Review + create button. 


Review the settings and click the Create button to create your AKS. It takes a few minutes for a fully configured AKS cluster to be set up.

Once the cluster is set up, you can view the kubernetes dashboard by clicking on the "View Kubernetes dashboard" link and following the steps in the page displayed. If you are not familiar with the Kubernetes dashboard, it is a web-based interface that displays all kubernetes resources.
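If you prefer the command line to the portal, the same cluster can be provisioned with the Azure CLI. This is just a sketch; the resource group name, cluster name, location and node count below are illustrative, not part of the original walkthrough.

```shell
# Create a resource group to hold the cluster (names are illustrative)
az group create --name aks-demo-rg --location westeurope

# Create a two-node cluster with default options, mirroring the portal defaults
az aks create --resource-group aks-demo-rg --name aks-demo \
  --node-count 2 --generate-ssh-keys

# Merge the cluster credentials into your local kubeconfig so kubectl works
az aks get-credentials --resource-group aks-demo-rg --name aks-demo
```

Once `az aks get-credentials` has run, the kubectl commands in the rest of this post work from your own terminal as well as from Cloud Shell.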


Create a Kubernetes Secret


To enable our Kubernetes cluster to download images from a private DockerHub repository, we will set up a Kubernetes secret containing credentials for our DockerHub repo. To do this, click on the ">_" button to open a Cloud Shell session.

To create the secret, run the following command


kubectl create secret docker-registry dockerhubcred --docker-server=https://index.docker.io/v1/ --docker-username=yourusername --docker-password=yourpassword --docker-email=youremailaddress


We can check the existence of the new secret by executing 

kubectl get secret

Create a Kubernetes Service


In Kubernetes, containers are "housed" in pods. However, pods are mortal and can be recreated at any time. Therefore, the end points of containers are exposed through an abstraction called a "Kubernetes Service". 

Creating a Kubernetes Service is a two-stage process: 

1) Create a Kubernetes Deployment: A deployment specifies a template, which includes details of the docker image, port, etc., replication details i.e. how many pods would be deployed, as well as metadata that contains information about how pods are selected.

2) Create a Kubernetes Service: A service is deployed by specifying the selection criteria, the port to be exposed and the service type. 

To perform the above-mentioned steps, we created the following yaml file. 



apiVersion: v1
kind: Service
metadata:
  name: aspnetvuejs
  labels:
    app: aspnetvuejs
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: port-80
  selector:
    app: aspnetvuejs
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aspnetvuejs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: aspnetvuejs
    spec:
      containers:
        - name: private-reg-container
          image: aspnetvuejs/web:latest
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: dockerhubcred


To run the yaml file, I executed the following series of commands in my Cloud Shell session

1) Create the yaml file by executing 
vim oneservice.yaml
Copy the yaml content above and save the file.

2) Create the service and deployment by running
kubectl apply -f oneservice.yaml

The service and deployment are now created. To view your service, type in the following
kubectl get service

The public IP address of the service is displayed in the list of services shown in the result. The deployed web application can be viewed by browsing to the public IP address. 
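Before browsing to the site, it can be worth confirming that the rollout completed and the load balancer got its external IP. A few commands I find handy (the deployment and service names match the yaml above; replace <pod-name> with a name from the kubectl get pods output):

```shell
# Wait until both replicas have been rolled out
kubectl rollout status deployment/aspnetvuejs

# Watch the service until EXTERNAL-IP changes from <pending> to a real address
kubectl get service aspnetvuejs --watch

# If a pod is stuck in ImagePullBackOff, its events usually point at a
# misconfigured image pull secret
kubectl get pods
kubectl describe pod <pod-name>
```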


Conclusion and next steps


In this post, we created a kubernetes service by declaring kubernetes objects in a yaml file and running them using Cloud Shell. This is useful in explaining the steps and understanding what is involved. However, in reality we would want to do it in a well-defined deployment process. In my next post, I will explain how to deploy to a kubernetes service using VSTS release management.

Friday, 13 July 2018

Creating CI and Deployment builds for docker images

In my last blog post, I wrote about the steps to create a Docker container for running aspnet/Javascript Services. In the post, we created a Dockerfile to produce a Docker image with a website built on VueJs and Asp.Net Core.

In real life, we would like our docker images to be pushed out to a container registry, and we would like that to happen through a team build. We would also want a CI build in place so that every commit is vetted. This blog post details setting up the CI and deployment builds for producing docker images.

Dockerfile

To start with, let's review the docker file we created in the last post. I have modified it slightly to parameterize the exposed port

# Stage 1 - Restoring & Compiling
FROM microsoft/dotnet:2.1-sdk-alpine3.7 as builder
WORKDIR /source
RUN apk add --update nodejs nodejs-npm
COPY *.csproj .
RUN dotnet restore
COPY package.json .
RUN npm install
COPY . .
RUN dotnet publish -c Release -o /app/

# Stage 2 - Creating Image for compiled app
FROM microsoft/dotnet:2.1.1-aspnetcore-runtime-alpine3.7 as baseimage
ARG port=8080
RUN apk add --update nodejs nodejs-npm
WORKDIR /app
COPY --from=builder /app .
ENV ASPNETCORE_URLS=http://+:${port}

EXPOSE ${port}
# Note: exec-form CMD does not expand ${port}; ASPNETCORE_URLS above already sets the listening URL
CMD ["dotnet", "vue2spa.dll"]

The docker file creates a docker image that can be pulled and run in any environment.
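Before wiring this into a build, the parameterized port can be exercised locally. A quick sketch; the port values here are just examples:

```shell
# Build the image, overriding the default port of 8080 via the build argument
docker build --build-arg port=9090 -t aspnetvuejs/web .

# Run it, mapping host port 8080 onto the container's port 9090
docker run --rm -p 8080:9090 aspnetvuejs/web

# The site should then answer on http://localhost:8080
```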

Setting up the CI Build


We will use the new YAML build feature in VSTS to set up our CI/CD build. Our YAML CI build file has a single step that attempts to build the docker image. The publishing of the website is performed within the docker build process, as described in the Dockerfile above.

Our very simple YAML file looks like the following

name: $(BuildDefinitionName)_$(Date:yyyyMMdd)$(Rev:.rr)
steps:
  - script: |
      docker build --build-arg port=8080 --rm --compress -t sampleaspjs/web .
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Build Docker Image

We name the yaml file .vsts-ci.yml, commit it and push it out to our git repository.

Now that the yaml file is in our code repo, let's go and set up the CI build.
  1. From the Build Definitions page, click on the "+ New" button to create a build definition.
  2. Set the source to VSTS Git, then select your Team Project, repository and the default branch of master.
  3. From the list of templates, select the YAML option in the Configuration as Code classification and click Apply.
  4. In the build definition, type in the build name, select the "Hosted Linux Preview" queue and select the YAML path. Make sure Continuous Integration is enabled. Save the build definition.

Now that we have the CI build, let's turn our attention to creating the deployment build.

Setting up the product build


The steps to create our product build are similar to those of the CI build, except that we will have a different YAML file. Our build process goes one step further: in addition to creating the docker image, it also pushes the docker image out to DockerHub.

Our YAML file looks like the following

name: $(BuildDefinitionName)_$(Date:yyyyMMdd)$(Rev:.rr)
steps:
  - script: |
      docker build --build-arg port=8080 --rm --compress -t tiresias/web .
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Build Docker Image

  - script: |
      docker tag tiresias/web tiresias/web:$(Build.BuildNumber)
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Tag docker image with build version

  - script: |
      docker login --username your-username --password your-password
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Docker Login

  - script: |
      docker push tiresias/web
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Docker Push
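One caveat with the file above: the docker credentials are committed in plain text. A safer sketch is to reference a secret build variable (here a hypothetical variable named dockerPassword, marked as secret in the build definition) in the login step:

```yaml
  - script: |
      docker login --username your-username --password $(dockerPassword)
    workingDirectory: $(Build.Repository.LocalPath)/web
    displayName: Docker Login
```

Secret variables are not written to build logs, so the password never appears in the repository or the console output.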

We name the yaml file .vsts-build.yml, commit it and push it out to our git repository.

The process to set up the deployment build is the same as the four steps above, except that the YAML path points to .vsts-build.yml.

With the product build in place, we have a mechanism for creating docker images and pushing them out to a container registry.

Next steps

In this post, I described the steps to create CI and deployment build processes for verifying, creating and distributing docker images. In my next post, I will deploy the docker images to a Kubernetes cluster created on Azure Kubernetes Service.


Thursday, 29 March 2018

VSTS / TFS 2018 Viewing test run history for a given test case

UPDATE:
Divya Vaishyani from the Visual Studio Team Services team has rightly pointed out that it is possible to view test result history for a test case using the Test results pane, as documented here. However, the test results shown there come from across Test Plans, which is quite confusing and different from the test history that MTM showed. The workaround in this post allows you to view history within the same test plan.


With the release of TFS 2018, running automated tests from Microsoft Test Manager (MTM) isn't supported any more (see the TFS 2018 release notes). This was announced in the VSTS and TFS road map about two years ago. 

The test planning and management features in TFS / VSTS are pretty cool. However, there is one feature that I feel is missing, and that is the ability to view the history of a particular test case. In MTM, you could just click on the "View Results" link on a Test Case and view previous results. However, in VSTS, it is not possible to view a test case's run history. There is a feature request in the user voice for it. Do remember to add your vote for it!!

The good news is that there is a workaround - using MTM - for you to view the history of test runs for a particular test case.

1) In TFS, click on "Test" from the top menu and select the test suite where your test case is. Select the test case that you are interested in. Then click the pass or fail button. This will generate a manual test run for the given test.

Trigger a Manual Run for your Test Case

2) Go to MTM --> Test --> Analyze Test Run. Select option "Manual Runs" in the View option.


Find the Manual Test Run in MTM

3) Open the test run. Right click on test and click "View Results".

View Test Run in MTM

4) The list of results will show you the manual run as well as the automated runs, which is what you are really looking for.

View Test Results

It's still a workaround and you still need MTM, but you can see the history of test cases this way. I hope you find this post useful.







Saturday, 2 December 2017

TFS 2017 Build - Partially succeed a build

At times, there is a need to explicitly set a Team build's result to be "Partially Successful". 

In Xaml builds, the way to force a build to be set as partially successful is to leave the build's "CompilationStatus" property as succeeded and set its "TestStatus" to failed, as shown below

<mtbwa:SetBuildProperties DisplayName="Set TestStatus to Failed so we get a PartiallySucceeded build" PropertiesToSet="TestStatus" TestStatus="[Microsoft.TeamFoundation.Build.Client.BuildPhaseStatus.Failed]" />

Setting a TFS 2017 build to partially succeed is a bit more intuitive. Simply add a Powershell task with an inline script that sets the task's result to "SucceededWithIssues". Make sure it's the last task in your build, so that it doesn't affect the flow of task execution. The Powershell statement is shown below

Write-Host "##vso[task.complete result=SucceededWithIssues;]DONE"
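The logging command can also be issued conditionally, so the build is only flagged when something actually went wrong. A minimal sketch, assuming a hypothetical $warningCount computed earlier in the inline script:

```powershell
# Flag the build as partially succeeded only when warnings were found
# ($warningCount is a hypothetical value gathered earlier in the script)
if ($warningCount -gt 0) {
    Write-Host "##vso[task.complete result=SucceededWithIssues;]DONE"
}
```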

My build looks as follows


Thursday, 30 November 2017

TFS 2017 Build System - Maintain last "N" builds

In my last blog post, I described retention policies in the TFS 2017 build system and how different they are from the retention policies we get in the XAML build system. 

One of the limitations I found in the new style retention policy is that I couldn't retain a specific number of builds for each status. We needed to do this for some builds that are triggered very frequently (once every couple of minutes) to check if there is some work to be done: if a build finds work, it does it, otherwise it reschedules a build for itself after a couple of minutes. Another scenario where you might have a lot of builds is when a build is triggered by commits to a very busy repository.

So, in order for us to retain only "N" builds for each status, we created a Powershell module to clean up builds. In the module, we create a cmdlet that takes as parameters the name of the build definition, the number of builds to keep, the result filter and the tag filter. Our cmdlet looks like the following



<#
.SYNOPSIS
 Cleans up builds for the given build definition, keeping the latest N builds where N is passed as a parameter.
 If a result filter is provided, it only keeps the latest N builds with the given result.

.DESCRIPTION
 Calls DELETE https://{instance}/DefaultCollection/{project}/_apis/build/builds/{buildId}?api-version={version}
 Uses api-version 2.0 to query and delete builds.
#>
function Cleanup-Builds([string] $tfsCollection,
                    [string] $tfsProject,
                    [string] $buildDefinitionName,
                    [int] $numberOfBuildsToKeep = 10,
                    [string] $result="",
                    [string] $tagsFilter = "")
{
    if ($env:SYSTEM_DEBUG -eq "true") {
        $VerbosePreference = "Continue"
    }

    if ($result -eq ""){
        Write-Verbose "Deleting all but the latest $numberOfBuildsToKeep builds for definition $buildDefinitionName."
    }
    else{
        Write-Verbose "Deleting all but the latest $numberOfBuildsToKeep builds for definition $buildDefinitionName with result $result."
    }

    $buildDefinition = Find-BuildDefinition($buildDefinitionName)
    if ($buildDefinition -eq $null) {
        Write-Error "No build definition found $buildDefinitionName"
        return
    }

    $buildDefinitionId = $buildDefinition.id
    $query = [uri]::EscapeUriString("$tfsCollection$tfsProject/_apis/build/builds?api-version=2.0&definitions=$buildDefinitionId&queryOrder=2&resultFilter=$result&tagFilters=$tagsFilter&`$top=5000")

    $builds = Invoke-RestMethod -Method GET -UseDefaultCredentials -ContentType "application/json" -Uri $query
    $retainedBuild = 0
    $deletedBuildCount = 0
    for ($i = $builds.Count - 1; $i -gt -1; $i--) {
        $build = $builds.value[$i]
        $buildId = $build.id
        $buildNumber = $build.buildNumber
        
        try {
            # Do not delete the latest numberOfBuildsToKeep builds
            if ($retainedBuild -lt $numberOfBuildsToKeep) {
                $retainedBuild = $retainedBuild + 1
            }
            else {
                Write-Verbose "Deleting build $buildNumber"
                $query = [uri]::EscapeUriString("$tfsCollection$tfsProject/_apis/build/builds/$buildId`?api-version=2.0")
                Invoke-RestMethod -Method DELETE -UseDefaultCredentials -ContentType "application/json" -Uri $query
                $deletedBuildCount = $deletedBuildCount + 1
            }
        }
        catch {
            Write-Error ("StatusCode: " + $_.Exception.Response.StatusCode.value__ +
                         "`r`nStatusDescription: " + $_.Exception.Response.StatusDescription)
        }
    }
        
    Write-Output "Deleted $deletedBuildCount builds for build definition $buildDefinitionName"
}
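Called from an inline script, usage would look something like this. The collection URL, project and definition names are made up for illustration, and Find-BuildDefinition is assumed to be exported by the same module:

```powershell
# Keep only the 10 most recent failed builds of a hypothetical "MyApp-CI" definition
Cleanup-Builds -tfsCollection "http://tfs:8080/tfs/DefaultCollection/" `
               -tfsProject "MyProject" `
               -buildDefinitionName "MyApp-CI" `
               -numberOfBuildsToKeep 10 `
               -result "failed"
```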

We create a PowerShell module file for the above cmdlet. To set up the Powershell module, we modified the PSModulePath environment variable in the first step of our build to include the module path. Then, to set it all up, we added a PowerShell task calling the Cleanup-Builds cmdlet in an inline script as shown below




Our build definition looks like below





Friday, 10 November 2017

Retention Policies for TFS 2017 Build System

The TFS build system has had a major overhaul since TFS 2015. For people working on team builds since TFS 2010, there is a major learning curve. One of the things that people often find confusing is the retention policy in the new build system. In earlier versions of TFS, you could specify how many builds you wanted to retain for each status, as shown in the screenshot below


Retention Policies for Xaml Builds

The retention policy is quite obvious and you have a deterministic number of builds retained at each status. That's not quite the case in the new build system. A sample retention policy in the new system looks like the following


Retention Policies for TFS Builds


So what does it mean? 

Well, to say it simply, it means exactly what it says on the tin!! In the example above, the build would keep all builds from the last 4 days and keep a minimum of 20 builds. That is, if there are fewer than 20 builds present for the given build definition, it would keep older builds until there is a minimum of 20 builds. Let's ignore the options with the lock sign; we will come back to them later. Note that there is no maximum count. It means that you can't control how many builds you keep for your build definition. This is a major shift from the earlier retention policy, where the number of builds kept for a build definition was deterministic. 


When are builds actually cleaned up?

If you are using an on-premises version of TFS (I am using TFS 2017 Update 2), the builds are actually not cleaned up until 3:00 AM every day. For VSTS, it happens several times a day, but the time is not deterministic. That actually explains why there is only a "Minimum to Keep" option in the retention policy.

If you have a build definition that is triggered very frequently, you will need to find a way of actually deleting the builds. I will explain it in the next post.


What about Keep For 365 days, 100 good builds?

This is the option you see below your policy in the screenshot shown above. It is in fact a TFS Project Collection-wide policy and enforces the maximum retention. So, in the example above, you can't set "Days to Keep" to more than 365 or "Minimum to Keep" to more than 100. If you have the appropriate permissions, you can change it for the entire Team Project Collection.



TFS Project Collection Retention Policy


Multiple Policies

If you want, you can add multiple retention policies for your build definition. This is very useful if you have a build definition that builds different code branches (release branches, for instance). You can use the retention policies to keep a different number of builds from each branch. 


Multiple Policies


If you have multiple retention policies for the same branch, the effective retention is the most lenient of all the policies, i.e. whatever retains the most builds.

In the next blog post, I will show how we are keeping a lid on the number of builds for build definitions which are run very frequently, every couple of minutes in our case.