This repo includes the source code and the GitHub Actions workflow file used to build and push the Docker Hub image for the web application. The Kubernetes config files for this source code are available at the link below:
Config File repo »
In this project, a simple profile page backed by a MongoDB database is used to explore the principles of GitOps with Argo CD. The web app is dockerized, deployed to a Minikube Kubernetes cluster using Argo CD, and its release process is managed with Argo Rollouts. A canary release strategy is implemented using a new Docker image of the web app, and its results are shown in the following sections. Furthermore, the Kubernetes config files are stored in a separate GitHub repo to reduce the complexity of the CD pipeline, which is good practice according to the Argo CD docs.
So, this project consists of three main tasks: setup and configuration, creating the GitOps pipeline, and finally implementing a canary release.
The points below describe the key installations needed to run this project:

- The installation of Kubernetes (`kubectl`) and Docker can be followed from their official websites, linked in the section above.
- For installing the Minikube cluster on my local machine, I followed the steps from the official documentation. In this project the cluster runs as a container, so the driver I am using to start the cluster is Docker.
- Next, Argo CD is installed in the Minikube cluster following the official documentation.
- Since one of the aims of this project is to include a canary release strategy, Argo Rollouts is also installed in the cluster following the official documentation. I have also installed the kubectl plugin, so that rollouts can be visualized and managed from the command line.

The next section describes the steps involved in creating this project.
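For reference, the installation steps above can be sketched as the terminal commands below. These follow the official Argo CD and Argo Rollouts install instructions; the manifest URLs should be verified against the current docs before running.

```shell
# Start the Minikube cluster as a container, using the Docker driver
minikube start --driver=docker

# Install Argo CD into its own namespace
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Install Argo Rollouts into its own namespace
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml

# Install the Argo Rollouts kubectl plugin (Linux amd64; see the docs for other platforms)
curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x kubectl-argo-rollouts-linux-amd64
sudo mv kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
```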
- The initial step involves pushing a Docker container image of the web application to Docker Hub. For this, the Docker files `Dockerfile` and `docker-compose.yaml` are created. Further, in order to streamline the process, a GitHub Actions workflow file, `main.yaml`, is created to push a new image to Docker Hub every time there is a change to this source code repo.
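The workflow in `main.yaml` is not reproduced here, but a minimal sketch of such a GitHub Actions workflow might look like the following. The secret names `DOCKERHUB_USERNAME` and `DOCKERHUB_TOKEN` are assumptions for illustration, not necessarily the ones used in this repo:

```yaml
name: Build and push webapp image
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Log in to Docker Hub (assumed secret names)
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # Build the image from the repo's Dockerfile and push it
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: doomnova/webapp-argocd:latest
```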
- Next, in the Minikube cluster created earlier, a new namespace `myapp` is created for the web application, and the config files run inside it. The Kubernetes manifest files in the Config repo use the image `doomnova/webapp-argocd:latest`, which was previously pushed to Docker Hub. This image will be known as the initial web app container image in this project.
- Since a MongoDB database is used for the web app, the config files `mongo.yaml`, `mongo-secret.yaml` and `mongo-config.yaml` in the Config repo are used to define the database in the cluster. `webappdeployment.yaml` is used to deploy the user profile page app developed in this project; for the web application, two replicas (`replicas: 2`) are defined. Further, in order to access the app in the browser, a `NodePort` service is defined in `webappdeployment.yaml`. Note that in order to use the file for this task, uncomment the code for the Kubernetes resource type `Deployment` and comment out the resource type `Rollout` in the latest `webappdeployment.yaml` file in the Config repo.
- Finally, `application.yaml` is used to define the entire application in the Minikube cluster. In this file, we specify the Config file repo that Argo CD should watch in order to make changes to the application in the cluster.
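An `application.yaml` for this setup might look roughly like the sketch below. The `repoURL` and `path` are placeholders for illustration; the actual values point at the Config file repo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-argo-application
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/<config-repo>.git  # placeholder: the Config file repo
    targetRevision: HEAD
    path: dev                                             # placeholder: folder holding the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      selfHeal: true   # re-sync if the live state drifts from git
      prune: true      # delete resources removed from git
```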
Now, to deploy the web application using the initial image, the command below can be run in the terminal:
kubectl apply -f application.yaml
We can see the deployed web app user profile page below.
Further, as shown in the figure below, the Argo CD UI shows the deployed application with its different components.
In this section, we will look at the steps taken to change the current web application to include a canary release strategy using Argo Rollouts.
- Most of the files are similar to the previous task. The only difference is in the config file `webappdeployment.yaml`, in which a Kubernetes `Rollout` resource type is used to control the canary release. The rollout will be triggered by updating the web app image from `doomnova/webapp-argocd:latest` to `doomnova/webapp-argocd:v1`. In the new image, the profile photo has been changed. This image will be known as the canary image in this project.
- The rollout strategy is based on the one defined in the official canary release docs. The rollout here uses a canary update strategy which first sends 20% of the traffic to the canary, followed by a manual promotion, and finally gradual automated traffic increases for the rest of the upgrade. The canary strategy is given below:
replicas: 5
strategy:
  canary:
    steps:
    - setWeight: 20
    - pause: {}
    - setWeight: 40
    - pause: {duration: 10}
    - setWeight: 60
    - pause: {duration: 10}
    - setWeight: 80
    - pause: {duration: 10}
Similar to the first task without the rollout, we can run the following command in the terminal to deploy the web app:
kubectl apply -f application.yaml
The below figure shows the Argo Rollouts UI for the initial image when the application is deployed.
The next image shows the initial image rollout in the command-line interface. The rollout immediately scaled the replicas up to 100% since no upgrade had occurred. We can see the five replicas defined in the rollout strategy, all running the initial web app image.
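The command-line view of the rollout can be produced with the Argo Rollouts kubectl plugin. The `-n myapp` flag assumes the rollout lives in the `myapp` namespace created earlier:

```shell
# Watch the rollout's status, revisions and canary weight live in the terminal
kubectl argo rollouts get rollout canary-rollout -n myapp --watch

# Alternatively, launch the local Argo Rollouts UI in the browser
kubectl argo rollouts dashboard
```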
Now, the rollout can be triggered by using the below command to change the image of the webapp:
kubectl argo rollouts set image canary-rollout \
canary-rollout=doomnova/webapp-argocd:v1
If we look at the rollout UI below, we can see that the canary release has paused at a weight of 20%, as written in the strategy. The terminal output in the next image also confirms this, and we can see both web app images listed, as `stable` and `canary` respectively.
Now, in order to promote this canary release to the remaining replicas, we can run the command below:
kubectl argo rollouts promote canary-rollout
The below figure shows the traffic at 60% in the Argo Rollouts UI; we can see the canary release being deployed to the remaining replicas.
Finally, in the rollouts user interface below, we can see that the canary image has been deployed to all the other replicas. The terminal also shows that the new image is fully deployed.
The webpage below shows the updated canary release of the web app with a different user profile picture.
In order to cleanly remove all the resources created for this project from the Kubernetes cluster, the commands below can be used:
kubectl delete all --all -n myapp
Alternatively, to delete the entire namespace with all its resources:
kubectl delete namespace myapp
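If the Minikube cluster itself is no longer needed, it can be stopped and removed entirely:

```shell
# Stop the cluster container, then delete the cluster and its state
minikube stop
minikube delete
```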
Distributed under the MIT License. See `LICENSE.txt` for more information.