Easy Continuous Deployment on AWS
Introduction

Recently we started working on a new web-based project for one of our customers. The current architecture consists of two containers: one containing a REST API, the other containing the SPA web application that consumes the API. Not the most complex setup, but we prefer to KISS. An extra requirement for this customer is that code, deliverables & applications must be hosted within Amazon Web Services. We used the following services for this project:

  • AWS CodeCommit as a code repository
  • AWS CodeBuild to build the code & create deliverables (= Docker containers)
  • AWS Elastic Container Registry (ECR) to store the containers created by CodeBuild
  • An Amazon EC2 instance (virtual machine) to host the containers
  • Amazon RDS to host our database
We are still early in development, so we currently only have a staging environment. The staging environment is really more of a development/test environment and does not require a deluxe, highly available setup. Nevertheless, to make things easy for ourselves, we still wanted to automatically deploy the containers with the latest version of the code whenever we commit to the development branch. This makes it easy for the stakeholders to always see the latest version and play with it (~agile/feedback), guarantees builds are machine independent, and ensures developers do not waste time deploying code.
While AWS does have components to achieve continuous deployment with CodeBuild/CodePipeline and ECS (with blue/green deployments) or EKS (with deployment via Lambda functions), these solutions are often complex to set up and require a lot of (or at least more than desired) resources, which, of course, has an impact on cost. Especially in the deployment phase, it gets difficult to get everything up and running. We wanted something simpler & faster, and maybe our solution can help other people as well. To give you an idea: all the containers run on one single t2.micro EC2 instance, which is almost free.

It goes without saying that this is not a suitable setup for production, but it is quite handy for a staging environment.

The deployment process

The first step is easy: to build the containers, we use CodePipeline in combination with CodeBuild, and we skip the deployment step in the pipeline. The pipeline is only used to get a trigger from the repo to launch the CodeBuild project. The CodeBuild project builds the service or application, builds the container, pushes the container to ECR, and updates the tags. If your code is in CodeCommit and your containers are stored in ECR, this is literally only a few clicks of work.
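
To make this step concrete, here is a minimal buildspec.yml sketch for such a CodeBuild project (enable privileged mode on the project so Docker can run). ECR_REGISTRY and IMAGE_NAME are hypothetical placeholder variables; the commands themselves are the standard ECR login/build/push flow:

version: 0.2

phases:
  pre_build:
    commands:
      # log in to the private ECR registry
      # (ECR_REGISTRY is a placeholder, e.g. <account-id>.dkr.ecr.<region>.amazonaws.com)
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
  build:
    commands:
      # build the container; "latest" is the tag Watchtower will track later on
      - docker build -t $ECR_REGISTRY/$IMAGE_NAME:latest .
  post_build:
    commands:
      # push the freshly tagged image to ECR
      - docker push $ECR_REGISTRY/$IMAGE_NAME:latest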

As mentioned before, (in our opinion) the complexity lies in the second step: the automatic deployment of the container(s) to a container host using standard AWS services/resources. We spent some time looking for a solution to do this "push", as this is the most logical thing to do. But eventually, we came up with a solution that works the opposite way: a "pull" mechanism.

Watchtower

In our quest for easy automatic deployment of the latest version of Docker containers, we stumbled upon a tool called Watchtower: https://containrrr.dev/watchtower/

In essence, Watchtower runs in Docker itself. It is able to poll the repository (or even multiple repositories) of the other containers installed on the same Docker host, at a configurable interval. When a new version of an image is detected, Watchtower will stop the container, pull the latest image & restart the container. In other words, if we tag the container built from our dev branch, and point Watchtower at this tag, the latest version will be deployed automatically. The downtime is equal to the stop and start of a container, which is a matter of one or two seconds (depending on the container).

A basic docker-compose setup for Watchtower is as simple as this:
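
The snippet below is a minimal sketch along the lines of Watchtower's own documentation; the only hard requirement is mounting the Docker socket, and the poll interval is an optional, freely chosen value:

version: "3"
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      # give Watchtower access to the Docker daemon so it can inspect & restart containers
      - /var/run/docker.sock:/var/run/docker.sock
    # check for new images every 300 seconds; pick whatever interval suits you
    command: --interval 300
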
Add this to your existing docker-compose file (or start it with the Docker CLI or whatever tool you prefer), and all your containers will be automatically monitored and updated to their latest version. Of course, if you use a specific version tag for a container, the hash of the Docker image will never change, and your container will never be updated. This only makes sense with a tag like "latest" or something similar that is attached to newer builds as they become available.

In our situation, there was an extra concern: we needed to take into account that our containers reside in a private AWS Elastic Container Registry. And one does not simply pull images from an Elastic Container Registry. There is security to be taken into account.

Watchtower & the Amazon ECR Credential Helper

Luckily, Watchtower mentions this in their own excellent documentation: https://containrrr.dev/watchtower/private-registri...

So in order to get it working, I just followed the steps defined there. TL;DR: you need to build and run a container with amazon-ecr-credential-helper once and have its output written into a mapped folder. That folder, in its turn, must be mapped into the Watchtower container.
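
As an illustration only (the Watchtower documentation contains the exact Dockerfile it recommends; the host path and the go install shorthand below are our own hypothetical equivalent), the one-time step boils down to compiling docker-credential-ecr-login into a folder on the host:

# Hypothetical shorthand: compile the credential helper inside a throwaway
# golang container and write the static binary to a folder on the host
# (/home/ec2-user/.docker/bin is an arbitrary choice).
docker run --rm -v /home/ec2-user/.docker/bin:/out golang:1.21 \
  sh -c 'CGO_ENABLED=0 GOBIN=/out go install github.com/awslabs/amazon-ecr-credential-helper/ecr-login/cli/docker-credential-ecr-login@latest'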

What the documentation does not mention is that the amazon-ecr-credential-helper needs to be installed on the virtual machine as well; Watchtower needs this executable to be present on the host. Luckily, it is easy to install on an (Amazon Linux) EC2 t2.micro instance:
sudo yum install amazon-ecr-credential-helper

Final Configuration

In the end, the configuration is obviously a little more complex, but still reasonably simple:
Docker configuration (~/.docker/config.json):
The file specifies that ecr-login must be used for authentication.
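
A minimal sketch of what that file could contain, using credsStore to make ecr-login the default credential helper for every registry:

{
  "credsStore": "ecr-login"
}

Alternatively, a credHelpers entry can map only your ECR registry URL to ecr-login, leaving other registries untouched.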

AWS credentials are set in ~/.aws/credentials. This configuration file must at least contain the aws_access_key_id & aws_secret_access_key values.
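
For reference, a minimal sketch of that file; the values are placeholders for the keys of an IAM user that is allowed to pull from your registry:

[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>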

Both configuration files & the output folder of the amazon-ecr-credential-helper must be mapped into the Watchtower container. docker-compose.yml now looks something like this:
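
The sketch below shows the general shape under our assumptions; the host paths, region, service name and app image are placeholders for your own values:

version: "3"
services:
  myapp:
    # hypothetical application container, tracked via its "latest" tag
    image: <account-id>.dkr.ecr.<region>.amazonaws.com/myapp:latest
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # Docker config that routes authentication to ecr-login
      - /home/ec2-user/.docker/config.json:/config.json
      # AWS credentials for the credential helper (HOME=/ below makes it look in /.aws)
      - /home/ec2-user/.aws:/.aws
      # folder containing the docker-credential-ecr-login binary
      - /home/ec2-user/.docker/bin:/go/bin
    environment:
      - HOME=/
      # the helper binary must be findable on Watchtower's PATH
      - PATH=/usr/bin:/bin:/go/bin
      - AWS_REGION=<region>
    command: --interval 300
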
Conclusion

If you're looking for a cheap and simple continuous deployment solution on AWS, we think the above method is as simple as it can get. It provides:

  • A cheap solution, as it works on any EC2 instance you like. You can take a more expensive one if you need more power… cheap/expensive is relative of course
  • A simple solution that is very easy to set up and maintain
  • A solution with only a few seconds of downtime when deploying new code, which on a staging or test environment will almost never be noticed