Colin Wren

Continuous Delivery with Travis, DockerHub, ECS and AWS CLI

DevOps, Automation, Software Development · 4 min read

Photo by Kai Pilger on Unsplash

As part of the work I was doing to validate my ‘Snapdex’ idea with a Discord bot, I built a CD pipeline to ensure I could deploy new functionality quickly while ensuring quality metrics were being met.

To achieve this I used Travis CI (my go-to CI for my projects), Docker Hub (so others can download the image; it’s open source after all), Amazon’s Elastic Container Service (ECS) and a script that utilises the aws CLI tool for working with Amazon’s cloud offering.

I was working with another developer who is just learning the ropes, so I wanted to show them how important shipping your code is, as well as the importance of making the process as painless as possible (so you don’t dread it, which would stop you from pushing out new changes).

Release process

We’re practising git flow, so developing a feature involves the following steps (shown as git commands after the list):

  • Feature development is done on a feature branch with Travis CI providing test running and static code analysis
  • After code review, and once the tests and code analysis pass, the feature branch is merged into the develop branch
  • When develop has enough changes for a release a release branch is created
  • The release branch is merged into master
  • A git tag is created from the master branch with the version number
  • Release notes for the new version are added to the Github release page
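
In command form, cutting a release looks roughly like this (the version number is illustrative):

# Cut a release branch from develop
git checkout -b release/1.2.0 develop

# Merge the release into master and tag it with the version number
git checkout master
git merge --no-ff release/1.2.0
git tag -a 1.2.0 -m "Release 1.2.0"

# Pushing the tag is what triggers the deployment
git push origin master 1.2.0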

On creation of the tag, Travis CI will run a deployment script (sketched after the list) which does the following:

  • Logs into Docker Hub
  • Builds and tags the Docker image with the same version as the git tag
  • Pushes the Docker image to Docker Hub
  • Runs deployment script for ECS that creates a new task definition with the new Docker image and deploys the new version
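
A minimal sketch of what such a deploy script can look like is below; the image, cluster and service names are assumptions for illustration, not the exact values from the Snapdex project:

#!/bin/bash
set -e  # stop the deploy if any step fails

# The image, cluster and service names below are illustrative
VERSION=$TRAVIS_TAG
IMAGE="colinwren/snapdex:$VERSION"

# Log into Docker Hub using credentials stored as Travis environment variables
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin

# Build and tag the Docker image with the same version as the git tag
docker build -t "$IMAGE" .

# Push the Docker image to Docker Hub
docker push "$IMAGE"

# Create a new task definition with the new image and deploy it
./scripts/ecs-deploy -c snapdex-cluster -n snapdex-service -i "$IMAGE"

Using --password-stdin keeps the Docker Hub password out of the build log, which matters when the log is public.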

I think this process works really well as it provides feedback on every part of the process: any regressions are caught by the tests, and the release is only triggered by the git tag, which requires a bit more effort than merging into master (which could be done by accident).

The automatic pushing of the new Docker image to Docker Hub means that even if the deploy fails due to issues with the AWS configuration, other users can still access the new Docker image for their own deployments.

I had originally toyed with the idea of adding a webhook from Docker Hub’s automated build system to fire off the ECS deployment, but I felt that was a little too asynchronous, whereas doing it all in one bash script means I can catch and handle any errors before the deployment (for instance, I could use ServerSpec to test the Docker image).

The ECS deployment script I’m using can be found at https://github.com/silinternational/ecs-deploy, although I decided to update my copy manually, since if anyone changed the script on the remote repo to steal AWS credentials that would be pretty catastrophic for my bank balance.
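
If you want to do the same, one way (sketched below with a placeholder commit hash, not a real value) is to vendor a copy of the script into your repo, pinned to a commit you’ve reviewed:

# Vendor a reviewed copy of ecs-deploy into the repo; <commit-sha> is a
# placeholder for the commit you've audited
curl -o scripts/ecs-deploy \
  https://raw.githubusercontent.com/silinternational/ecs-deploy/<commit-sha>/ecs-deploy
chmod +x scripts/ecs-deploy
git add scripts/ecs-deploy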

Travis CI

I like Travis CI as I’ve found it easy to configure; it’s lightweight, it integrates well with GitHub and it’s free for open source projects.

Travis reads a .travis.yml file in the root directory of the project to pick up the project’s configuration and run the various jobs.

These jobs can be defined to be run using conditions such as the branch name or a matrix of environment parameters, which makes running multiple permutations of environments and scripts easy.
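
For example, a hypothetical configuration (not from the Snapdex project) that runs the build against two Python versions but only for pushes to develop and master could look like this:

language: python
python:       # a build matrix: the jobs run once per version
  - 3.6
  - 3.7
branches:
  only:       # a condition: only run builds for these branches
    - develop
    - master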

One of the best things Travis offers is the deploy stage: a definition of how to deploy your code to different targets, either via built-in recipes such as S3 or via a script. For my Snapdex project I used a bash script to do this.

The gist below shows how easily a CD pipeline can be achieved with Travis; the different steps are:

  • Addons — Allows any supported third-party services to be set up; for instance, SonarCloud can be used via the sonarcloud addon
  • Services — As we’re using Docker to build our images, we need the Docker service
  • Language — Where you define the main language your code will be using; you can then define the versions of that language to run your code against (for instance, under python you could list different Python versions)
  • Install — Where you install any dependencies needed by your project; we run pip install here, for instance
  • Script — Where the main magic happens; the commands you add under script are executed in order on every push
  • Deploy — How to deploy the code or artefact created by the script stage; this deploy runs a bash script in the scripts directory and only does so when the build is triggered by a git tag being pushed
addons:
  sonarcloud:
    organization: $SONARQUBE_ORG_NAME
    token:
      secure: $SONARQUBE_TOKEN
services:
  - docker
language: python
python:
  - 3.6
install:
  - pip install -r requirements.txt
script:
  - nosetests
  - sonar-scanner
deploy:
  provider: script
  script: bash scripts/deploy.sh
  on:
    tags: true
Travis CI configuration for CD

ECS and AWS CLI

As mentioned above, the deployment script updates the Docker image in the ECS task definition, but unless you’ve worked with ECS before that doesn’t mean much.

A task definition is somewhat similar to how a service is defined in a Docker Compose file: you define the configuration for the service, such as the entry point, the command to run and the networking, but it’s a bit more advanced in the sense that you also define the CPU and memory requirements.
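
As an illustration, a minimal task definition might look something like the JSON below; the family, names and image are assumptions, and a real definition has many more optional fields:

{
  "family": "snapdex",
  "containerDefinitions": [
    {
      "name": "snapdex",
      "image": "colinwren/snapdex:1.2.0",
      "cpu": 128,
      "memory": 128,
      "essential": true,
      "command": ["python", "bot.py"]
    }
  ]
}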

Task definitions can be created and edited in the AWS console using the wizards and forms, or they can be edited via their JSON representation. The deployment script I use downloads the JSON, updates it and uploads it as a new version.
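
In aws CLI terms, that download-update-upload cycle looks roughly like this (the family name and image are assumptions, and jq is used here for the JSON editing):

# Download the current task definition as JSON
aws ecs describe-task-definition --task-definition snapdex \
  --query 'taskDefinition' > task-def.json

# Swap in the new image and strip the read-only fields that
# register-task-definition won't accept
jq --arg img "colinwren/snapdex:1.2.0" \
  '.containerDefinitions[0].image = $img
   | del(.taskDefinitionArn, .revision, .status, .requiresAttributes, .compatibilities)' \
  task-def.json > new-task-def.json

# Upload the edited JSON as a new revision of the task definition
aws ecs register-task-definition --cli-input-json file://new-task-def.json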

You can read more about task definitions in the AWS docs.

Once you’ve got a task definition you then use it in a service, which is essentially the running instance of your application. You can configure autoscaling at the service level (although I’ve yet to do this).

The service is then placed into a cluster, which provides the resources the service runs on; the desired number of instances is set on the service and ECS makes sure that many are available.

For my Snapdex bot I only need the one running instance, so as soon as the task definition is updated, ECS will bring up a new version of the service and, once it’s running, stop and remove the older version.
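
The final step, pointing the service at the new task definition revision, can be sketched with the aws CLI like so (the cluster and service names are again assumptions):

# Point the service at the latest revision of the task definition
aws ecs update-service --cluster snapdex-cluster \
  --service snapdex-service --task-definition snapdex

# Block until the new task is running and the old one has been removed
aws ecs wait services-stable --cluster snapdex-cluster \
  --services snapdex-service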

If you’re new to ECS and would like to learn how to set up your own cluster I suggest you follow the instructions on AWS’s How to Deploy Docker Containers page as I found it really helpful.

Summary

I think the CD pipeline works really well: the release process gives me full control over what gets released, and the scripts used to deploy the new version give me feedback if anything fails.

My next step is to have my budding new developer release one of their features using it, and hopefully see the panic leave their face when they realise how painless the deployment is.