Colin Wren

My journey with Docker

DevOps · 5 min read

This week at work I attended a ‘lunch and learn’ talk at BJSS (short 45-minute talks where lunch is provided for those attending) about Docker for testers, and it got me thinking about my own experience of using Docker as part of my toolset.

Photo by Robyn Carmel on Unsplash

Getting to know Docker

My first experience with Docker was when I was still a developer at NeovaHealth. We’d just moved to baking our apps’ operating environments into Vagrant images so we could deploy them to AWS as EC2 instances.

My managing director wasn’t sold on containerisation, as it ran too close to the host’s operating system for his liking, so we didn’t touch it and carried on building our Vagrant images, which led to a number of issues caused by developers not practising configuration as code.

It wasn’t until I joined BJSS and we had some time with a System Engineer that I had my mind opened to the benefits containerisation offers and the practices that make it really powerful.

Artefacts are key

One of the first things you come to understand as you work with Docker is that you can’t just SSH into the box, make changes and ship it to the customer. You need a solid release process that ensures things work first time round but is also flexible enough to let you fix minor bugs before deployment.

We found that the GitFlow branching model gave us the flexibility we needed to handle the small bug fixes that sometimes had to land before a deployment could happen.

Ultimately the Docker image is the end artefact: a compiled version of your app and its environment in one binary format. But there are a number of additional artefacts, such as inherited Docker images, libraries and application code, that go into making that final artefact.

At the start of my journey this just involved creating separate Docker images for the OS libraries and the Python libraries the app I was working on at the time needed to run, isolating the slower-moving aspects of the running environment from the faster-moving ones. As things progressed we started to version and create artefacts for our own code, and built a release process that built quality into the app.
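As a rough sketch of that kind of layering (the image name, paths and version pins below are placeholders, not the actual project’s):

```dockerfile
# The base image holds the slower-moving OS packages; the Python
# dependencies and application code sit in their own, faster-moving layers.
FROM registry.example.com/myapp-base:1.2.0

# Python dependencies change more often than OS packages, but less often
# than the application code, so they get their own cacheable layer
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

# The fastest-moving layer: the application code itself
COPY . /app
WORKDIR /app
CMD ["python", "main.py"]
```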

While versioning releases might be common practice for those working on libraries and code that other teams consume, it was something we had to learn as a team of three deploying code directly onto a customer’s servers.

A great way to instil this process is to use an artefact repository such as PyPI or Nexus as part of the build, so that the final Docker image is created from the artefact downloaded from the repository. This gives greater traceability for every aspect of the app running in the Docker image.
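For a Python app that might look something like this (the index URL, package name and version are placeholders):

```dockerfile
# Build the final image from a versioned artefact pulled from the
# artefact repository rather than from a local checkout
FROM registry.example.com/myapp-base:1.2.0
RUN pip install --no-cache-dir \
        --index-url https://nexus.example.com/repository/pypi-internal/simple \
        myapp==2.3.1
CMD ["myapp"]
```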

Another benefit of creating artefacts is that there’s a clear separation of concerns for the code: if something doesn’t seem right to implement in a given artefact (for instance, adding user-management behaviour to a library that handles low-level business logic) then it’s easier to argue the case against adding it.

Fan-In / Fan-Out & Environment Variables

One of the immediate benefits Docker gave us was the ability, in our Continuous Integration environment, to quickly spin up multiple instances to run different tests against the app, before collecting all the test results and continuing the pipeline.

This technique is called Fan-In / Fan-Out: a stage of the pipeline runs multiple configurations of an app or process in parallel before collapsing back to a single process.

To enable this technique you need to make your app and your Docker image configurable via environment variables, which can be provided via the --env flag of docker run or as part of the Docker Compose file.

An example of this would be an app that connects to a web service to download a configuration file.

Instead of hard-coding the web service address into the code or some configuration file, you can read it from the running container’s environment variables (in Python, via os.environ). That container can then be one of many running in a matrix of configuration options (such as dev, test and prod configurations).
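A minimal sketch of that in Python (the variable name, default and endpoint are illustrative, not the actual app’s):

```python
import os
import urllib.request

# The variable name and default are illustrative; the real app would document
# whichever variables it expects to be set on the container.
CONFIG_SERVICE_URL = os.environ.get(
    "CONFIG_SERVICE_URL", "http://localhost:8080/config"
)


def fetch_config() -> bytes:
    """Download the configuration file for whichever environment we're in."""
    with urllib.request.urlopen(CONFIG_SERVICE_URL) as response:
        return response.read()
```

The same image can then be pointed at a different configuration service per environment, e.g. docker run --env CONFIG_SERVICE_URL=https://config.test.example.com/config myapp:2.3.1.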

Similar to the clean code principle of no magic numbers, it’s a good idea to identify any part of your app that can change based on the environment and allow it to be set by an environment variable. This means you only have to create one Docker image that can be run in all environments.
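Putting fan-out / fan-in and environment variables together, a minimal sketch in bash might look like this (the image name, variable and values are illustrative):

```bash
# Fan out: run the same test image against several configurations in parallel
for config in dev test prod; do
  docker run --rm \
    --env APP_CONFIG="$config" \
    --name "tests-$config" \
    registry.example.com/myapp-tests:2.3.1 &
done

# Fan back in: wait blocks until every background docker run has exited
wait
```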

Docker Compose

Docker Compose is a godsend. It replaced some overly complex Ansible playbooks we’d previously used with a simple YAML file that described the containers needed to run the service we offered our customers.

Each service can be linked to another, set to only run once another service it depends upon is up and running, and you can combine multiple compose files into one docker-compose command to bring up multiple stacks together.
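A minimal compose file along those lines might look like this (the service names, images and variables are placeholders rather than our actual stack):

```yaml
version: "3.8"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example

  app:
    image: registry.example.com/myapp:2.3.1
    # app is only started once the db container is running
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://postgres:example@db:5432/postgres
    ports:
      - "8000:8000"
```

Multiple files can then be combined into a single command, e.g. docker-compose -f docker-compose.yml -f docker-compose.monitoring.yml up.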

Polyfilling missing Docker Compose functionality with Make

One of the limitations of Docker Compose is that its depends_on functionality only waits for the dependency’s container to start; it doesn’t offer a way to wait for, say, a web service to start responding to HTTP requests.

To handle this I found it worked well to break the dependencies into separate YAML files, use a Makefile to run docker-compose up for the upstream apps, then run a script that checks they’re actually up (such as a bash script that loops on curl [web service address]) before running docker-compose up for the downstream apps.
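A stripped-down sketch of that Makefile (the file names, target names and health-check URL are all placeholders):

```makefile
.PHONY: up upstream wait-for-upstream downstream

up: upstream wait-for-upstream downstream

upstream:
	docker-compose -f docker-compose.upstream.yml up -d

# Poll the upstream web service until it answers over HTTP
wait-for-upstream:
	until curl --silent --fail http://localhost:8080/health; do \
		echo "waiting for upstream..."; \
		sleep 2; \
	done

downstream:
	docker-compose -f docker-compose.downstream.yml up -d
```

Running make up then brings the whole stack up in the right order.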

An example project that uses this technique can be found in my Testing Kafka with Jest repository on GitHub.

Running things locally

The main benefit Docker Compose gives the team (not just developers!) is that the app(s) can be run locally within the context of the service that is intended to be delivered to the customer.

This means that a tester can test a feature branch locally before approving a pull request or a product owner can run the entire system on their machine while on the client site to gather feedback or generate feature ideas.

In my current engagement I run the 20 containers that form the microservice-based app I’m working on locally in order to develop my automated tests, as this is quicker than waiting 30 minutes for a new version to be deployed to the test environment.

Kubernetes & Helm — my next big adventure

At the time of writing, the client I’m currently working for is moving to Kubernetes and Helm, which means I’ve got some new tools and techniques to learn.

The premise of both tools looks promising, and I’m especially interested in Helm and its ability to roll back configuration applied to a service.
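Helm keeps a numbered revision history for each release, so a rollback is a one-liner (the release name and revision number below are illustrative):

```bash
# See the revision history for a release
helm history my-release

# Roll the release back to revision 3
helm rollback my-release 3
```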

Kubernetes is certainly easier to work with than Docker running on something like Mesos, as the service and pod structure makes it really easy to do things like get logs and port-forward from a pod.
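For example, once you know which namespace and pod you’re after (the names below are placeholders), getting logs and port forwarding are one-liners:

```bash
# List the pods in the app's namespace
kubectl get pods -n my-namespace

# Tail the logs from a pod
kubectl logs -f my-app-7c9d8f6b5-x2x4q -n my-namespace

# Forward local port 8000 to the service's port 8000
kubectl port-forward svc/my-app 8000:8000 -n my-namespace
```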

The only thing blocking my path at the moment is setting up our existing 20-microservice stack in minikube using Helm, as there are a lot of values in the client’s charts that don’t resolve locally (not a problem with Helm itself, but with the client’s setup).

I’m hoping to master this soon so I can get things running, and it’ll give me another thing to blog about!