Colin Wren

Configuration testing your Docker containers with TestInfra

DevOps, Testing, Automation, Software Development


One of the principles of a continuous delivery pipeline is to use the same compiled artefact that is going to be deployed throughout the pipeline.

The reason behind this is that if the artefact is tested and then rebuilt or changed, the new artefact will be different to the one that’s been proven to work and thus could fail or behave in unverified ways.

An example of this would be a NodeJS app that’s tested on one CD worker and then deployed by a separate CD worker, using the following steps:

  • CD worker one runs npm install to install libraries, let’s say usefulLib@2.0.0 is installed in the process
  • CD worker one runs npm run test and the tests all pass — Yay the app is working!
  • CD worker one passes its stage and CD worker two picks up the next stage
  • CD worker two doesn’t have the node_modules CD worker one had installed so it runs npm install again but now pulls usefulLib@2.0.1 as an update was pushed and the dependency wasn’t pinned

The fun happens when it turns out usefulLib@2.0.1 introduces a new bug and this causes a failure in the app when it runs in production.

Lots of time is lost trying to trace what went wrong: the test stage results are all green, but on re-running the pipeline they start to fail. The tests’ reliability is then questioned, and the team loses faith in the value they bring.

These types of failures become particularly hard to track down when it’s not a direct dependency that isn’t pinned.

In my current project we use a Nexus instance to mirror NPM, and there have been a number of times when it seemed like one of the dependencies Jest uses was having a patch release pushed every hour. As those dependencies weren’t pinned to explicit versions, npm would try to bring in the new version of the library, only for it to not exist because our Nexus mirror hadn’t synced that version yet.

Why configuration test?

One of the most common means of creating an artefact is to create a Docker container.

Docker containers not only allow you to have your code as an artefact but they allow you to have the entire environment packaged into it, greatly increasing the reliability of the testing carried out against the application.

A standard method of deploying an application with Docker is to find a base image that can handle the language being used, copy the files over into the container and then use an entry point that runs the application.
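For this project that pattern looks roughly like the Dockerfile below. This is a sketch: the bot.py entry point name is a placeholder, though the python:3.7 base image and the /src/requirements.txt path match the tests later in this post.

```dockerfile
# Base image that provides the language runtime
FROM python:3.7

# Copy the application and its dependency manifest into the image
COPY . /src
WORKDIR /src

# Install the pinned dependencies at build time
RUN pip install -r /src/requirements.txt

# Entry point that runs the application (bot.py is a placeholder name)
CMD ["python", "bot.py"]
```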

While the majority of the time it’s easy enough to catch issues with the container during the build phase by looking at the logs, it’s always better to have checks in place that can halt the pipeline if the container isn’t created correctly.

There are a few tools out there to help with this. They provide additional functionality to test frameworks to verify:

  • Packages are installed and the correct version is installed
  • Files exist, contain the correct values and have the correct permissions
  • Services exist, are in the correct state (running, stopped) and can only be run by the correct user groups
  • Ports are opened and applications are listening on them

These frameworks aren’t limited to just Docker and will generally support testing the host machine, virtual machines and Docker images.

In my previous team we used ServerSpec, an RSpec-based configuration testing tool which helped us verify the different levels (OS, application and client configuration) of our Dockerised app were set up correctly.

However as the project I’m currently working on in my spare time is using Python I decided to use TestInfra, which adds configuration testing functionality to PyTest.

Using TestInfra with Docker

TestInfra supports a number of host types and platforms including Docker but the only documentation of this is an example test case in the Examples section of the project’s docs.

In order to test Docker containers with TestInfra, a PyTest fixture needs to be set up. This fixture will run a Docker container and set up the protocol for interacting with that running instance.

The example in the TestInfra documentation is pretty basic but you can use it with Python's unittest library in order to access better assertions and test lifecycle functionality.

Once you’ve got access to the Docker container in your test you can then use a number of TestInfra’s modules to check files, packages, sockets and more in your tests.
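For instance, a check along these lines exercises several of those modules at once. This is a sketch: the nginx package, config path, service and port are illustrative placeholders, not part of this project.

```python
def check_web_server(host):
    """Sketch of TestInfra-style checks against a connected host object.

    'nginx', the config path and port 80 are placeholders for whatever
    your container is expected to provide.
    """
    # package is installed
    pkg = host.package('nginx')
    assert pkg.is_installed

    # config file exists and is owned by the expected user
    cfg = host.file('/etc/nginx/nginx.conf')
    assert cfg.exists
    assert cfg.user == 'root'

    # service is in the correct state
    svc = host.service('nginx')
    assert svc.is_running

    # something is listening on the expected port
    assert host.socket('tcp://0.0.0.0:80').is_listening
```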

An example test suite — Checking Python runtime

In my current hobby project I am building a Discord bot that requires Python 3.7 to run, as well as two libraries: discord.py and pokedex.py.

In order to access the PyTest fixture used for setting up TestInfra from within a unittest.TestCase subclass you need to create a conftest.py file.

import pytest
import subprocess
import testinfra

DOCKER_IMAGE_NAME = 'gimpneek/snapdex'

# scope='session' uses the same container for all the tests;
# scope='function' uses a new container per test function.
@pytest.fixture(scope='session')
def host(request):
    # run a container
    docker_id = subprocess.check_output(
        [
            'docker',
            'run',
            '-d',
            DOCKER_IMAGE_NAME
        ]
    ).decode().strip()
    # return a testinfra connection to the container
    host = testinfra.get_host("docker://" + docker_id)
    # expose the host on the test class for unittest-style tests
    request.cls.host = host
    yield host
    # at the end of the test suite, destroy the container
    subprocess.check_call(['docker', 'rm', '-f', docker_id])
You can replace DOCKER_IMAGE_NAME with the image you want to test

You then access the fixture by using the pytest.mark.usefixtures decorator on your TestCase subclass, which will then add host as a class variable (accessible via self.host).

from unittest import TestCase
import pytest


@pytest.mark.usefixtures('host')
class TestRequirements(TestCase):
    """
    Check the requirements for running the bot are set up in the Docker image
    correctly
    """

    def setUp(self):
        super(TestRequirements, self).setUp()
        self.requirements = self.host.file('/src/requirements.txt')
        self.requirements_list = self.requirements.content_string
        self.pip_packages = self.host.pip_package.get_packages()

    def test_requirements_exists(self):
        """
        Check that the requirements.txt file exists
        """
        self.assertTrue(self.requirements.exists)

    def test_discord_in_reqs(self):
        """
        Check that the discord.py library exists in the requirements file
        """
        self.assertIn('discord.py[voice]', self.requirements_list)

    def test_pokedex_py_in_reqs(self):
        """
        Check that the pokedex.py library exists in the requirements file
        """
        self.assertIn('pokedex.py', self.requirements_list)

    def test_discord_installed(self):
        """
        Check that the discord.py library is installed in the python env

        Note: Check for 1.0.0 as this is what the rewrite version of
        discord.py reports itself as
        """
        self.assertIn(
            '1.0.0',
            self.pip_packages.get('discord.py').get('version')
        )

    def test_pokedex_installed(self):
        """
        Check that the pokedex.py library is installed in the python env
        """
        self.assertEqual(
            '1.1.2',
            self.pip_packages.get('pokedex.py').get('version')
        )

    def test_python_3_7(self):
        """
        Check that Python 3.7 is used when running the python command
        """
        python_version = self.host.run('python --version').stdout
        self.assertIn('3.7', python_version)
Example tests making use of the file, pip_package and command modules

In the example test cases above I’m checking that:

  • The requirements.txt file was copied into the image correctly
  • The requirements.txt file has both libraries needed to run the bot in it
  • The pip instance used in the container is returning the correct versions of both libraries to verify they were installed correctly
  • The correct version of Python is run as the version I am using requires Python 3.7 and my Docker entry point uses the python command

A side note on the Python version check: be careful when using the package module TestInfra provides, as it will use the underlying OS’s package manager to report the version installed.

I’m using the python:3.7 Docker image as my base, which uses dpkg for package management, and the result I got from running host.package('python').version was 3.5, although the python command was a symlink to Python 3.7.
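Because of that mismatch, it’s safer to ask the interpreter itself for its version rather than the package manager. A minimal sketch, assuming host is a TestInfra connection:

```python
def python_runtime_version(host):
    """Return the version reported by the python binary on PATH.

    host.package('python').version reports what the OS package manager
    installed, which can differ from the binary the entry point runs.
    """
    result = host.run('python --version')
    # Python 2 prints its version to stderr, Python 3 to stdout
    output = result.stdout or result.stderr
    return output.replace('Python', '').strip()
```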

Integrating configuration testing into your CD pipeline

Configuration testing checks that the Docker container (or environment the application will be deployed to if not using containerisation) is configured correctly and as such should be used as exit criteria for that phase in your CD pipeline.

I’m using Travis to build and deploy my bot’s Docker container to ECS. In order to make things easier for myself I’ve created a Makefile that has four stages:

  • Build the Docker image
  • Test the built Docker image
  • Publish the Docker image to DockerHub
  • Update my ECS task-definition with the new Docker image version and deploy

I then use two stages that combine the build and test (to be run as part of my CI) and the publish and deploy (to be run on a version release).
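A sketch of what that Makefile could look like. The gimpneek/snapdex image name and the make targets referenced by the Travis config are from this post; the test file and deploy script names are illustrative placeholders.

```makefile
IMAGE = gimpneek/snapdex

docker:
	docker build -t $(IMAGE) .

ci_test:
	pytest test_docker.py

publish:
	docker push $(IMAGE)

deploy: publish
	# update the ECS task definition with the new image version (placeholder script)
	./deploy_ecs.sh $(IMAGE)

# combined stages: build + test for CI, publish + deploy for a release
build_and_test: ci_test docker
release: publish deploy
```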

The end result is a really clean .travis.yml file that is easy to understand.

dist: xenial
services:
  - docker
language: python
python:
  - 3.7
install:
  - make install_bot_deps
script:
  - make ci_test
  - make docker
before_deploy:
  - make install_aws
  - make install_ecs_deps
deploy:
  provider: script
  script: make deploy
  on:
    tags: true
Really clean Travis CI configuration


If you’re building Docker images for your deployments and you’re not running checks against the structure of the compiled Docker image then TestInfra (or ServerSpec) can help save you hours of debugging configuration issues.

Adding the checks into your CI/CD pipeline isn’t hard, especially when you use something like a Makefile to abstract the building and testing phases into one stage.