Running Tests in restricted environments using Jest, Restify.js and Docker
— Testing, JavaScript, DevOps, Software Development — 4 min read
At work recently I found myself with an interesting problem.
I’d been using a local Docker stack to write end-to-end tests for a bunch of services the development team had been building, a stack which contained multiple Kafka queues and Riak buckets.
This worked fine locally, but when it came to moving everything to the production-like environment my tests were unable to connect to Kafka or Riak, as these were not exposed outside the private network set up within the deployed stack.
In order to get my tests running I needed to deploy them into the same infrastructure, where they’d have access to everything, but I still needed a means to kick off test runs and view the results from outside that infrastructure.
Containerising the tests
Luckily, creating a Docker image is quick and easy thanks to the Dockerfile syntax being simple yet very powerful.
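For reference, a minimal Dockerfile for a Node-based test suite could look something like this sketch (the base image and file layout are assumptions, not the project’s actual Dockerfile):

```dockerfile
# A minimal sketch of containerising a Node test suite.
FROM node:10-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm install

# Copy in the test code itself.
COPY . .

# The naive approach: run the tests once and exit.
# (The rest of this post covers why this command ends up changing.)
CMD ["npm", "test"]
```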
The main problems to solve are instead:
- How will the tests be run in different deployments without changing the code?
- How will the tests be run?
- How will the logs from the test run be collected?
- How will the test results be made available?
- How will different versions of the tests be managed?
How will the tests be run in different deployments?
I created a config file which would pick up the values that change between deployments (such as URLs, port numbers, etc.) from environment variables, defined either in a Docker Compose file or using envconsul.
I wrote the following function to grab these, while allowing a safe default to be set:
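Something along these lines does the job (the helper name and the specific variables here are illustrative rather than the exact code from the project):

```javascript
// config.js — read deployment-specific values from environment variables,
// falling back to a safe default when a variable isn't set.
function getEnv(name, defaultValue) {
  const value = process.env[name];
  return value !== undefined && value !== '' ? value : defaultValue;
}

module.exports = {
  kafkaHost: getEnv('KAFKA_HOST', 'localhost:9092'),
  riakHost: getEnv('RIAK_HOST', 'localhost:8098'),
  serverPort: parseInt(getEnv('SERVER_PORT', '8080'), 10),
};
```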
How will the tests be run?
The deployments in the infrastructure I’m dealing with are handled with Mesos, which will keep a certain number of instances running at all times. This meant that having the container’s command just be npm test would result in Mesos starting another instance as soon as the tests finished.
In order both to work around the Mesos behaviour (I’ve been told there is a means to bypass it) and to allow the tests to be run at any time via a REST call, I decided to wrap the tests in an HTTP server, built with restify.js, that would run the tests on receiving a POST request.
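A stripped-down version of that server might look like the following sketch; the controller module and exact route names are assumptions based on the endpoints mentioned later in this post:

```javascript
// server.js — a small restify server that kicks off a test run on POST.
const restify = require('restify');
// Hypothetical controller module (sketched further down).
const { startTestRun, getTestRun, getAllTestRuns } = require('./controllers/testRuns');

const server = restify.createServer({ name: 'e2e-test-runner' });
server.use(restify.plugins.bodyParser());

// Start a test run and return its ID straight away, avoiding long-lived requests.
server.post('/test-runs', (req, res, next) => {
  const id = startTestRun();
  res.send(202, { id });
  return next();
});

// Fetch the results of a single run.
server.get('/test-runs/:id', async (req, res, next) => {
  const run = await getTestRun(req.params.id);
  res.send(run ? 200 : 404, run || { error: 'test run not found' });
  return next();
});

// Fetch every recorded run.
server.get('/test-runs', async (req, res, next) => {
  res.send(await getAllTestRuns());
  return next();
});

server.listen(process.env.SERVER_PORT || 8080, () => {
  console.log('%s listening at %s', server.name, server.url);
});
```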
In order to run the tests I first looked at using Jest as a module, importing the runCLI function from the jest-cli module into my server.
This worked well to start with, as I could use async/await to wait for the tests to run and then send the test results back as a JSON object. However, this didn’t scale up well, as longer-running tests resulted in a connection timeout.
Another issue with using Jest as a module is that you don’t get all the configuration options, such as runInBand, forceExit and detectOpenHandles, which I needed due to having to use long timeouts on one test.
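In rough terms, the in-process approach looks like this (a sketch rather than the project’s actual code):

```javascript
// A sketch of running Jest in-process via runCLI and awaiting the results.
const { runCLI } = require('jest-cli');

async function runTests() {
  // runCLI takes a CLI-style argv object and a list of project roots,
  // and resolves once every test has finished.
  const { results } = await runCLI({ _: [], $0: 'jest' }, [process.cwd()]);

  return {
    success: results.success,
    passed: results.numPassedTests,
    failed: results.numFailedTests,
  };
}

module.exports = { runTests };
```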
In the end I decided to just run the tests as a subprocess. This has the benefit of being able to run the exact same command used locally (such as npm test) and allows the stdout and stderr to be captured if need be.
Here’s a snippet of the controller code:
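In spirit it’s something like this sketch, which spawns the same npm test command as a child process and records the outcome (the in-memory store here stands in for the database described below):

```javascript
// controllers/testRuns.js — run the suite as a subprocess and record the outcome.
const { spawn } = require('child_process');
const crypto = require('crypto');

// In-memory store as a stand-in; the real thing writes results to a database.
const runs = {};

function startTestRun() {
  const id = crypto.randomBytes(8).toString('hex');
  runs[id] = { id, status: 'running', startedAt: new Date().toISOString() };

  // Run exactly the same command used locally.
  const child = spawn('npm', ['test'], { env: process.env });

  // Capture stdout/stderr in case they're needed for debugging or logging.
  let output = '';
  child.stdout.on('data', (chunk) => { output += chunk; });
  child.stderr.on('data', (chunk) => { output += chunk; });

  child.on('close', (code) => {
    runs[id].status = code === 0 ? 'passed' : 'failed';
    runs[id].exitCode = code;
    runs[id].output = output;
    runs[id].finishedAt = new Date().toISOString();
  });

  return id;
}

async function getTestRun(id) {
  return runs[id];
}

async function getAllTestRuns() {
  return Object.values(runs);
}

module.exports = { startTestRun, getTestRun, getAllTestRuns };
```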
How will the logs be collected?
We’re using Splunk in our infrastructure, so unfortunately this wasn’t a decision I had much involvement in; I just added the relevant code where it needed to go and was able to see logs pop up.
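For illustration only, sending a log line to Splunk’s HTTP Event Collector from Node can be done with the splunk-logging package (an assumption on my part, not necessarily what our infrastructure used):

```javascript
// A sketch of logging to Splunk's HTTP Event Collector from Node.
const SplunkLogger = require('splunk-logging').Logger;

const logger = new SplunkLogger({
  token: process.env.SPLUNK_TOKEN, // HEC token for the index
  url: process.env.SPLUNK_URL,     // e.g. https://splunk.example.com:8088
});

// Each payload's message ends up as an event in the configured index.
logger.send({ message: { line: 'Test run finished', status: 'passed' } });
```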
If you’re using Splunk, here are a few searches I found useful:
index="name of index" | table line
— This will return just the log lines so you can see (albeit in reverse) the logs as they would appear in the console
index="name of index" *Tests:*
— This will get you the pass / fail stats for the test run
How will the test results be made available?
As I settled on using an HTTP server to run the tests, I decided to add a controller that would read the test results from a database (stored as part of the test run).
This means that no matter how many times the container is restarted or redeployed, or how many times the tests are run, every test run is recorded and can be retrieved.
How will different versions of the tests be managed?
One of the great things about Docker is that you’re building an artefact, so versioning (as long as you don’t overwrite versions) is built into the process.
As the development team were using semantic versioning, I did too, and I created a table in our wiki showing the container versions for the different releases we were working towards.
I applied this principle to my codebase too: I published my packages to a local npm registry, then pinned a particular package version in each version of the Docker container, so I could trace which Docker image ran which version of the test suite (as shown below).
When we needed to check compatibility between newer versions of the different components (such as moving from Kafka 0.11 to 2.1), I was able to use these versions to recreate the exact same setup and verify whether the outcomes changed.
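In practice the pinning can be as simple as installing an exact, published version of the test package when the image is built; the package name and registry URL below are made up for illustration:

```dockerfile
# Install an exact version of the test suite from the local npm registry so the
# image tag can always be traced back to the test code it contains.
RUN npm install --registry http://npm-registry.internal:4873 e2e-tests@1.4.2
```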
Running the tests and getting results
From code to results the process is:
- Publish the tests as an npm package
- Build a Docker image with that version of the package in it (this is part of an automated build in our infrastructure)
- Deploy the Docker image into the development deployment (this is also fairly automated in our infrastructure)
- SSH onto the jump host for the development deployment and issue a POST to the /test-runs endpoint via cURL, which returns a test-run ID
- Poll /test-runs/[id of test run] for the results using a bash until loop (see the sketch below)
- Visiting /test-runs returns all test run data
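From the jump host, those last steps look roughly like this; the hostname, port and response parsing are assumptions tied to the sketches above:

```bash
BASE_URL="http://test-runner:8080"

# Kick off a test run and capture the returned ID.
RUN_ID=$(curl -s -X POST "$BASE_URL/test-runs" | grep -o '"id":"[^"]*"' | cut -d '"' -f 4)

# Poll until the run is no longer reported as running.
until ! curl -s "$BASE_URL/test-runs/$RUN_ID" | grep -q '"status":"running"'; do
  sleep 5
done

# Fetch the final results.
curl -s "$BASE_URL/test-runs/$RUN_ID"
```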
Summary
There are a number of benefits to containerising the tests:
- It helps to provide a test artefact that exists alongside the code base
- It’s easier to test against different environmental configurations both locally and in different deployment infrastructure
- The test suite can be triggered as a post deployment step to provide CI within locked down environments
I’ve created a demo repo on my GitHub that contains some of the tools and techniques I used, but on a smaller scale. Feel free to clone it and play around with it, and leave a comment if you found it useful!