Colin Wren

Running Tests in restricted environments using Jest, Restify.js and Docker

Testing, JavaScript, DevOps, Software Development · 4 min read

At work recently I found myself with an interesting problem.

I’d been using a local Docker stack to write end-to-end tests for a bunch of services the development team had been building which contained multiple Kafka queues and Riak buckets.

This worked fine locally but when it came to moving everything to the production-like environment my tests were unable to connect to Kafka or Riak as these were not exposed outside of the private network set up within the deployed stack.

In order to get my tests running I needed to deploy them into the same infrastructure where they’d have access to everything but I still needed a means to kick off test runs and view the results outside of the infrastructure.

Photo by chuttersnap on Unsplash

Containerising the tests

Luckily, creating a Docker container is quick and easy, as the Dockerfile syntax is simple yet powerful.
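As a rough illustration, a minimal Dockerfile for a test suite wrapped in a small HTTP server could look something like this (the base image, file layout and port are assumptions, not the exact setup I used):

FROM node:10-alpine

WORKDIR /usr/src/tests

# Install the test suite's dependencies, then copy in the tests and server
COPY package.json package-lock.json ./
RUN npm install
COPY . .

# Port the HTTP server that wraps the tests listens on
EXPOSE 8080

# Start the server rather than running the tests directly
CMD ["node", "server.js"]
A minimal Dockerfile sketch for the containerised test suite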

Instead, the main problems to solve are:

  • How will the tests be run in different deployments without changing the code?
  • How will the tests be run?
  • How will the logs from the test run be collected?
  • How will the test results be made available?
  • How will different versions of the tests be managed?

How will the tests be run in different deployments?

I created a config file which would pick up the values that change between deployments (such as URLs, port numbers, etc.) from environment variables, defined either in a Docker Compose file or via envconsul.

I wrote the following function to grab these, while allowing a safe default to be set:

const getEnvVar = (key, safeDefault) => {
  if (Object.prototype.hasOwnProperty.call(process.env, key) && typeof process.env[key] !== 'undefined') {
    return process.env[key];
  }
  return safeDefault;
};

// Use it like
const config = {
  serverUrl: getEnvVar('SERVER_URL', 'http://localhost:8080'),
};
Getting environment variables or using a safe default

How will the tests be run?

The deployments in the infrastructure I’m dealing with are handled by Mesos, which keeps a certain number of instances running at all times. This meant that having the container’s command just be npm test would result in Mesos starting another instance as soon as the tests finished.

To work around the Mesos behaviour (I’ve been told there is a means to bypass it) and to allow the tests to be run at any time via a REST call, I decided to wrap the tests in an HTTP server, built with Restify, that would run the tests on receiving a POST request.

To run the tests I first looked at using Jest as a module, importing the runCLI function from the jest-cli package into my server.

This worked well to start with, as I could use async/await to wait for the tests to run and then send the results back as a JSON object. It didn’t scale up well, however, as longer-running tests resulted in connection timeouts.

Another issue with using Jest as a module is that you don’t get all the configuration options, such as runInBand, forceExit and detectOpenHandles, which I needed because one test required a long timeout.
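For reference, the Jest-as-a-module approach looked roughly like the sketch below; the exact shape of the argv object that runCLI expects varies between Jest versions, so treat this as illustrative rather than a drop-in implementation:

const { runCLI } = require('jest-cli');

const runTests = async () => {
  // runCLI takes a Jest argv-style object and a list of project roots,
  // and resolves with the aggregated results once the run has finished
  const { results } = await runCLI({ _: [], $0: 'jest' }, [process.cwd()]);
  return results;
};
Sketch of running Jest as a module via runCLI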

In the end I decided to just run the tests as a subprocess. This has the benefit of running the exact same command used locally (such as npm test) and allows the stdout and stderr to be used if need be.

Here’s a snippet of the controller code:

const { exec } = require('child_process');

server.post('/test-runs', async (req, res, next) => {
  try {
    console.log('Kicking off test run');
    exec('npm test', (error, stdout, stderr) => {
      if (error) {
        console.error(`Error running tests: ${error.message}`);
      }
      console.log(stdout);
      console.error(stderr);
    });
    res.send(200);
  } catch (err) {
    res.send(500, { error: err.message });
  }
  next();
});
This controller will call npm test in a subprocess on run
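For completeness, the surrounding server is just a standard Restify bootstrap; a minimal version (the port and plugin choices here are illustrative) looks something like this:

const restify = require('restify');

const server = restify.createServer({ name: 'test-runner' });
server.use(restify.plugins.bodyParser());

// The /test-runs controller above is registered against this server instance
server.listen(8080, () => {
  console.log(`${server.name} listening on port 8080`);
});
Minimal Restify server bootstrap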

How will the logs be collected?

We’re using Splunk in our infrastructure, so unfortunately this wasn’t a decision I had much involvement in; I just added the relevant code where it needed to go and was able to see the logs pop up.

If you’re using Splunk here’s a few searches I found useful:

index="name of index" | table line — This will return just the log lines so you can see (albeit in reverse) the logs as they would appear in the console

index="name of index" *Tests:* — This will get you the pass / fail stats for the test run

How will the test results be made available?

As I had settled on using an HTTP server to run the tests, I decided to add a controller that would read the test results from a database (where they are stored as part of the test run).

This means that no matter how many times the container is restarted or redeployed, or how many times the tests are run, every test run is recorded and can be retrieved.
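The write side is handled by a custom Jest reporter (shown below); a sketch of the read side, assuming the results live in the same Couchbase bucket the reporter uses (connection details are copied from the reporter example purely for illustration), could look like this:

const couchbase = require('couchbase');

// Same connection details as the reporter below, for illustration only
const cluster = new couchbase.Cluster('couchbase://127.0.0.1');
cluster.authenticate('genericUser', 'dontStealMe');
const bucket = cluster.openBucket('default');

// Registered on the same Restify server instance created earlier;
// returns the stored results for a single test run
server.get('/test-runs/:id', (req, res, next) => {
  bucket.get(req.params.id, (err, result) => {
    if (err) {
      res.send(404, { error: `No results found for test run ${req.params.id}` });
    } else {
      res.send(200, result.value);
    }
    next();
  });
});
Sketch of a controller that reads test results back out of Couchbase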

var couchbase = require('couchbase');

class CouchbaseReporter {
  constructor(globalConfig, options) {
    this._globalConfig = globalConfig;
    this._options = options;
    const cluster = new couchbase.Cluster('couchbase://127.0.0.1');
    cluster.authenticate('genericUser', 'dontStealMe');
    this.bucket = cluster.openBucket('default');
  }

  onRunStart({ numTotalTestSuites }) {
    console.log(`[couchbaseReporter] Found ${numTotalTestSuites} test suites.`);
  }

  onRunComplete(test, results) {
    const testRunId = process.env.TEST_RUN_ID;
    if (typeof testRunId === 'undefined') {
      throw new Error('No test run ID passed');
    }
    const bucket = this.bucket;
    bucket.upsert(testRunId, results, function (err, result) {
      if (err) throw err;
      bucket.disconnect();
    });
  }
}

module.exports = CouchbaseReporter;
Example reporter that will send test results to a Couchbase cluster
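For Jest to pick the reporter up, it has to be registered in the Jest configuration; assuming the class above lives at the project root as couchbaseReporter.js (the path is an assumption), the registration looks like this:

// jest.config.js
module.exports = {
  reporters: [
    'default',
    '<rootDir>/couchbaseReporter.js',
  ],
};
Registering the custom reporter in the Jest configuration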

How will different versions of the tests be managed?

One of the great things about Docker is that you’re building an artefact so versioning (as long as you don’t overwrite versions) is built into the process.

As the development team were using semantic versioning I did too, and I created a table in our wiki that shows the different versions of the containers for the different releases we are working towards.

I applied this principle to my codebase too, publishing my packages to a local npm registry and then pinning a particular package version in each version of the Docker container, so I could trace which Docker image ran which version of the test suite.

When we needed to check compatibility between newer versions of the different components (such as moving from Kafka 0.11 to 2.1) I was able to use these versions to recreate the exact same setup and verify if the outcomes changed.

Photo by rawpixel on Unsplash

Running the tests and getting results

From code to results the process is:

  • Publish the tests as an npm package
  • Build a Docker image with that version of the package in it (this is part of an automated build in our infrastructure)
  • Deploy the Docker image into the development deployment (this is also fairly automated in our infrastructure)
  • SSH onto the jump host for the development deployment and issue a POST to the /test-runs endpoint via cURL, which returns a test-run ID
  • Poll /test-runs/[id of test run] for the results using a bash until loop
#!/usr/bin/env bash
until $(curl --output /dev/null --silent --head --fail http://$HOST_IP:8080/test-runs/$TEST_RUN_ID); do
  echo "Still waiting for test results to become available";
  sleep 5;
done;
Bash script to poll URL for results
  • Visiting /test-runs returns all test run data

Summary

There are a number of benefits to containerising the tests:

  • It helps to provide a test artefact that exists alongside the codebase
  • It’s easier to test against different environmental configurations both locally and in different deployment infrastructure
  • The test suite can be triggered as a post deployment step to provide CI within locked down environments

I’ve created a demo repo on my GitHub that contains some of the tools and techniques I used, but on a smaller scale. Feel free to clone it, play around with it and leave a comment if you found it useful!