As developers we've all heard people sing the praises of Test Driven Development (TDD), but it's not so easy to just do it. Depending on the organisational setup, and the size and experience level of the team, there are a few barriers to overcome before testing is easy enough to drive the development of the product.
Here are my thoughts on what a team needs to enable Test Driven Development so they can use it in their everyday development.
Garbage in, Garbage out
Just like a computer program, if you give a team malformed input then you shouldn't expect the end result to do everything you were thinking of. Most agile methodologies/frameworks have some notion of a 'development ready' stage that helps bring this into the general development process, but it's still hard to get right.
This is why it's important for all members of the team (not just development) to attend the introspective aspects of these frameworks so the criteria for development ready can be improved.
If you and your team depend on another party such as a Product Owner to specify the work that you do it's important to understand what they consider to be a good specification.
The specification is sometimes seen as a way of recording 'do this thing', but really it's a way to record the conversations between the business and the development team that led to the feature meeting the development ready criteria.
I've been part of a team where the person writing the specification was also the person who defined what was considered development ready, and ultimately their specifications were a few sentences describing the work to be done. On looking into how the specifications affected the work we did, we realised the following:
- The development team would spend ages working with the Product Owner to understand how the feature they wanted would work in certain situations
- The product owner would add or change aspects of the feature on the fly
- Due to a lack of clear vision of the work, tests were either not written or only covered the happiest of happy paths, with no sad or alternate ones
To remedy this we implemented a set of criteria to ensure that all specifications met a holistic sense of 'development ready', which included:
- A description of the work
- UX/UI mockups and/or user stories to help visualise the way the feature would be used
- A brief description of how the work would be carried out to help encourage discussion within the team before accepting the work
- A set of functional acceptance criteria that covered happy, sad and alternate paths (these really made unit testing easier)
- A set of non-functional acceptance criteria if applicable
- An impact assessment from the Product Owner and the development team into how the changes might affect any existing functionality and the code base
Most importantly we made time to ensure that these criteria were met which meant that we had a stronger sense of what needed to be done and ultimately the tests to write to ensure it was working as intended.
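To make the value of those functional acceptance criteria concrete, here's a minimal sketch of how happy, sad and alternate paths might translate directly into unit tests. The `apply_discount` feature and all its names are hypothetical, used purely for illustration:

```python
import unittest

# Hypothetical feature under test: apply a percentage discount to an order total.
def apply_discount(total, percent):
    """Return the total after applying a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("discount must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_apply_discount_happy_path(self):
        """Happy path: a valid discount reduces the total."""
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_apply_discount_sad_path(self):
        """Sad path: an out-of-range discount is rejected."""
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

    def test_apply_discount_alternate_path(self):
        """Alternate path: a zero discount leaves the total unchanged."""
        self.assertEqual(apply_discount(100.0, 0), 100.0)
```

Each acceptance criterion maps onto one named test case, which is what makes a well-specified feature so much easier to unit test.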
Bring Quality Assurance into the specification
Quality Assurance plays a key role throughout the development process. Unfortunately it's often seen as a post-development activity to try and iron out any bugs, but ultimately it should be there from the start making sure those bugs don't get introduced in the first place.
Test Driven Development and Behaviour Driven Development are essentially this: a way of embedding Quality Assurance into the development process. In validating our code against a set of tests or scenarios, we're making sure that we're not introducing any odd behaviour, causing regressions or simply misinterpreting the specification.
This validation becomes much easier and more viable when the Quality Assurance team are allowed to add and review acceptance criteria as they will be able to help the development team understand the different ways to test the feature they are building.
Normally the Quality Assurance team will also have work of their own to accommodate the changes being made, such as updating test suites or manual testing playbooks, so it's important that, just like the development team, they are given the time to understand the work being done.
Think about the impact the changes will have
One of the downsides of Test Driven Development (and to a much lesser extent Behaviour Driven Development) is the maintenance of massive test suites. Test suites deserve the same amount of effort as the actual application code and if you don't take the time to refactor often then you end up with a huge, fragile beast that makes TDD a drag.
The Red, Green, Refactor loop works wonders here, as it provides a pattern for keeping your tests up to date: you update your tests first (causing them to fail), then you change your code to meet the tests (and it should be in that order, otherwise it's Development Driven Testing!).
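A single turn of that loop might look like this sketch, where the `slugify` function is a made-up example: the test is written first and fails (red), the smallest implementation makes it pass (green), and the passing suite then acts as a safety net for refactoring.

```python
import unittest

# Red: this test was written first, and failed because slugify didn't exist yet.
class TestSlugify(unittest.TestCase):
    def test_slugify_joins_words_with_hyphens(self):
        self.assertEqual(slugify("Enabling Test Driven Development"),
                         "enabling-test-driven-development")

# Green: the smallest implementation that makes the test pass.
def slugify(title):
    """Lower-case a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Refactor: with the test green, reshape slugify (or the tests themselves)
# and re-run the suite to confirm the behaviour hasn't changed.
```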
One way to avoid this issue is to ensure that your code is in cohesive modules with a clear dependency structure. This allows the integration tests to have fixtures that build on the installed fixtures from lower-level modules and helps keep the tests focused around the module and its dependencies. This dependency chart can also be referenced during impact assessments when specifying new features or changes.
If you understand what will be impacted by changes to code that other modules depend on, you can estimate the work better. Otherwise, the extent of what depends on that code will only be revealed when you change it: when you update the initial tests in the module the code sits in, you won't see any issues further up the dependency chain.
This then leads to more time being sunk into fixing those higher-level modules. However, if you think about what will need to be changed at the start, you can change the tests in those modules too and get a view of the impact of that code change much faster.
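As a sketch of fixtures mirroring a module dependency chain, here a higher-level billing fixture is built on top of a lower-level accounts fixture. Both modules and all the names are hypothetical:

```python
import unittest

# Hypothetical layered modules: "accounts" (low level) and "billing", which
# depends on it. The test fixtures mirror that dependency chain.

def account_fixture():
    """Low-level fixture: a minimal account record."""
    return {"id": 1, "name": "Ada"}

def invoice_fixture(account):
    """Higher-level fixture: built on top of an account fixture, the same
    way the billing module builds on the accounts module."""
    return {"account_id": account["id"], "amount": 50}

class TestBillingIntegration(unittest.TestCase):
    def setUp(self):
        # Build the lower-level fixture first, then layer billing on top.
        self.account = account_fixture()
        self.invoice = invoice_fixture(self.account)

    def test_invoice_references_existing_account(self):
        self.assertEqual(self.invoice["account_id"], self.account["id"])
```

Because the billing fixtures consume the accounts fixtures rather than duplicating them, a change to the accounts module shows up in the billing tests too, which is exactly the early warning you want.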
Practicing Test Driven Development
Test Driven Development, like any practice, requires discipline: you only get out of it what you put into it, so it's important to stay focused if you want to get the most out of it.
If you're working in a team, especially a big team or one that hasn't yet established its culture, you may find the adoption of TDD is quite slow. This isn't the fault of the team but more a side effect of the way that teams work. For TDD to be adopted properly, the team needs to see the benefits themselves and want to adopt it; all you have to do then is fan those flames and unblock anything that is getting in the way.
That being said, there are sometimes people within a team who, for whatever reason, do not like the idea of TDD; I've worked with someone like this in the past. The reason they didn't see the benefit was that it was not just a new technique they were learning but a change in their way of thinking: instead of just working on what they were told, they had to think about how to validate what was being built.
The way this was resolved was that everyone else adopted not only TDD but a more scientific approach to programming, and the more people spoke about this approach, the more accustomed to it the sceptical colleague became, until ultimately they adopted it too.
Retrospectives and other introspective activities are good for helping to ensure the culture of the team is growing in the way you want. While everyone is talking about the way they conducted the work it's easier for them to discuss the approach to TDD and help people who might need further persuasion see the benefits.
Depending on how your team works you may or may not have experienced this: it's the sprint initiation meeting, the Product Owner states the work they want to put into the sprint, the team estimates each item put into the sprint backlog, and it's go time.
One problem with this is that no-one from the development team has asked any questions. Sure, most of the tricky ones would have been brought up in a backlog grooming session, but it's important to have a discussion about what is to be done so it's fresh in our minds when we do it.
In the same way that user stories are really just placeholders to help people remember the conversations they had around a particular feature, tests too are a way of helping people remember the way the feature should act and why.
It's very important that tests are organised and labelled properly, not only so you know where to find things but also so others can if, for whatever reason, they inherit your code.
One system I've found that helps is:
- Create a file for each test suite
- Generally you'll want one test suite per class or per class method (depending on size and functional boundaries of class/methods)
- Each test suite should tell a story of what the code wants to do so if there are logical breaks in the story then use this to break up the test suite
- File names should generally be along the lines of test_[class]_[scenario]
- Test case names should be along the lines of test_[method]_[action]
- Use the docstrings to give a detailed explanation of what the test is testing as well as links to any additional helpful info such as dependent modules/code etc
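Putting that naming system together, a hypothetical file named test_order_cancellation.py (following test_[class]_[scenario]) might look like this; the `Order` class is a stand-in included only so the example runs:

```python
# File: test_order_cancellation.py  (test_[class]_[scenario])
import unittest

class Order:
    """Hypothetical class under test, included so the example is runnable."""
    def __init__(self):
        self.status = "open"

    def cancel(self):
        self.status = "cancelled"

class TestOrderCancellation(unittest.TestCase):
    """Tells the story of what happens when an order is cancelled."""

    def test_cancel_sets_status(self):
        """Cancelling an open order moves it to the 'cancelled' status.

        See also: any (hypothetical) dependent modules that consume this
        status change.
        """
        order = Order()
        order.cancel()
        self.assertEqual(order.status, "cancelled")
```

The file name, suite name, case name and docstring each answer a different question for whoever inherits the code: which class, which scenario, which method, and why.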
One other key thing to do to increase the communication around testing is to make sure you include your tests in the code review. It's easy to focus only on the code being run by the app but that's not the only code that was written and ultimately the tests are the key to another developer getting a fuller sense of what it's meant to do.
Lastly if you have Quality Assurance colleagues then they will prove to be a wealth of testing knowledge as they will be able to provide insight into writing better tests, integrating your tests into their work and also help with links to material around writing better tests.
As this article is more about what you can do to enable Test Driven Development I'll try not to go too deep into this and instead save that for its own blog post.
Writing tests isn't the easiest thing to get your head around, especially if you're a self-taught developer like myself. I know I struggled to understand why my beautiful test was scoffed at and the words 'non-deterministic' came out of a colleague's mouth.
Much like anything there are good, bad and ugly (well, alternative) ways of testing, and most come down to personal experience and preference, but here are some general tips:
- Make sure every test case is testing one thing
- Learn the difference between unit and integration tests and why they matter (Martin Fowler has a good article on this)
- Make sure that every test case can be run independently and is only dependent on the things being triggered in the setup of the test suite
- If you need to do sequential testing you can inherit from another test suite, as this will run the tests in both suites (this is good for avoiding duplicate tests and code too)
- Even if you haven't implemented the functionality yet still write the tests as it will help you visualise how to implement the feature
- Run the full test suite before pushing and merging (Travis CI is good for this if you use GitHub)
- Learn how to use the different types of test double (Martin Fowler did a good write-up on this)
- Have a set of scripts to generate fixtures for modules you're building as this helps for integration testing and also load testing
- While it seems like the easiest way to test most of your app Selenium isn't always the best tool to start with
- Build your tests so they can be reused in other ways such as smoke tests during deployment or broken out into libraries for load testing / selenium testing
- Don't get bogged down in things like BDD/Gherkin if you don't have a good level of unit/integration testing
Test Driven Development is a good practice to pick up and has genuinely saved me hours of painful debugging, but it's only really effective when you treat your tests like the rest of your code and make an effort to maintain them. Here's the TL;DR of this article:
- Work with the Product Owner to ensure that the specifications coming from them have all the things you need to make TDD a viable activity during development
- Work with Quality Assurance to learn from their years of experience in testing so you can write better tests
- Include test code in your code reviews and treat test code on the same level as application code
- Don't expect to adopt TDD overnight, sometimes it takes time to adopt and some team members may be apprehensive due to feeling anxious about testing
- Ultimately adoption will happen because the team want to adopt it, not because they are told to, so enable them to adopt it and evolve their use of TDD