Getting started with TDD
Testing your code is a great way of avoiding bugs, but it's tempting to put off writing tests until you have something which works. At that point, it may be very difficult to test your code, especially to write unit tests, because you may not have written the code with testing in mind. One way to avoid this situation is to use Test Driven Development (TDD).
What is TDD?
TDD is a way of developing code by writing tests before the code to be tested. This way, you can be sure that your code can be tested. Also, you'll know that your tests are valid because they'll start off failing (a test which never fails is useless!).
How do I start doing TDD?
The basic process is easy:

1. Write a failing test
2. Make the test pass, together with any other tests
3. Refactor to make the code clean (described below)
4. Repeat as necessary
A failing test might not even compile if it's calling some code which hasn't been written yet. That's ok.
This process is sometimes called “red, green, refactor” after the colours that some tools use to flag failing and passing tests.
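The red/green/refactor cycle can be sketched in a few lines. This is a minimal pytest-style example using a made-up `slugify` function as the code under development:

```python
# Step 1 (red): write the test first. Run at this point, before
# `slugify` exists, the test fails with a NameError -- exactly the
# failure we want to see.
def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3 (refactor): with the test green, tidy the code freely,
# re-running the test after each change.
```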
A unit test tests some code in isolation (from other code, the internet, a database, etc.). Code composed of small modules is easier to test, because each module can be tested in isolation. It's then possible to drive unusual and error paths as well as the happy paths through the code.
The trick is deciding the size of “unit” to test, e.g. it could be a single module or a group of modules. If the unit size is larger, there's more scope for refactoring without needing to change the tests. If it's smaller, then moving code between modules and changing interfaces tends to require test rework.
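One common way to get this isolation is to inject dependencies so a test can substitute a stub for something slow or unreliable, like a network call. Here's a small sketch; `PriceAlert` and `fetch_price` are hypothetical names for illustration:

```python
class PriceAlert:
    def __init__(self, fetch_price):
        # The price source is injected, so a test can pass a stub
        # instead of making a real HTTP call.
        self.fetch_price = fetch_price

    def should_alert(self, symbol, threshold):
        return self.fetch_price(symbol) > threshold

# Unit tests: no network involved -- the dependency is a stub,
# which also makes it easy to drive both branches of the logic.
def test_alert_fires_above_threshold():
    alert = PriceAlert(fetch_price=lambda symbol: 150.0)
    assert alert.should_alert("ACME", threshold=100.0)

def test_no_alert_below_threshold():
    alert = PriceAlert(fetch_price=lambda symbol: 50.0)
    assert not alert.should_alert("ACME", threshold=100.0)
```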
The term refactoring is used in two different ways. The first kind of refactoring is to start with a piece of code and all its tests passing and then to make essentially arbitrary changes to the code, often in small steps, ensuring that the tests continue to pass.
So, starting with all the tests passing, the process is:

1. Change the code
2. Run the tests again
3. If some fail, fix the code or undo the change until all the tests pass
4. Repeat as necessary
The second kind of refactoring is to make specific changes to the code which are known to preserve the behaviour. Sometimes this can be assisted by IDEs or editors with automatic “refactorings”, such as “extract method”, “rename variable”, and so forth. After doing this kind of refactoring, it's still worth checking all the tests still pass.
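As a small sketch of a behaviour-preserving refactoring, here's an "extract method" applied to a made-up `format_name` function. The point is that the same assertions pass before and after, so no test changes are needed:

```python
# Before: one function doing validation and formatting inline.
def format_name(first, last):
    if not first or not last:
        raise ValueError("both names required")
    return f"{last.upper()}, {first.capitalize()}"

# After an "extract method" refactoring: the validation is pulled
# out into a helper, but the observable behaviour is unchanged.
def _validate(first, last):
    if not first or not last:
        raise ValueError("both names required")

def format_name_refactored(first, last):
    _validate(first, last)
    return f"{last.upper()}, {first.capitalize()}"

# The same test assertions hold for both versions.
assert format_name("ada", "lovelace") == "LOVELACE, Ada"
assert format_name_refactored("ada", "lovelace") == "LOVELACE, Ada"
```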
Strictly speaking, if you modify a test, you should check it still catches the failures it was originally written to catch. That's a real pain, because you would have to temporarily break the code under test to provoke each modified test to fail. Without this check, though, it's possible to mess up a test change and end up with a test which passes when it shouldn't. An extreme example would be deleting the body of a test: it would obviously then pass, but be useless. The danger is being tempted to hack the test code around on the assumption that a passing test run is some sort of safety net when, in fact, it isn't.
The best way to avoid modifying tests excessively is by testing larger units. If you do need to modify tests, then doing a series of correctness-preserving refactorings reduces the risk of invalidating a test. But the rule of thumb is to be extra careful when modifying tests.
Do I need to follow TDD strictly?
It's quite a good discipline to follow the strict TDD approach for a while to get the hang of it. But after that, I think it's fine to be a bit more relaxed.
For instance, if you're trying to get a piece of code working, it may be more appropriate to code up a prototype as a “proof of concept” and then go back and develop some code using TDD now that you know roughly how the code will work. This approach is sometimes called “fire, aim, ready” (reversing the well-known phrase “ready, aim, fire”) meaning get something working first, then understand the problem better, then start development proper.
I have also used code coverage tools to make sure most, if not all, of my code is tested. You don't need to hit 100% coverage, but something like 80-90% is clearly a better sign of a good test suite than 20-30%.
Does TDD guarantee my code will be well designed?
No! Don't be caught by the trap of thinking TDD will necessarily give you a great design. It's a good way of avoiding untestable code, but it doesn't guarantee clean, understandable interfaces etc. Using TDD to get a good design is, as Rich Hickey once described, like driving a car along a road and bashing against the crash barriers (“guard rails” if you're American) as a way of getting to where you want to go.
Is TDD worth it on small, personal projects?
I've worked in teams that used TDD, one of them pretty strictly, but on personal projects I tend to unit test the code I'm most likely to get wrong and the code with the most conditional logic. (I must confess I don't tend to invest in integration tests for personal projects, as it's usually too much bother.)
Hacker Noon's Introduction to Test Driven Development gives a bit more detail on some of the above and links to some practical examples.
If you want a book on TDD, Growing Object-Oriented Software Guided by Tests is a great introduction and reference.
Rich Hickey's excellent talk Simple Made Easy mentions “guard rail programming” in the segment at 16:08 (see the “Show notes”).