Test-driven development and practicality

I had a conversation at one point with a startup founder who said he did test-driven development 100% of the time. No code was an exception: everything was to be tested first.

I tend not to do TDD, maybe because I don't think in the way that is most conducive to it. I like to [build things](/i-learn-by-doing/) more than I like to reason about them. This is a weak argument in the eyes of TDD proponents; they would counter that my way of developing software must be wrong.

Recently, however, I have been able to put some meat on the bones of my argument. Consider a "hello world" program: a program whose only purpose is to print the string "hello world". Does it make sense to create a test for this program before writing the program itself?

If the program is written by a computer science student, run once and never used again, I would argue that the utility of a test for this program is nil. Running the program either produces correct output or it does not, and running it is simple. A test adds nothing to the value of the program.

Now, if the program is meant to work on, say, different architectures, and be used by next year's computer science students, then perhaps it does make sense to write a test to ensure that next year the program still works correctly.
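
For concreteness, here is a minimal sketch of both the program and such a test, in Python with pytest (the choice of language, test framework and file names is arbitrary; the argument is language-agnostic):

```python
# hello.py - the entire program under discussion
def main():
    print("hello world")

if __name__ == "__main__":
    main()
```

```python
# test_hello.py - a pytest-style test for hello.py
from hello import main

def test_prints_hello_world(capsys):
    main()
    # capsys captures stdout; print() appends a newline
    assert capsys.readouterr().out == "hello world\n"
```

The test is as long as the program and restates it almost verbatim. Its only plausible payoff is catching a future regression - which is precisely the next-year scenario.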

The ultimate purpose of writing tests is not to have tests or a certain test coverage. The purpose of tests is to have a better program. "Better" can mean any number of things - more stable, less buggy, developed faster. Regardless, tests are subordinate to the program they are testing.

Tests, like code, take time to write. That time is meant to pay off in the long term: bugs in the program become easier to find and fix, or are caught by developers while the code is being written rather than by users. But the law of diminishing returns applies: the more tests there are, the less benefit each additional test offers. At some point the benefits no longer offset the time spent writing the tests.

Consider authentication, for example: it is possible to write a test for each action/URL in a web application, checking that access to that action is permitted or denied for each of the possible user types. In an application with complex access control, this might make sense. In an application where all users are the same and a user must be logged in to do anything other than view the login page, checking each action is likely to be a waste of time.
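
In the complex case, such a matrix of checks is straightforward to enumerate. A sketch in the same pytest style - the roles, URLs, expected statuses and the client_for fixture are all hypothetical:

```python
import pytest

# Hypothetical access-control matrix: (role, URL, expected HTTP status).
CASES = [
    ("anonymous", "/login",     200),
    ("anonymous", "/dashboard", 302),  # redirected to the login page
    ("member",    "/dashboard", 200),
    ("member",    "/admin",     403),
    ("admin",     "/admin",     200),
]

@pytest.mark.parametrize("role,url,expected", CASES)
def test_access_control(client_for, role, url, expected):
    # client_for is an assumed fixture that returns a test client
    # already logged in as a user of the given role.
    response = client_for(role).get(url)
    assert response.status_code == expected
```

With many roles and many actions the matrix grows quickly, and it earns its keep only when the access rules are genuinely that complex.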

Then there is the question of what counts as sufficient test coverage. Given a mature application, adding an integration test for new functionality that is not yet written results in a failing test, and making it pass requires writing a lot of code. Should functional tests be written in the meantime? Unit tests?
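
One way to keep such a test around without breaking the build is to mark it as expected to fail until the feature exists. A sketch using pytest's xfail marker - the endpoint, the client fixture and the feature itself are hypothetical:

```python
import pytest

@pytest.mark.xfail(reason="CSV export not implemented yet", strict=True)
def test_export_report_as_csv(client):
    # Documents the intended behaviour of code that does not exist yet.
    # With strict=True the suite fails once this test starts passing,
    # which is the reminder to remove the marker.
    response = client.get("/reports/42/export.csv")
    assert response.status_code == 200
    assert response.headers["Content-Type"] == "text/csv"
```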

Suppose I could write bug-free programs. What would the benefit of tests be for such programs? Certainly tests would not be very useful for the purpose of assuring the program works as intended. Tests could still be useful as documentation, but then I could simply write documentation.

In practice, I cannot write bug-free programs. But there are code fragments that I am confident are correct. I could write tests for them, or I could write functionality that users or stakeholders would actually use. Users cannot do anything with a test suite; they can only use the program. My emphasis is on the value delivered to users, and sometimes tests add to that value whereas at other times they don't.

It is notable that TDD proponents typically use a framework that makes writing tests easy - like Ruby on Rails. What if instead they were tasked with, say, developing a site based on an extremely large and complicated PHP content management system that had no test coverage? They would have had to build their own test framework and write tests for the CMS before they could even start the project they were actually asked to do! Depending on the complexity of the project, the customer may well have been happier to check manually that "it works" than to wait months and spend heaps of money building test infrastructure.

I have a friend who is a big proponent of rapid iteration, even when such rapid development results in backwards incompatibility. He writes a lot of throwaway code whereas I write nearly none, and he tells me that I write too many tests and value stability too much. When we talk about this, I don't claim that throwaway code or rapid iteration is bad. I only claim that rapid iteration on frameworks and libraries used by long-lived projects creates pain for those projects, and that I tend to work on stable, long-lived projects where writing throwaway code would, over the lifetime of the project, be a waste of effort. If we both had to write a prototype for something, I would not be surprised if he built his quicker than I built mine while still meeting all of the requirements.