As I have pointed out in my recent columns, the world of agile has suddenly become very focused on test-driven development. Within that community, the tug of war between traditional code-then-test and TDD seems to have been won by the TDD practitioners, at least in terms of mind share.

I continue to be surprised by how often I hear people claim to use TDD and then specify that they mean “test first.” It’s hard to fathom how someone could be doing anything close to TDD without doing test-first, so the extra qualification suggests that there are divergent views within the TDD community about exactly which practices the initials stand for.

A recent comment on my blog inveighed against loose use of the term, insisting that it refer strictly to the core TDD practices: write a failing test, write the smallest amount of code that will enable the test to pass, refactor. Using other imagery, it could be stated as: lather, rinse, repeat. The problem with insisting on this narrow definition is that even dyed-in-the-wool TDD experts don’t follow this sequence of steps without variation.
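
To make that cycle concrete, here is a minimal sketch using JUnit, with the test written as a unit test in the usual way; the PriceCalculator class and its 10% discount rule are invented purely for illustration.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1 (red): a failing test that states the desired behavior.
public class PriceCalculatorTest {
    @Test
    public void appliesTenPercentDiscount() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.discountedTotal(100.0), 0.001);
    }
}

// Step 2 (green): the smallest amount of code that makes the test pass.
class PriceCalculator {
    double discountedTotal(double total) {
        return total * 0.9; // just enough to satisfy the test
    }
}

// Step 3 (refactor): with the test green, clean up names and remove
// duplication, rerunning the test after every change.
```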
 
For developers hoping to adopt TDD, there are surprisingly few good books on the topic. Kent Beck’s seminal work, “Test-Driven Development: By Example,” anchored the idea in the agile consciousness. And David Astels’ “Test-Driven Development: A Practical Guide” built elegantly on that foundation.

A recent title now joins these standout works. It’s the descriptively named “Growing Object-Oriented Software, Guided by Tests,” by Steve Freeman and Nat Pryce. It would be my starting point today if I were undertaking TDD. It not only explains the whole testing orientation, but it also shows how to choose tests wisely and use them to move forward according to plan, rather than organically.

It also places a strong emphasis on refactoring, with an eye to the kinds of refactoring typical of the TDD approach (a woefully under-discussed topic). Note that for teaching TDD to students new to programming, I would use Jeff Langr’s “Agile Java.”

As I have worked through these books over the years, I’ve been dogged by doubts. I think TDD has the basic concept right: writing a test first forces a discipline on the developer that improves code quality. If developers were disciplined enough to plan code thoroughly before writing it, TDD wouldn’t be necessary; but as a group we’re not, and so it is.

While TDD’s concept is valid, I think the implementation is faulty. The steps, as I outlined above, say to write a failing test. No one type of test is specified, but the unit test is what is universally understood and implemented. This, for me, is the error: unit tests are simply too small an increment.

Increasingly, I have been practicing a variation of TDD: writing a failing functional test and then writing just enough code to make it pass. This is an important distinction. Functional tests operate at a higher level and generally encompass more functionality. Writing good functional tests in the absence of code is an excellent way to capture requirements, both explicit and implicit.
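
As a rough sketch of the shape such a test takes, here is a JUnit example; the Store and OrderConfirmation classes, and the registration-and-ordering requirement they express, are hypothetical. The functional test is written first, against classes that do not yet exist, and the implementation below it is written only to make the test pass.

```java
import java.util.HashSet;
import java.util.Set;

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

// Written first: the test captures the requirement that a registered
// customer can place an order and receive a confirmation.
public class PlaceOrderTest {
    @Test
    public void registeredCustomerCanPlaceAnOrderAndGetConfirmation() {
        Store store = new Store();
        store.register("alice");

        OrderConfirmation confirmation = store.placeOrder("alice", "SKU-42", 2);

        assertTrue(confirmation.isAccepted());
        assertEquals(2, confirmation.quantity());
    }
}

// Written afterward: just enough code to make the functional test pass.
class Store {
    private final Set<String> customers = new HashSet<String>();

    void register(String customerId) {
        customers.add(customerId);
    }

    OrderConfirmation placeOrder(String customerId, String sku, int quantity) {
        return new OrderConfirmation(customers.contains(customerId), quantity);
    }
}

class OrderConfirmation {
    private final boolean accepted;
    private final int quantity;

    OrderConfirmation(boolean accepted, int quantity) {
        this.accepted = accepted;
        this.quantity = quantity;
    }

    boolean isAccepted() { return accepted; }
    int quantity()       { return quantity; }
}
```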

The tests use the same techniques as TDD, such as mocks in the test cycle and refactoring afterwards, but each increment of code is much larger (thereby reducing the constant refactoring). And the resulting code directly follows the shape of the requirements.
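
For the mocking side of that cycle, here is one way it might look using Mockito (any mocking library would serve); the PaymentGateway interface and the Checkout class are again hypothetical stand-ins for an external dependency and the code under test.

```java
import org.junit.Test;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.*;

public class CheckoutTest {
    // A hypothetical external collaborator we don't want the test to hit.
    interface PaymentGateway {
        boolean charge(String customerId, double amount);
    }

    // The code under test, written just far enough to satisfy the test.
    static class Checkout {
        private final PaymentGateway gateway;
        Checkout(PaymentGateway gateway) { this.gateway = gateway; }
        boolean complete(String customerId, double amount) {
            return gateway.charge(customerId, amount);
        }
    }

    @Test
    public void completingCheckoutChargesTheCustomerThroughTheGateway() {
        // Stand in for the real gateway so the test stays fast and deterministic.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("alice", 59.98)).thenReturn(true);

        Checkout checkout = new Checkout(gateway);
        assertTrue(checkout.complete("alice", 59.98));

        // Verify the interaction we actually care about.
        verify(gateway).charge("alice", 59.98);
    }
}
```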

As I code this way, I have every confidence I am delivering what the customer asked for (to the extent my specs reflect those features). Cédric Beust, who wrote the open-source TestNG framework, pointed out in a recent interview: “Unit tests are a convenience for you, the developer, while functional tests are important for your users. When I have limited time, I always give priority to writing functional tests. Your duty is to your users, not to your test coverage tools.”

By relying on functional tests, I avoid a common problem that stems from an exclusive focus on unit testing: namely, that units which work correctly when tested individually can still fail when combined. (This problem is often recast as the correct assertion that high code coverage from unit tests does not, by itself, mean your code works correctly.)

My increasing preference for functional TDD does not preclude unit testing. I rely on unit tests to verify code after I’ve written it: for example, to confirm that I am handling edge conditions, exceptions and other details correctly. They are invaluable in this capacity. Between the functional and unit tests, I have a high degree of confidence in my code.
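
A couple of after-the-fact unit tests of this kind might look like the following; the SafeDivider class is a hypothetical stand-in for real production code, and the tests pin down an edge case and an error path.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class SafeDividerTest {
    // Hypothetical production code whose edge behavior we want to pin down.
    static class SafeDivider {
        double divide(double numerator, double denominator) {
            if (denominator == 0.0) {
                throw new IllegalArgumentException("denominator must be non-zero");
            }
            return numerator / denominator;
        }
    }

    private final SafeDivider divider = new SafeDivider();

    @Test
    public void dividesOrdinaryValues() {
        assertEquals(2.5, divider.divide(5.0, 2.0), 0.001);
    }

    @Test
    public void handlesNegativeNumerators() {
        assertEquals(-2.5, divider.divide(-5.0, 2.0), 0.001);
    }

    // Exception handling verified explicitly.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsDivisionByZero() {
        divider.divide(5.0, 0.0);
    }
}
```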

In my next column, I will discuss the implementation aspects of what I call “functional TDD” and discuss some of the tools I use to make it work in Java.

Andrew Binstock is the principal analyst at Pacific Data Works. Read his blog at binstock.blogspot.com.