For the past several months, I’ve been discussing alternatives to unit tests for both designing and validating software. In a pair of columns in particular, I discussed using functional test tools for designing the code implementation.
One commenter on those columns inquired, “Why functional tests?” This is a valid question. Unit tests are simply too low-level. They test too little. And many small tests taken together tell you only that the individual grains of code work correctly. They say nothing about whether the functionality works in the large, nor whether it’s anything the customer asked for.
Functional tests should reflect a function specified in the requirements or in some kind of feature repository (these days, defect trackers serve that purpose all too often). Using functional tests correctly orients the design and code to the goal, namely delivering what the user wants.
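To make the distinction concrete, here is a minimal sketch in Python's standard unittest module. The Cart class and the discount rule are hypothetical stand-ins for a requirement such as "orders over $100 receive 10% off" — the point is that the test exercises the feature as the requirement states it, through the public interface, rather than grain by grain.

```python
import unittest


class Cart:
    """Toy implementation backing the example."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        subtotal = sum(price for _, price in self.items)
        # Hypothetical requirement: orders over $100 get 10% off.
        return subtotal * 0.9 if subtotal > 100 else subtotal


class DiscountFunctionalTest(unittest.TestCase):
    """One test per stated behavior of the feature, not per method."""

    def test_order_over_100_gets_10_percent_off(self):
        cart = Cart()
        cart.add("keyboard", 80.0)
        cart.add("mouse", 40.0)  # subtotal 120 -> discount applies
        self.assertAlmostEqual(cart.total(), 108.0)

    def test_order_at_or_under_100_pays_full_price(self):
        cart = Cart()
        cart.add("mouse", 40.0)
        self.assertAlmostEqual(cart.total(), 40.0)
```

Run with `python -m unittest` against the module. A unit-test suite for the same code would verify `add` and `total` in isolation; only the functional test tells you the discount requirement itself is met.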
Higher-level tests, such as user acceptance tests, are too coarse for my taste. One feature that a user might demand could have multiple parts consisting of dramatically different actions. For example, suppose a user acceptance test specifies sub-second response time. Meeting it will surely require numerous optimizations. A single UAT of the response time will tell me whether I've succeeded overall, but it is of no use in designing and validating the individual changes I've made.
This view is not universally accepted. Ken Pugh, who gave one of the keynotes at the recent Enterprise Software Development Conference in California (produced by SD Times publisher BZ Media), told me at the after-party that when he sits down to write code, he always starts with a UAT as the defining goal for what he'll work on. I trust Ken when he says this, but I don't see how I could apply it. Much of my development work is simply not a broad-brush endeavor. It frequently consists of small maintenance efforts, optimizations, and so on. Such tasks can rarely be encapsulated in a UAT.
But the user orientation that Pugh finds in his approach, and that I echo in functional tests, is a key differentiator from the mainstream TDD orientation. There are approaches to testing that take the core concept of serving the user even further. Principal among these is model-based testing (MBT). I realize I tread dangerous ground here; the idea of using models is as appetizing to most readers as a tablespoon of brewer’s yeast or cod liver oil. And yet, there is much to recommend it.
In a typical scenario, a model is constructed from the user requirements. Typically, this is done using a modeling language like UML. Then the MBT software reads the model and generates tests that exercise the features. These tests occur at various levels: functional, integration and unit tests are all standard deliverables, which are then run via custom or standard test frameworks or harnesses.
Modeling for testing is easier than standard UML modeling, which involves design decisions and complex architectural planning. Instead, the model is simply a reflection of the already-captured requirements. It is essentially a process of translating one artifact into another.
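The translation step can be illustrated with a deliberately tiny sketch. Everything here is hypothetical: the model is a transition table transcribed from an imaginary requirement (a document moves Draft to Submitted to Approved, and a submitted document can be rejected back to Draft), and the "generated tests" are simply every valid action sequence up to a fixed length, checked against a trivial implementation. Real MBT tools work from richer models, but the shape of the process is the same.

```python
from itertools import product

# The model: allowed (state, action) -> next-state transitions,
# transcribed from the (imaginary) requirements document.
MODEL = {
    ("draft", "submit"): "submitted",
    ("submitted", "approve"): "approved",
    ("submitted", "reject"): "draft",
}
ACTIONS = ["submit", "approve", "reject"]


class Document:
    """Implementation under test (deliberately trivial here)."""

    def __init__(self):
        self.state = "draft"

    def apply(self, action):
        nxt = MODEL.get((self.state, action))
        if nxt is None:
            raise ValueError(f"{action} not allowed in {self.state}")
        self.state = nxt


def generate_tests(max_len=3):
    """Walk the model, yielding every valid action sequence up to
    max_len along with the final state the model predicts."""
    candidates = (seq for n in range(1, max_len + 1)
                  for seq in product(ACTIONS, repeat=n))
    for seq in candidates:
        state = "draft"
        for action in seq:
            state = MODEL.get((state, action))
            if state is None:
                break
        else:
            yield seq, state


def run_generated_tests(max_len=3):
    """Replay each generated sequence against the implementation
    and check that it lands in the state the model predicts."""
    count = 0
    for seq, expected in generate_tests(max_len):
        doc = Document()
        for action in seq:
            doc.apply(action)
        assert doc.state == expected, (seq, doc.state, expected)
        count += 1
    return count
```

With a transition table this small, `generate_tests` produces only a handful of sequences, but the principle scales: the test suite is derived mechanically from the model, so the model (and the requirements behind it) remains the single source of truth.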