For the past several months, I’ve been discussing alternatives to unit tests for both designing and validating software. In a pair of columns in particular, I discussed using functional-test tools to drive the design of the implementation.

One commenter on those columns inquired, “Why functional tests?” This is a valid question. Unit tests are simply too low-level. They test too little. And many small tests taken together tell you only that the individual grains of code work correctly. They say nothing about whether the functionality works in the large, nor whether it’s anything the customer asked for.

Functional tests should reflect a function specified in the requirements or in some kind of feature repository (these days, defect trackers fill this role all too often). Used correctly, functional tests orient the design and the code toward the goal, namely delivering what the user wants.
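To make that concrete, here is a minimal sketch of the kind of test I mean, written with JUnit against a hypothetical AccountService (all the names are stand-ins, not a real API). The point is that the test maps to a stated requirement, not to any single method or class:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// A functional test sketched against the requirement "a customer can
// transfer funds between two of his accounts." The test exercises the
// feature through the system's public interface, not a single unit.
public class FundsTransferFunctionalTest {

    @Test
    public void transferMovesFundsBetweenAccounts() {
        AccountService service = new AccountService();
        int checking = service.openAccount(10000); // balances in cents
        int savings  = service.openAccount(0);

        service.transfer(checking, savings, 2500);

        // Assert the requirement's observable outcome, not any
        // implementation detail.
        assertEquals(7500, service.balanceOf(checking));
        assertEquals(2500, service.balanceOf(savings));
    }

    // Stub so the sketch compiles on its own; in real use, this is the
    // application itself, reached through its public interface.
    static class AccountService {
        private final java.util.List<Integer> balances =
            new java.util.ArrayList<Integer>();

        int openAccount(int initialCents) {
            balances.add(initialCents);
            return balances.size() - 1;
        }

        void transfer(int from, int to, int cents) {
            balances.set(from, balances.get(from) - cents);
            balances.set(to, balances.get(to) + cents);
        }

        int balanceOf(int account) {
            return balances.get(account);
        }
    }
}
```

Because the assertions target the requirement’s observable outcome, the test survives any amount of refactoring behind the service’s interface.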

Higher-level tests, such as user acceptance tests, are too coarse for my taste. One feature that a user might demand could have multiple parts consisting of dramatically different actions. For example, suppose the acceptance criterion is sub-second response time. Meeting it will surely require numerous optimizations. A single UAT of response time will tell me whether I’ve succeeded overall, but it is no use in designing and validating the individual changes I’ve made.
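For illustration, such a UAT might amount to nothing more than one end-to-end timing check (the handleRequest() call and the one-second budget are assumptions of this sketch):

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// A coarse UAT-style check: a single timing assertion over the whole
// request path. It reports pass/fail on the overall goal but says
// nothing about which individual optimization worked or failed.
public class ResponseTimeAcceptanceTest {

    @Test
    public void requestCompletesInUnderOneSecond() {
        long start = System.nanoTime();

        handleRequest(); // stand-in for the full end-to-end operation

        long elapsedMillis = (System.nanoTime() - start) / 1000000;
        assertTrue("Response took " + elapsedMillis + " ms",
                   elapsedMillis < 1000);
    }

    private void handleRequest() {
        // Hypothetical placeholder for the real request path.
    }
}
```

The check answers only the overall pass/fail question; it cannot tell me which of the dozen changes behind handleRequest() earned its keep.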

This view is not universally accepted. Ken Pugh, who gave one of the keynotes at the recent Enterprise Software Development Conference in California (produced by SD Times publisher BZ Media), told me at the after-party that when he sits down to write code, he always starts with a UAT as the defining goal for what he’ll work on. I trust Ken when he says this, but I don’t see how I could apply the practice. Much of my development work is simply not a broad-brush endeavor. It frequently consists of small maintenance efforts, optimizations and the like, and such tasks rarely can be encapsulated in a UAT.

But the user orientation that Pugh finds in his approach, and that I echo in functional tests, is a key differentiator from the mainstream TDD orientation. There are approaches to testing that take the core concept of serving the user even further. Principal among these is model-based testing (MBT). I realize I tread dangerous ground here; the idea of using models is as appetizing to most readers as a tablespoon of brewer’s yeast or cod liver oil. And yet, there is much to recommend it.

In a typical scenario, a model is constructed from the user requirements, usually in a modeling language such as UML. The MBT software then reads the model and generates tests that exercise the features. These tests occur at various levels: functional, integration and unit tests are all standard deliverables, which are then run via custom or standard test frameworks and harnesses.
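In skeletal form, the machinery works something like the following toy sketch, in which the model is a small directed graph of user actions and test cases are derived by walking it. A real MBT product does the same thing at far greater scale, and reads the model from UML rather than from code:

```java
import java.util.*;

// A toy illustration of model-based testing. The "model" is a directed
// graph whose nodes are user-visible states and whose edges are actions
// drawn from the requirements. Test cases are generated by walking the
// graph. The login/logout model here is invented for the example.
public class LoginModelSketch {

    // transitions: state -> (action -> next state)
    private static final Map<String, Map<String, String>> MODEL =
        new HashMap<>();
    static {
        MODEL.put("LoggedOut", Map.of("enterValidCredentials", "LoggedIn",
                                      "enterBadCredentials",   "LoggedOut"));
        MODEL.put("LoggedIn",  Map.of("logOut", "LoggedOut"));
    }

    // Enumerate every action sequence up to a fixed depth; each sequence
    // becomes one test to replay against the implementation.
    static void generate(String state, List<String> path, int depth,
                         List<List<String>> out) {
        if (depth == 0) { out.add(new ArrayList<>(path)); return; }
        for (Map.Entry<String, String> edge : MODEL.get(state).entrySet()) {
            path.add(edge.getKey());
            generate(edge.getValue(), path, depth - 1, out);
            path.remove(path.size() - 1);
        }
    }

    public static void main(String[] args) {
        List<List<String>> tests = new ArrayList<>();
        generate("LoggedOut", new ArrayList<>(), 3, tests);
        for (List<String> t : tests)
            System.out.println(t); // e.g. [enterValidCredentials, logOut, enterBadCredentials]
    }
}
```

Each generated sequence is replayed against the implementation, and the observed behavior is compared with what the model predicts.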

Modeling for testing is easier than standard UML modeling, which involves design decisions and complex architectural planning. Instead, the model is simply a reflection of the already-captured requirements. It is essentially a process of translating one artifact into another.

The requirements-based model, however, has one supreme capability: It drives pure black-box testing. The model knows nothing about the implementation, nor should it. Its only job is to make sure that the requirements are met. The tests it generates are what are called conformance tests.

Because such tests can create numerous false positives when features have not yet been implemented, many MBT products enable developers to specify which aspects should be tested in any given run. But a manager can run the full suite to get a very accurate idea of where a project stands in relation to its goal. This benefit, and that of automated requirements-oriented testing, is crucial in large projects—those involving hundreds of thousands of lines and up. On those projects, MBT is just about indispensable.
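Continuing the toy sketch above, scoping a run might be as simple as tagging each transition with a feature name and generating tests only for the chosen features (all the names here are illustrative):

```java
import java.util.*;

// Sketch of run-scoping: each model transition carries a feature tag,
// and a run generates tests only for the features selected.
public class ScopedRun {
    static final Map<String, String> FEATURE_TAGS = Map.of(
        "enterValidCredentials", "login",
        "enterBadCredentials",   "login",
        "logOut",                "logout");

    static boolean inScope(String action, Set<String> chosenFeatures) {
        return chosenFeatures.contains(FEATURE_TAGS.get(action));
    }

    public static void main(String[] args) {
        // A developer tests only what has been implemented so far...
        Set<String> devRun  = Set.of("login");
        // ...while a manager runs everything to gauge overall progress.
        Set<String> fullRun = Set.of("login", "logout");

        System.out.println(inScope("logOut", devRun));  // false: skipped
        System.out.println(inScope("logOut", fullRun)); // true: generated
    }
}
```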

MBT tools tend to be enterprise-class packages. Vendors include IBM Rational, Conformiq and Smartesting, among others. The market for free tools is fairly thin. Microsoft Research offers one MBT tool called Spec Explorer, which is .NET-centric. Another is NModel, which is C#-oriented and which, alas, uses models written in code rather than diagrams. The problem with this is that it’s very hard not to let the implementation bleed into the model when you’re modeling in code.

In the Java space, there are mbt, which uses directed graphs (digraphs) to model small pieces of the logic, and the Java Modeling Language. Uniformly, the free and open-source products are incomplete offerings.

Organizations that would like to test from requirements, but don’t want to jump into MBT, do have options. Principal among these is behavior-driven development, in which software behavior is written in near-English pseudocode and then run as a test. easyb, a Groovy-based product, is probably one of the best implementations of this approach (and it is free).
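easyb stories themselves are written in a Groovy DSL; purely to keep these sketches in one language, here is the given/when/then shape rendered in plain Java. The step() helper is my invention, not easyb’s API:

```java
// A minimal imitation of the BDD style: behavior is stated in
// near-English strings, each paired with executable code, and the
// whole scenario runs as a test.
public class BddSketch {

    static void step(String description, Runnable body) {
        System.out.println("  " + description);
        body.run();
    }

    public static void main(String[] args) {
        System.out.println("scenario: transfer between accounts");
        int[] checking = {10000}; // cents
        int[] savings  = {0};

        step("given a checking account holding $100", () -> { });
        step("when the customer transfers $25 to savings", () -> {
            checking[0] -= 2500;
            savings[0]  += 2500;
        });
        step("then checking holds $75 and savings holds $25", () -> {
            if (checking[0] != 7500 || savings[0] != 2500)
                throw new AssertionError("transfer did not balance");
        });
    }
}
```

The narration and the executable check live side by side, which is precisely the appeal: the artifact reads like the requirement it verifies.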

Andrew Binstock is the principal analyst at Pacific Data Works. Read his blog at binstock.blogspot.com.