Metrics have become a standard accoutrement of all application life-cycle management tools. Every management package has some form of dashboard that enables users to see at a glance the health of the project. Or so it’s promised.
The problem is that little guidance exists on which metrics to watch, which to ignore, which should be checked daily, and which need only a monthly glance. This lack of guidance is compounded by the presentation of nearly useless metrics.
In this latter category, I include the trend graphs generated by most CI servers. With few exceptions, they favor telling managers the historical percentage of broken builds, accompanied by a graphical history of build times. This is a clear case of data presented simply because it’s easy to show, rather than for its intrinsic value. (Most managers care only whether the last build broke, not how the current failure rate ranks against the project’s entire history. Likewise, build times are valuable only in the context of the last few builds, not in comparison with every build the project has ever seen.)
Assuming the numbers are useful to begin with, metrics come in two kinds: those that can be evaluated in absolute terms, and those best interpreted in relation to other metrics. Metrics can also be grouped by subject area. In his book “Automated Defect Prevention,” Adam Kolawa suggests the following categories: requirements metrics, code metrics, test metrics, defect metrics, and a set of numbers that attempt to establish the quality of project estimates.
In many of these categories, standalone metrics provide a convenient, informative, at-a-glance snapshot. For example, LOC—a number that has little value in terms of assessing developer productivity—is an effective tool for understanding other metrics. Almost any code or testing metric that suffers a sharp spike or sudden drop requires a look at total LOC to be understandable. So, LOC should be monitored for any sudden changes.
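The role of LOC as a denominator can be sketched in a few lines. The numbers and metric names below are purely illustrative, not drawn from any real project:

```python
# Sketch: a raw metric spike can be misleading until it is normalized by LOC.
# All figures here are hypothetical examples.

def per_kloc(metric_count: int, total_loc: int) -> float:
    """Normalize a raw count (e.g., open defects) per 1,000 lines of code."""
    return metric_count / (total_loc / 1000)

# Raw defect count jumps 50% week over week -- looks alarming...
defects_last_week, loc_last_week = 40, 100_000
defects_this_week, loc_this_week = 60, 160_000

density_last = per_kloc(defects_last_week, loc_last_week)   # 0.4 defects/KLOC
density_this = per_kloc(defects_this_week, loc_this_week)   # 0.375 defects/KLOC

# ...but a large code drop landed the same week, so defect density actually fell.
print(f"{density_last:.3f} -> {density_this:.3f} defects per KLOC")
```

The same normalization applies to test counts, comment ratios, or any other metric whose raw value moves when the codebase suddenly grows or shrinks.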
Requirements metrics, which measure the number of requirements that have been implemented and tested, are also valid standalone metrics. As is the case with LOC, their value is maximized when measured over time; the trend is your friend.
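One way to watch that trend is to keep weekly snapshots of implemented and tested requirement counts and look at the week-over-week deltas rather than the absolute figures. The data below is hypothetical:

```python
# Sketch: tracking requirements metrics over time (hypothetical data).
# Each snapshot is (implemented, tested) out of a fixed requirement total.
snapshots = [(30, 10), (45, 20), (70, 40), (95, 80)]
total_requirements = 120

for week, (implemented, tested) in enumerate(snapshots, start=1):
    print(f"Week {week}: {implemented / total_requirements:.0%} implemented, "
          f"{tested / total_requirements:.0%} tested")

# The deltas are the trend worth watching: is progress accelerating or stalling?
deltas = [(b[0] - a[0], b[1] - a[1])
          for a, b in zip(snapshots, snapshots[1:])]
print(deltas)
```

A flattening delta series is an earlier warning sign than any single week's percentage.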