If passion were a measure of utility, the proponents of #NoEstimates (the Twitter hashtag representing the “No Estimates” movement) would be more valuable than oxygen. They believe there are better ways to manage and fund projects that don’t require pulling your best and brightest from their real job to estimate (and re-estimate) how long each part of a software project will take.

It’s hard not to be with them in spirit. Anybody who has spent time in the software industry has felt the futility of estimating raw ideas that will inevitably change over time and evolve into something far different from the original single-line description. But we can’t forget that we are being paid by a business (someone else’s money), and all businesses have limited resources—people, money or time—which must be spent as effectively as possible.

And it is here that some form of forecasting is required for new ideas, to make sure that the expenditure of scarce resources is yielding the biggest bang for the buck. The impact of potential revenue or cost savings needs to be assessed over a timeframe, and the cost to achieve those benefits needs to be broadly understood. Someone needs to plan and understand cash flow to keep the company thriving. Enter the estimate.

Blame for poor estimation outcomes is wrongly placed. Poor mathematics, failure to estimate the impact of risks and dependencies, and ignoring uncertainty entirely have far more impact than a few underestimated developer stories. Narrowly defining estimates as “developer story size,” and then recommending doing away with all forms of estimation, is a shortsighted approach.

There are good reasons to avoid estimation, the most striking being when the work is a foregone conclusion. Extending “No Estimates” to all possible work is nonsense unless there is truly no option of altering staffing levels, or reordering feature priorities based on time to market. In the idea-rich software atmosphere, there is always more demand than supply.

Why do current estimate techniques fail so badly? Looking at the most common technique of story points and velocity projection, we start to form a picture of hopelessness. If a team is stable in size, has little turnover in experience level, no external dependencies or risks, and all scope is known upfront, then maybe, just maybe, velocity will give an accurate forecast.

The concept behind velocity is pure simplicity and genius; it’s a way to convert a unit of team performance and effort into a unit of time. The problem is that it doesn’t cope well with the kinds of changes in the (abbreviated) list from a few sentences ago. Another shortcoming is that it’s a linear projection of an average, which becomes inaccurate when modeling non-linear effects such as team ramp-up time, risks, and external team dependencies that occur rarely but wreak havoc on velocity when they do. Those effects matter far more than a few inaccurate developer estimates.
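To make the linearity concrete, here is a minimal sketch of a velocity projection; all the numbers (the sprint velocities, the backlog size) are invented for illustration:

```python
import math

# Hypothetical history: story points completed in each of the last four sprints.
velocities = [21, 18, 24, 20]
backlog_points = 200  # remaining scope, itself a guess

# Velocity projection: divide the scope by the average, a purely linear model.
avg_velocity = sum(velocities) / len(velocities)           # 20.75 points/sprint
sprints_needed = math.ceil(backlog_points / avg_velocity)  # 10 sprints
```

The single resulting number hides everything that matters: the spread in the history, the rare disruptive events, and any change in team composition along the way.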

The forecast we need is how long it takes work to flow through the entire organization, because most aspects beyond pure coding time have little to do with the complexity of the work. Waiting for a tester or for production hardware isn’t correlated with small, medium or large story sizes, yet velocity assumes it is.

It’s not the known work that gets you, it’s the unknown work, as well as the compounding impact of delays and discovered work.

SUBHED: So what works, then?
There must be a better way to align scarce resources with plentiful product desire, and there is: probabilistic forecasting using system cycle time performance. Probabilistic forecasting is the tool of choice in other risky fields for making better predictions about what will happen in the future. The software development process has no shortage of uncertainty; nothing worth pursuing does. Extreme uncertainty is inherent in artistic and inventive fields, and software development is both.

Probabilistic forecasting combines historical patterns and expert judgment to give a snapshot of the likely range of results and how likely some are over others. Any pretense of an exact outcome is discarded. The goal is to understand the bounds and the likely outcomes; a forecast carries an amount of uncertainty agreed between preparer and recipient. That shared understanding lets decisions be made quickly with a minimal amount of estimation, provided the level of uncertainty is acceptable.

Cycle time forecasting in my practice has yielded accurate results (I have case studies in a 7-15% deviation range) without story size estimates, for both Scrum and Kanban teams. Beyond the euphoria of confidently forecasting a date, the models also highlighted staffing options and the factors that most jeopardize delivery dates.

If it sounds complex, it isn’t. Most models combine the cycle time history with a broad guess at the number of stories per feature being planned. It takes less work than building even a minimal Excel spreadsheet, and the models are reusable after refreshing the historical data. Capturing the right historical data (scope creep, cycle times, risk events, etc.) is an investment that pays huge dividends when making future decisions.
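As a rough illustration of that kind of model, here is a minimal Monte Carlo sketch in Python. The cycle time history and the story-count range are invented, and the single-lane assumption (stories completing one after another) is a deliberate simplification of real flow:

```python
import random

def forecast_days(cycle_times, stories_low, stories_high,
                  trials=10_000, confidence=0.85, seed=7):
    """Monte Carlo forecast: for each trial, guess a story count within the
    broad range, then sum randomly sampled historical cycle times.
    Returns the total duration at the requested confidence level."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        n_stories = rng.randint(stories_low, stories_high)
        totals.append(sum(rng.choice(cycle_times) for _ in range(n_stories)))
    totals.sort()
    return totals[int(confidence * trials) - 1]

# Hypothetical history: days each completed story spent in the system.
history = [2, 3, 3, 4, 5, 5, 6, 8, 9, 13]

# "We'll deliver in this many days or fewer, 85% of the time."
forecast = forecast_days(history, stories_low=20, stories_high=30)
```

Sampling both the story count and the per-story cycle times is what turns a guess into a distribution; reading off the 85th percentile yields a date with a stated, agreed level of uncertainty rather than a single point estimate.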

Tool vendors haven’t been asleep. They know automatic capture of historical data is key to solving forecasting accuracy, and the leading vendors are mining the historical data captured in their tools and modeling it quantitatively to give you possible insights on future projects. Evidence that these techniques are becoming mainstream comes from the recent Lean Kanban North America conference, which scheduled six sessions dedicated to quantitative simulation and forecasting, as well as a “bake-off” between two vendors (LeanKit and Digite) regarding their Monte Carlo simulation features. The ability to have forecasts based on real historical performance baked into Agile tooling will avoid the issue of developer estimation entirely, and this is the most promising approach to #NoEstimates.

As the joke goes, you don’t have to outrun the vicious tiger chasing you, you just have to outrun your colleagues. Any improvement on forecasting just has to outperform our current point- and velocity-based techniques, which we all agree are failing us badly and costing us dearly. The quantitative forecast looks brighter and heads toward the oxygen-rich #NoEstimates ideal, while allowing companies to manage their constrained resources and cash flow.

Troy Magennis is the founder of Focused Objective, which builds tools and training for forecasting software development projects.