It’s not the known work that gets you, it’s the unknown work, as well as the compounding impact of delays and discovered work.

SUBHED: So what works, then?
There must be a better way to align scarce resources with plentiful product desire, and there is: probabilistic forecasting using system cycle time performance. Probabilistic forecasting is the tool of choice in other risky fields for making better guesses (predictions) about the future. The software development process has no shortage of uncertainty; nothing worth pursuing does. Extreme uncertainty is inherent in artistic and inventive fields, and software development is both.

Probabilistic forecasting combines historical patterns and expert judgment to give a snapshot of the likely range of results and how likely some outcomes are over others. Any pretense of an exact outcome is discarded. The goal is to understand the bounds and the likely outcomes; preparer and recipient agree up front on how much uncertainty a forecast carries. This shared understanding of uncertainty lets decisions be made quickly with a minimal amount of estimation, provided that level of uncertainty is acceptable.
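As a minimal illustration of what "a range with agreed uncertainty" looks like in practice, the sketch below summarizes a hypothetical set of historical story cycle times as percentiles rather than a single number (the data values and the 85% confidence level are illustrative assumptions, not from a real team):

```python
import statistics

# Hypothetical cycle times (days per story) from a team's recent history
cycle_times = [2, 3, 3, 4, 5, 5, 6, 8, 9, 13]

# Cut the history into 5% steps so we can read off any percentile
quantiles = statistics.quantiles(cycle_times, n=20)
p50 = quantiles[9]   # median: as likely to finish earlier as later
p85 = quantiles[16]  # 85th percentile: a common agreed confidence level

print(f"50% of stories finish within {p50:.1f} days")
print(f"85% of stories finish within {p85:.1f} days")
```

The recipient of this forecast is not promised a date; they are told the odds, and they decide whether 85% confidence is acceptable for the decision at hand.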

Cycle time forecasting in my practice has yielded accurate results (I have case studies in the 7-15% deviation range) without story size estimates, for both Scrum and Kanban teams. Beyond the euphoria of forecasting the date confidently, the models also surfaced staffing options and the factors that most jeopardize delivery dates.

If it sounds complex, it isn’t. Most models combine the cycle time history with a broad guess at the number of stories in the feature being planned. It takes less work than even a minimal Excel spreadsheet, and the models built are reusable after refreshing the historical data. Capturing the right historical data (scope creep, cycle times, risk events, etc.) is an investment that pays huge dividends when making future decisions.
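A minimal sketch of such a model, assuming hypothetical historical cycle times and a broad guess of 20 to 30 stories for the feature (all numbers here are illustrative assumptions): each Monte Carlo trial picks a story count from the guessed range and resamples the team's history once per story.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical historical cycle times (days per story), refreshed periodically
cycle_times = [2, 3, 3, 4, 5, 5, 6, 8, 9, 13]

def simulate_feature(trials=10_000, min_stories=20, max_stories=30):
    """Monte Carlo: sample a story count, then resample historical cycle times."""
    totals = []
    for _ in range(trials):
        n_stories = random.randint(min_stories, max_stories)  # broad guess
        totals.append(sum(random.choice(cycle_times) for _ in range(n_stories)))
    return sorted(totals)

totals = simulate_feature()
p50 = totals[len(totals) // 2]         # median outcome
p85 = totals[int(len(totals) * 0.85)]  # 85% confident outcome
print(f"50% chance of finishing within {p50} story-days of effort")
print(f"85% chance of finishing within {p85} story-days of effort")
```

This sketch sums effort as if stories ran one after another; a real model would also divide across parallel work streams and add sampled scope creep and risk events, but the shape of the technique is the same.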

Tool vendors haven’t been asleep. They know automatic capture of historical data is key to improving forecast accuracy, and the leading vendors are mining the historical data captured in their tools and modeling it quantitatively to give you insights into future projects. Evidence that these techniques are becoming mainstream comes from the recent Lean Kanban North America conference, which scheduled six sessions dedicated to quantitative simulation and forecasting, as well as a “bake-off” between two vendors (LeanKit and Digite) of their Monte Carlo simulation features. Forecasts based on real historical performance, baked into Agile tooling, will sidestep developer estimation entirely, and this is the most promising approach to #NoEstimates.

As the joke goes, you don’t have to outrun the vicious tiger chasing you, you just have to outrun your colleagues. Any improvement in forecasting just has to outperform our current point- and velocity-based techniques, which we all agree are failing us badly and costing us dearly. The quantitative forecast looks brighter and heads toward the oxygen-rich #NoEstimates ideal, while allowing companies to manage their constrained resources and cash flow.

Troy Magennis is the founder of Focused Objective, which builds tools and training for forecasting software development projects.