In software estimation, it’s relatively easy to predict the completion time for smaller projects that require less effort. For larger or more complex software projects, however, it is difficult to predict just how much time and effort it will take to reach the end goal. Most software estimators are optimistic and overconfident, which means they tend to underestimate, and projects end up taking longer and costing more than originally anticipated. Overestimation isn’t much better – you end up reserving resources that could have been invested elsewhere. Either way, mistakes in software estimation cost organizations money.

The good news is that there are ways to get better at software estimation, not only through practical experience but also through purposeful training. After reading “How to Measure Anything” by Doug Hubbard, a management consultant, speaker, and author in decision sciences and actuarial science, I began using two of his techniques to train my development teams in the art and science of software estimation: calibration training and the 90% confidence interval.

Quantify Your Bias and Correct for It

One key problem I’ve encountered throughout my career is that developers often lack awareness of their natural bias when estimating software features. They tend to either overestimate or underestimate, without even realizing it. But if that bias can be recognized, quantified, and corrected for, they can become better estimators over time. This is what Hubbard calls calibration training.

As vice president of engineering at a late-stage startup, I found that the team constantly struggled to estimate software features and projects. So I ran a calibration training exercise in which I asked team members a series of trivia questions – some hard, some easy – chosen so that most of the answers had to be guesses. We then determined whether each person was usually overconfident or underconfident, and by how much, to get an initial sense of their bias and measure the gap. They then used that understanding to adjust, or calibrate, future estimates. Over time, they overcame their bias and significantly improved their estimation accuracy.
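Scoring such an exercise is easy to automate. Below is a minimal sketch in Python, assuming (as in Hubbard’s version of the exercise) that each answer is given as a 90% confidence range; the questions and the participant’s ranges here are purely illustrative.

```python
# A minimal sketch of scoring a calibration exercise. Each answer is a
# (low, high, truth) tuple: the participant's 90% confidence range for a
# trivia question, plus the actual answer. Questions are illustrative.

def calibration_hit_rate(answers):
    """Fraction of 90% ranges that actually contained the true value.

    A well-calibrated estimator lands near 0.90. Well below 0.90 means
    overconfidence (ranges too narrow); well above it means
    underconfidence (ranges wider than they need to be).
    """
    hits = sum(1 for low, high, truth in answers if low <= truth <= high)
    return hits / len(answers)

# One participant's ranges for five questions (a real exercise uses more).
quiz = [
    (1900, 1910, 1903),           # Year of the Wright brothers' first flight
    (300_000, 400_000, 384_400),  # Average Earth-Moon distance, km
    (50, 70, 60),                 # Population of Italy, millions
    (400, 500, 330),              # Height of the Eiffel Tower, m (a miss)
    (1980, 1995, 1989),           # Year the Berlin Wall fell
]

rate = calibration_hit_rate(quiz)
print(f"Hit rate: {rate:.0%} (target: 90%)")  # Hit rate: 80% (target: 90%)
```

With only a handful of questions the hit rate is noisy, which is why the exercise works best with a few dozen questions, repeated over several sessions.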

Estimate in Ranges, Not Absolutes

Another troubling trend is developers trying to get their estimates correct to a single absolute number, only to be far off once the actual effort is put in. This often happens when you have “no idea,” or seemingly nothing on which to base your estimate. With the confidence interval technique, however, you can learn to give a more accurate estimate as a range instead of an exact number. The goal is to have 90% confidence that the actual number falls within as narrow a range as possible.
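A range is also something you can plan with. Under one simplifying assumption – that the quantity being estimated is roughly normally distributed – a 90% range pins down the distribution’s mean and standard deviation. Here is a minimal sketch; the feature estimate is hypothetical.

```python
# A minimal sketch: turning a 90% confidence range into the parameters
# of a normal distribution, a simplifying assumption. The central 90%
# of a normal distribution spans the mean +/- 1.645 standard
# deviations, so the full width of the range is 3.29 sigma.

def range_to_normal(low, high):
    """Map a 90% confidence range to (mean, standard deviation)."""
    mean = (low + high) / 2
    sigma = (high - low) / 3.29
    return mean, sigma

# Hypothetical feature estimate: 10 to 30 developer-days, 90% confidence.
mean, sigma = range_to_normal(10, 30)
print(f"mean = {mean:.1f} days, sigma = {sigma:.1f} days")
# mean = 20.0 days, sigma = 6.1 days
```

Hubbard’s book takes this further, feeding calibrated ranges into Monte Carlo simulations to estimate whole projects. But first you need the range itself – and the example below shows how a team can arrive at one even when it feels like they have nothing to go on.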

Let’s say, for example, that you are answering the question, “What is the population of Italy?” Someone unfamiliar with Italy’s demographics might simply say, “I don’t know.” However, if you rephrase the question as “What range are you 90% confident covers the population of Italy?” you can use information you already have to set a minimum and a maximum, narrowing the range while keeping that 90% confidence. I used this exact question with my team to help demonstrate the concept, and the results were eye-opening.

To dive deeper into this exercise, we started at the lower end of the range, asking some ridiculous questions. “Is the population of Italy more than 1 million?” Of course it is. “Is it more than 10 million?” Most team members knew that the population of the San Francisco Bay Area alone was around 7 million, so they could say with 90% confidence that the population of Italy was more than 10 million. “Is the population of Italy more than 20 million?” “Is it more than 30 million?” At the 30 million mark, their confidence waned, and we found our lower end of the range.

To find our maximum, again, we worked with what we already knew. The population of the United States was more than 300 million, so Italy’s population “must be less than 300 million.” “Is it less than 200 million?” “Is it less than 100 million?” Eventually, around the 75 million mark, confidence fell below 90%, and we found the higher end of our range. It turned out that the actual population of Italy was around 60 million. A range of 30-75 million is a solid estimate, and certainly more accurate than “I don’t know.” Taking this approach to software estimation can be transformational in terms of improving accuracy.
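The walk we did out loud is easy to encode. Here is a minimal sketch of the same procedure, with our team’s confidence judgments written down as data; in a real session, the True/False values come from asking “are you at least 90% sure?” at each threshold.

```python
# A minimal sketch of the bound-narrowing walk described above. Each
# entry pairs a threshold (millions of people) with whether the team
# was still at least 90% confident in the comparison at that threshold.

# "Is the population of Italy MORE than this?"
lower_walk = [(1, True), (10, True), (20, True), (30, False)]

# "Is the population of Italy LESS than this?"
upper_walk = [(300, True), (200, True), (100, True), (75, False)]

def bound(walk):
    """Return the first threshold where confidence dropped below 90%."""
    for threshold, confident in walk:
        if not confident:
            return threshold
    return walk[-1][0]  # never lost confidence: use the last threshold

low, high = bound(lower_walk), bound(upper_walk)
print(f"90% confidence range: {low}-{high} million")    # 30-75 million
print(f"Contains the actual value (~60 million)? {low <= 60 <= high}")
```

For a software feature, the thresholds simply become effort levels: “Are you 90% sure this will take more than two days? Less than thirty?”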

Practice the Skill of Software Estimation  

While software estimates can be grounded in previous projects of similar scope, estimation is also a skill that can be taught to development teams through training in techniques like the 90% confidence interval and calibration. In addition to Hubbard’s book, the Credence Calibration Game, published by Andrew Critch, PhD, at the University of California, Berkeley, is a great resource for improving your confidence intervals and estimation accuracy. As with any skill, the more you practice software estimation, the better you get. Investing in these tools and techniques can not only increase the value of individual engineers who want to hone their craft but also help software teams deliver projects on time and on budget for greater business success.