Not much effort is required to see that many projects do not turn out according to plan, whether they’re over budget, late, or otherwise unsuccessful. In the software industry, there has been a sustained research effort around software engineering and methodologies that attempt to address pervasive shortfalls in feature deliveries, levels of delivered quality, and adherence to schedules.

Given the widespread availability of data on project challenges (or outright failures), why do people and organizations persist in making these sorts of mistakes? Is the world really so complex and unpredictable as to defy any planning? Are people simply engaging in large-scale denial? Or is there a good reason to embark, knowingly and willingly, on efforts that will inevitably run up against these problems? Before we investigate this last question, let's turn to the common explanations for this behavior.

In behavioral economics, the most common explanation for this behavior, drawing on "heuristics and biases" research, involves what is called the "planning fallacy" and its related cognitive limitations. The planning fallacy refers to the tendency to underestimate the time required to complete projects. The foundational study of this fallacy, by Daniel Kahneman and Amos Tversky, suggests that people tend to focus on best-case plans and hence are overly optimistic.

Contributing to the planning fallacy is a family of biases in which people place unwarranted confidence in overly precise estimates, thereby overestimating both their capabilities and their degree of control. In complex domains, where problems cannot be understood completely, the desire to exercise some level of control over circumstances is strong, and it is all too easy to believe that detailed planning yields plans that are certain.

A second explanation for pervasive project challenges is strategic misrepresentation: project advocates deliberately and strategically overstate benefits while downplaying costs (including time to completion). This behavior is tied to political considerations rather than cognitive limitations, as the primary motive is to secure project support and resources in a competitive environment. Ironically, more realistic estimates would make a project seem uncompetitive in an environment where inflating benefits and understating costs are the norm.

Given the prevalence of these behaviors, what are some common methods of combating them? A typical approach is to seek better work definitions, increasingly detailed task estimates, careful management of scope and risk during project implementation, and other practices designed to drive predictability and certainty. A more general method is to supplement the "inside view" of a project with an "outside view": removing bias from the planning process by taking a different perspective, perhaps by soliciting assessments from people not directly involved with the given project, or by comparing the plan against the actual outcomes of similar past efforts; a minimal sketch of the latter follows.
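
To make the outside view concrete, here is a small Python sketch of reference-class adjustment. The article prescribes no code, so the function, the figures, and the nearest-rank percentile choice are all illustrative assumptions: rather than trusting a bottom-up estimate alone, scale it by the overrun ratios observed in comparable past projects.

```python
# Outside-view ("reference class") estimate: correct an inside-view,
# bottom-up estimate using the overrun history of similar past projects.
# All figures are hypothetical.

def outside_view_estimate(inside_estimate_weeks, past_overrun_ratios, percentile=0.8):
    """Scale a bottom-up estimate by the chosen percentile of the
    actual-to-estimated duration ratios of comparable past projects."""
    ratios = sorted(past_overrun_ratios)
    k = min(len(ratios) - 1, int(percentile * len(ratios)))  # nearest-rank index
    return inside_estimate_weeks * ratios[k]

# Actual duration divided by original estimate for eight past projects.
history = [1.1, 1.3, 1.4, 1.6, 1.8, 2.0, 2.2, 2.5]

print(outside_view_estimate(10, history))       # 80th percentile: 22.0 weeks
print(outside_view_estimate(10, history, 0.5))  # median: 18.0 weeks
```

The point is not the arithmetic but the shift in perspective: the estimate is anchored in how similar projects actually went, not in how this one is planned to go.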

Despite attempts to combat it, the planning fallacy remains. Why do intelligent, well-intentioned people, often with years of professional training in their respective fields, fall into a trap that is seemingly so obvious? Perhaps we should consider a second key factor: these biases are systematic errors, not one-off mistakes. We are then no longer talking about isolated errors in human reasoning, but about built-in shortcomings of that reasoning. An interesting question then becomes: why do these shortcomings exist?

On one hand, it's possible that the problem domains in play are so complex and messy as to defy the type of certainty and control we wish to attain. But even if that were true, it would not explain our reactions (these systematic limitations) in the face of such an environment. Instead of fighting the planning fallacy, what if we consider its possible advantages?

There has been concentrated research attempting to explain current human behavior as the product of adaptation to environments: imperfect, imprecise responses to the challenges of a given environment, rather than an exquisitely designed system providing a perfect solution to a set of problems. A recently published account of evolutionary psychology claims that behavior that appears irrational on the surface may exhibit what the authors call "deep rationality": behavior that is rational in light of the historical challenges faced by humans. Seen from this wider perspective of time, evolution, and adaptation, such seemingly irrational behaviors are deeply rational, and that perspective sheds light on the planning fallacy.

When we view the planning fallacy and its associated cognitive shortcuts as deeply rational, rather than trying to eliminate them we can leverage the benefits they provide (motivation to act) while attempting to mitigate their negative consequences. This contrasts with many traditional project-management recommendations, which seek to replace optimistic planning with statistically based models, thorough risk analysis, detailed plans, and tightly controlled scope during project execution.
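
For contrast, the kind of statistically based model such recommendations have in mind can be sketched in a few lines of Python; the tasks and duration ranges below are invented for illustration.

```python
# A toy Monte Carlo schedule model of the sort traditional
# project-management guidance recommends. Tasks and ranges are invented.
import random

# (best case, worst case) durations in weeks for three sequential tasks.
tasks = [(2, 5), (3, 8), (1, 4)]

def simulate_total(tasks):
    """One simulated project: draw each task's duration uniformly
    between its best and worst case, then sum."""
    return sum(random.uniform(lo, hi) for lo, hi in tasks)

totals = sorted(simulate_total(tasks) for _ in range(10_000))
print("median total (weeks):", round(totals[len(totals) // 2], 1))
print("90th percentile (weeks):", round(totals[int(0.9 * len(totals))], 1))
```

Such a model can be arbitrarily precise about the schedule while saying nothing about whether the planned product still solves a real problem, which is the danger the next paragraph describes.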

When such attempts are applied strictly, there is a danger of holding to the plan regardless of a changing situation or market. While the planning fallacy can cause delays in project completion, this opposite failure can yield a solution executed precisely and on schedule, but for a problem that does not exist. We then find ourselves with products looking for problems to solve, rather than the desired opposite.

It is natural for project proposals to highlight the benefits to be realized: revenue to be captured, increases in market share, and so on. There is a strong tendency to overemphasize these aspects, whether by priming the internal risk-seeking, aggressive persona that focuses on something to be won in the face of competition, or by catering to the self-protecting mindset that fears the negative consequences of acknowledging failure, challenges, changes, or the history of similar projects.

These two personalities temper each other, and presentations advocating a specific project should contain material aimed at engaging each. This material must not, however, be presented in a way that lets it be analyzed away through risk management or contingency plans; rather, it should be treated as part and parcel of the entire proposal.

This is not to say that typical project-management methods for dealing with uncertainty and risk are worthless, but rather that they belong in project execution, not in the decision about whether to undertake the project itself. Consideration of project approval needs to be as holistic as possible.

Intentional changes in context and in the framing of information are the key. Too often, those for and against a project escalate the competition in an attempt to win, losing sight of the real consideration: whether the project should be approved at all. Requiring that proponents of a project also argue why it should not be undertaken is one way to balance optimism and pessimism, much like the exercise of asking students to argue both for and against a given thesis. By bringing both perspectives into play, decision makers gain a more complete and informed view of the choice at hand.

John Graham is director of software engineering for the Red Hat JBoss Middleware integration product line. He is the founder of the Eclipse Data Tools Platform project.