Requirements engineering is like unicorn hunting. It sounds good, but it simply isn’t real. Requirements, like unicorns, are a myth. But unlike unicorns, requirements are dangerous.

In the “United Nations Experiment,” psychologists Daniel Kahneman and Amos Tversky had participants spin a “wheel of fortune” to get a seemingly random number, and then asked them to estimate the percentage of African countries in the United Nations. Unbeknownst to the participants, Kahneman and Tversky had rigged the wheel so that everyone got either 10 or 65. Participants who got the higher number gave estimates roughly 20 percentage points higher, on average, than those who got the lower number, despite the number appearing both random and unrelated to the question.

This experiment illustrates the powerful subconscious effects of small changes in our environments, even when we know better. Specifically, it illustrates the power of anchoring bias, the tendency to base estimates on available “anchors.” Anchoring bias is just one of many systematic deviations from optimal judgment referred to as cognitive biases. And cognitive biases are what make the requirements myth so dangerous.

As students, we received assignments with conditions like “write at least 2,000 words,” “cite at least five academic references,” or “your code must compile without errors.” These were requirements in the sense that, if your assignment did not meet them, you would fail, or at least receive a lower mark than otherwise. “Requirement” literally means “a necessary condition for success.” In the language of epistemology, requirements imply a counterfactual approach to causality; that is, R causes S if S cannot occur without R. In this case, S is success and R is a requirement.
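
Put symbolically (a rough formalization of the sentence above, using the same R and S; the notation is added here for illustration and is not part of the original definition):

\[ \text{requirement}(R, S) \iff (\neg R \rightarrow \neg S) \]

In words: R is a requirement for success S exactly when the absence of R guarantees the absence of S.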

Therefore, when a person or document states that feature X is a requirement, that statement is doing a lot of work. Specifically, it implies: 1) we know the goal of the system; 2) we have identified every possible system that will achieve said goal; 3) every system capable of achieving the goal has feature X. Unless all three of these conditions hold, X is a design decision, not a requirement.
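
The same claim can be spelled out more formally (again an illustrative rendering; the symbols G, \( \Sigma_G \) and features are introduced here, not in the original). Writing G for the system’s goal and \( \Sigma_G \) for the set of all systems that would achieve G, asserting that feature X is a requirement amounts to claiming that G is known, that \( \Sigma_G \) has been fully characterized, and that

\[ \forall s \in \Sigma_G : X \in \text{features}(s) \]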

This illuminates three problems. First, many software projects begin with unknown, ambiguous or conflicting goals. Trying to state necessary conditions for success when we cannot even define success is meaningless. Second, even if we know the goal, we cannot predict with much accuracy whether a proposed system would achieve it. Third, even if we were reasonably sure that the proposed system would achieve the goal, we have no way of knowing that there isn’t another system that would also achieve the goal without feature X. Presented this way, most software projects obviously have no requirements, or too few requirements to drive the design process.

Most of the “requirements” attached to contemporary software projects are obviously not necessary conditions for success. Of course, good analysts, managers and developers know (consciously) that not every “requirement” in some document is an unquestionable success factor. The problem is that calling design decisions, preferences and other desiderata “requirements” (subconsciously) taps into a wide variety of cognitive biases that hinder critical thinking and impede creativity, including:

1) Status quo bias: the tendency to irrationally prefer the status quo

2) Confirmation bias: the tendency to pay more attention to ideas consistent with your beliefs and less attention to ideas inconsistent with your beliefs

3) System justification: the tendency to irrationally defend the existing system

4) Design fixation: “blind adherence to a set of ideas or concepts limiting the output of conceptual design”

Theoretically speaking, the requirements illusion causes developers to unnecessarily constrain their conception of the design space, thereby undermining their own innovative capacity. Luckily, the requirements illusion has a simple cure. Analysis should focus on operationalizing the problem or goal and building a nuanced understanding of the domain. On that basis, developers should simply propose system features and structures that seem likely to achieve the goal. An intermediate requirements-analysis step is more likely to undermine innovation than to reveal solutions.

Dr. Paul Ralph is a lecturer in Design Science at the Lancaster University Management School, and director of the Lancaster University Design Practices Lab. His work focuses on the empirical study of designers, including their practices, processes, cognition, tools, management and environments. He holds a Ph.D. in information systems from the University of British Columbia.