In recent columns, I’ve been talking about language and paradigm selection from the standpoint of the developer. I’ve pointed out that functional programming techniques are already more widespread than generally acknowledged and predicted that FP will grow in influence. I’ve also tried to argue that FP has some major drawbacks, such as a lack of great communicators and a willingness to promote silver-bullet thinking, which might keep functional languages from “crossing the chasm” and doom them to remain what they are now: a highly influential niche.
This developer-centric discussion might lead some readers to think that I’m recommending that development teams embrace Haskell, Scala or F#. And while it’s true that many developers would enjoy and benefit from learning a more “pure” functional language, and that the benefits (and shortcomings) of a language often become apparent only when solving real-world problems, what’s good for an individual developer is not necessarily good for the organization. At the organizational level, I think it’s still quite rare for a functional language to be appropriate for core development.
While “write a project using F#, commit it to version control, ask forgiveness” may be an acceptable approach for a few scripts relating to DevOps or system administration, it’s most certainly not an appropriate way to approach choosing a programming language for an enterprise system.
Generally, selecting a programming language is one of the riskiest choices that a program manager makes, as it binds the team to a technology environment: vendors, libraries, conferences, educational resources, etc. Selecting a programming paradigm is even more dramatic.
Putting aside for this column the technical and conceptual pluses and minuses, I want to concentrate on the risk aspect. It’s been said that the job of software project management is the job of risk management. That goes a little too far, but it has far more than a grain of truth to it. The current vogue in technical project management, driven by Apple’s dazzling decade of innovation, elevates vision and execution above all else, and the Devil takes the hindmost. (Or, as Ricky Bobby so eloquently put it, “If you ain’t first, you’re last.”)
Most of the lists of risks I’ve seen in recent years have overemphasized low-probability, high-impact problems (e.g., “Our supply chain did not deliver the new sprocket at the end of the first quarter. We can’t scale to handle the success of our launch. A meteor strikes our data center.”). They rarely accurately present the real risks, which include such mundane things as troublesome subcontractors, requirements creep, insufficient input from end users, and new technology.
“Inexperience with the chosen technology” appears as a major risk in every quantitative analysis I’ve ever seen, and this is one of the few areas in software development where there’s actually a decent amount of research. Underestimating training or spin-up costs, overestimating productivity gains, unexpected internal resistance to change: all of these contribute to the risk of switching programming languages, and doubly so when switching paradigms.
Further, even beyond the scope of the initial product phases, risk management demands that project managers consider the maintenance and evolution of the code over time. Of course, if one chooses an obscure programming language or other technology, there’s the possibility that it will wither. (I am not expecting any Modula-3 compilers to be available for the Google Glass API, for instance.) But core software needs to be written with technologies that are not just viable, but are mainstream or close to mainstream.