Functional Programming is the Next Big Thing in mainstream development. As I discussed in my previous column, functional programming approaches have slowly become more common in the mainstream, not because programmers have become more interested in Category Theory, but because functional approaches work well with the 21st century’s signature advancement in mainstream programming: unit-testing.
In other words, functional programming fits right into the mainstream of corporate development, where legacy codebases are large, programmer productivity must be high across teams with different experience levels, and tooling is important. This is the same place where dynamic languages such as Python and Ruby have been knocking on the door for several years without getting the reception they deserve.
Ruby has certainly crossed the chasm for Web development, and Python has become common in some domains (notably science), but neither seems to have made deep inroads into general corporate development. One major issue in both cases is that neither is native to .NET’s Common Language Runtime or to the Java Virtual Machine. Ports to both virtual machines exist, but library-compatibility complaints are the standard first response when I ask whether a team has considered them. Whether those compatibility issues would actually evaporate after a few hours of configuration tweaking is a conversation few people seem eager to have. F# (in the .NET world) and Scala (on the JVM) don’t have to jump through any hoops to use popular libraries. Advantage: native-to-VM languages.
A more significant advantage of functional languages over dynamic languages is IDE integration. In dynamic languages, the exact type of a variable is not fully determined until runtime, which is what makes them “dynamic.” The up-and-coming functional languages are statically typed, and a good portion of their design effort has gone into ensuring that precise types can be inferred with no (or minimal) hints from the programmer. Such “type inference” means that these languages need less of the repetitive “finger typing” in which one restates the type on both sides of an assignment (e.g., Foo myFoo = new Foo()). More importantly, type inference means that IntelliSense-style code completion in the IDE becomes much faster and more accurate.
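To make the contrast concrete, here is a minimal sketch in Scala (one of the type-inferred languages named above; the identifiers are illustrative, not from any particular codebase). No type appears on the left of any binding, yet every binding has a precise static type that the compiler, and therefore the IDE, knows exactly:

```scala
// Type inference in Scala: no annotations needed, yet every binding
// has a precise static type known to the compiler (and the IDE).
object InferenceDemo {
  def main(args: Array[String]): Unit = {
    val greeting = "hello"                       // inferred: String
    val count    = 42                            // inferred: Int
    val squares  = List(1, 2, 3).map(n => n * n) // inferred: List[Int]
    println(s"$greeting $count $squares")        // prints: hello 42 List(1, 4, 9)
  }
}
```

Compare the third line with its finger-typed Java equivalent, List&lt;Integer&gt; squares = ..., and note that a completion popup on squares can still offer only List methods, because the inferred type is every bit as exact as a declared one.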
It’s perhaps counter-intuitive that code completion should be so important to professional developers who, presumably, are familiar with the libraries and classes with which they program. But one of the things that distinguishes professional development is the breadth of API surface area it requires: not just one or two libraries of utility functions and a GUI toolkit, but a large and constantly changing subset of the entire platform API. Familiarity with the platform is, of course, necessary, but keeping the precise names and signatures of thousands of functions in mind is a burden even for those with steel-trap minds.
Admitting this, though, is apparently embarrassing, so instead of saying that the reason they want static typing is fast IntelliSense, people who should know better continue to assert that statically typed languages are “safer, because the compiler can catch errors that otherwise wouldn’t show up until runtime.” It’s true that a statically typed language can detect that you’ve assigned a string to a double without running your code. But no type system is so strict that it can substitute for a test suite, and if you have a test suite, type-assignment errors are discovered and precisely diagnosed with little difficulty. It is simply not the case that stricter type systems necessarily lead to higher real-world quality, although it would be hard to argue that a type system based on modern principles in any way hurts runtime quality.
But even if type-inferred functional languages combine the IDE integration and tooling of today’s mainstream languages with the terseness of dynamic languages, why is it “certain” that they will displace (or control the evolution of) today’s mainstream languages? The answer is concurrency. Regular readers know that this is a soapbox onto which I regularly climb: the shift towards manycore hardware will dominate the evolution of the software development field over the next decade. Today, regrettably, we do not have a reliable and broadly comprehensible model for developing concurrent systems. The shared-memory model of today’s mainstream languages is flat-out broken; Software Transactional Memory has been unable to prove itself; and the Actor model, while a conceptual improvement over shared memory, does not address the hard problems of composition and coordination.
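As a minimal sketch of why the shared-memory model is broken, consider the classic lost-update race, again in Scala (a hypothetical toy program, not anyone’s production code): two threads increment a shared counter with no coordination, and because each increment is a non-atomic read-modify-write, updates silently disappear.

```scala
// The classic lost update: two threads do read-modify-write on a
// shared counter with no coordination, so increments can be lost.
object RaceDemo {
  var counter = 0 // shared mutable state, deliberately unsynchronized

  def run(): Int = {
    counter = 0
    val t1 = new Thread(() => for (_ <- 1 to 100000) counter += 1)
    val t2 = new Thread(() => for (_ <- 1 to 100000) counter += 1)
    t1.start(); t2.start()
    t1.join(); t2.join()
    counter // frequently less than 200000: updates were silently lost
  }

  def main(args: Array[String]): Unit =
    println(s"expected 200000, got ${run()}")
}
```

On most runs the result falls short of 200,000, and nothing in the program text warns you. Making it reliable requires locks or atomic operations, which is exactly the error-prone, hand-rolled coordination that alternatives such as Software Transactional Memory and the Actor model are attempting, so far unsuccessfully, to replace.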