“Maybe instead of talking about 100x programmers, we should talk about 100x programming.” This was a recent Twitter provocation from Reginald Braithwaite (@raganwald), author of “JavaScript Allongé.”
What would it take to achieve a two-order-of-magnitude change in software development pace? The boldness of the target dwarfs the common quibbles about syntax, semantics, and maybe even the dynamic vs. static typing debate.
As a guide to what such a future might look like, consider the past. In 1990, if you were a developer, you fell into one of three fairly neat categories: a mainframe programmer working in COBOL; a minicomputer programmer most likely using C (or, daringly, C++); or a PC programmer. In the last case, your arsenal certainly included a “fourth-generation language” (4GL), a general-purpose programming language such as Pascal, C, or Basic (not yet Visual), and a toolkit for creating character-mode windowed interfaces.
Even 25 years ago, those working in COBOL were likely maintaining legacy systems. C and C++ have advanced in the past 25 years, but not by anything approaching an order of magnitude, much less two. The most interesting comparison, to me, is with the business developer, whose focus is on delivering value to nearby stakeholders.
Today these developers work on websites, mobile applications, and internal systems using technologies that did not exist in 1990. Similarly, many of today’s developers are unfamiliar with the 4GLs that were so common in the DOS era.
4GLs were database languages (or, perhaps more accurately, database development environments). Developers could attach functions to specific tables, rows, or columns, and these functions would be called before (or after) data was created, updated, or deleted. The languages themselves were typically interpreted and Algol-like in syntax. The emphasis was on productivity, and for creating forms and data-entry programs, that productivity was incredible.
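To make the pattern concrete: in modern terms, these attached functions resemble database triggers. Here is a minimal, purely illustrative sketch in Python (real 4GLs used their own interpreted languages; the Table class and hook names here are hypothetical):

```python
# Illustrative sketch of the 4GL pattern: functions attached to a table
# are called before or after data is created. (Python stands in for the
# interpreted, Algol-like languages the 4GLs actually used.)

class Table:
    def __init__(self, name):
        self.name = name
        self.rows = []
        self.before_insert = []   # hooks called prior to creating data
        self.after_insert = []    # hooks called after creating data

    def insert(self, row):
        for hook in self.before_insert:
            hook(row)             # e.g. validation or defaulting
        self.rows.append(row)
        for hook in self.after_insert:
            hook(row)             # e.g. audit logging

orders = Table("orders")
orders.before_insert.append(lambda row: row.setdefault("status", "NEW"))
orders.after_insert.append(lambda row: print("audit:", row))

orders.insert({"id": 1, "total": 99.50})
# audit: {'id': 1, 'total': 99.5, 'status': 'NEW'}
```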
Of course, there are aspects of today’s applications—notably network connectivity, scalability and graphics—that go vastly beyond the single-computer department-level tasks that were the bread and butter of 4GLs. But if you think about domain value—adding a new tax rule, changing a workflow, generating a new report—these things happened just as quickly in 1990 as they do today. Or perhaps I should say just as slowly.
As Fred Brooks pointed out in his essay “No Silver Bullet,” the complexity of software is essential, not accidental. We will always get things wrong and make invalid assumptions, and only learn of these problems at the last minute when the user says “No, that’s not what I meant at all.” (Or, my favorite, when a user says “I suppose that could happen,” which a developer must hear as “You have to spend time dealing with this, because it happens.”)
Similarly, Joel Spolsky coined “The Law of Leaky Abstractions”: all non-trivial abstractions are, to some degree, leaky. A great deal of any developer’s time is spent with functions that “should work” but which, due to leaky abstractions, require rework and restructuring.
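A tiny, familiar instance (my example, not one from Spolsky’s essay): floating-point numbers abstract the reals, and the abstraction leaks as soon as the binary representation matters.

```python
# Floating-point arithmetic "abstracts" the real numbers, but the binary
# representation leaks through: 0.1 and 0.2 have no exact binary form.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# The code that "should work" gets reworked with an explicit tolerance:
import math
print(math.isclose(0.1 + 0.2, 0.3))  # True
```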
To achieve 100x programming, either these truths will have to change (which seems unlikely, given that they have not budged by anything approaching an order of magnitude in the past 25 years) or our programming approaches will have to.
It was about 25 years ago that objects promised to deliver huge productivity gains through code reuse. Object orientation was sold almost entirely on the premise that development would shift to working with reusable components, embodying everything from interface widgets (the one area where, arguably, reusable components have had some success) to business logic.
Didn’t happen.
A few years ago, I thought I had seen a glimpse of 100x programming. Intentional Software’s domain workbench concept blew me away with “projectional editing,” which allowed programming a GUI with a screen builder, a state machine with a diagram, or a circuit simulator with a circuit diagram. Unfortunately, while the company still exists, it has pivoted, apparently focusing more on corporate collaboration. I remain convinced that different aspects of programming should be done with different tools, different environments, and different paradigms, but even that is not, I think, enough for 100x.
The only idea I have, vague as it is, for achieving a truly futuristic programming capability is the troublesome concept that the programmer should focus on specifications, not processes, and that the computer should be able to offer several variations on the actual computation.
Specifications cannot just be example-based unit tests. The challenge is explained well by Scott Wlaschin in his recent blog post “An introduction to property-based testing.”
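Wlaschin’s examples are in F# with FsCheck; here is a minimal sketch of the same idea in Python, assuming the Hypothesis property-testing library and a pytest-style runner (the add function is just a stand-in):

```python
from hypothesis import given, strategies as st

def add(x, y):
    return x + y

# An example-based unit test pins down one hand-picked case...
def test_add_example():
    assert add(1, 2) == 3

# ...while property-based tests state laws that must hold for *all* inputs.
# Hypothesis generates many random cases and shrinks any failure to a
# minimal counterexample.
@given(st.integers(), st.integers())
def test_add_is_commutative(x, y):
    assert add(x, y) == add(y, x)

@given(st.integers())
def test_zero_is_identity(x):
    assert add(x, 0) == x
```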
While property-based specifications are considerably harder to write than example-based unit tests, they are more robust. They are also the only kind of input that could realize the vision of a “Do What I Mean” compiler: “Make an administrative screen for that data,” “Align the buttons.” Crucially, this vision would often not involve a complete specification; the computer would just offer alternatives. The role of the developer would shift to being as much a gatekeeper and editor as a creator.
As I’ve tried to work through this idea, those troublesome COBOL programmers come to mind. “Just write the specifications” is practically the original goal of COBOL. And that was invented not 25 years ago, but closer to 50!
Maybe we should be happy that we’ve at least figured out how to align CSS boxes.