“It ain’t what you don’t know that gets you into trouble, it is what you know for sure just ain’t so,” said Mark Twain. Actually, some dude named Josh Billings said it, but continuing to attribute it to Mark Twain is nicely ironic.

When it comes to programming, our assumptions give us blind spots. I talked about this in relation to code in a recent column, but it can be even worse when we work for years with an outdated certainty.

The specifics of programming change with every release of a library, while the challenge of software development, rapidly delivering value against the needs of our users, is unchanging. But in between deprecated APIs and “The Mythical Man-Month” are the beliefs that shape not only how we approach problems, but which problems we commit ourselves to tackling.

The quintessential example may well be how older developers put programming languages in two camps (interpreted vs. compiled) and then make a bunch of assumptions about the characteristics of both the language and any programs written in that language.

JavaScript is a thoroughly dynamic language (a colleague recently told me of a codebase that used regular expressions on JavaScript function signatures in order to do something, although I’m not sure what, because I blacked out in horror), but in many host environments it’s actually compiled. The “compilation” of programs targeting the Java Virtual Machine and the .NET Common Language Runtime (and now, with Bitcode, iOS) is generally to an intermediate form that is turned into native code later.

Shaders, such as those at shadertoy.com, demonstrate how little value the old model delivers (yes, shaders are compiled, but can be updated more or less instantly within the browser).

Probably the only useful distinction is whether there is a compilation step distinctly perceivable by the developer. Today, this is not so much about the structure of the program as it is about the type system. You probably hold a strong belief that the type system of a programming language affects the productivity of those who use it. Did you learn to program in college? You likely feel that more formal type systems keep the developer on the straight and narrow, catching certain errors but, more importantly, keeping their code focused and clear. Did you learn to program on your own or in a code camp? You probably feel that the speed with which you can modify a function (perhaps even while the program continues to run) keeps the developer immersed, and that the kinds of errors that type systems catch are not the things that slow down development.
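
To be concrete about “the kinds of errors that type systems catch,” here is a trivial C sketch; the function and its names are invented purely for illustration:

    /* average.c -- illustrative only. The commented-out call below is rejected by
       the compiler before the program ever runs: the sort of slip a static type
       system surfaces up front and a dynamic language defers to run time. */
    #include <stdio.h>

    static double average(double sum, int count) {
        return sum / count;
    }

    int main(void) {
        printf("%f\n", average(12.0, 3));
        /* printf("%f\n", average("twelve", 3));  <- error: passing a char * where a double is expected */
        return 0;
    }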

If you are the type of developer who likes to debate such things, you know that there is ample support for your position, and that anyone who holds the opposite opinion is willfully obtuse. In reality, not so much.

The tl;dr of that paper (which is, itself, a tl;dr of other papers) is that, while there may be substantive effects from different type system approaches, those effects seem to be small, and there is no consistent winner.

Another example was brought to my attention by my nephew, who recently began programming in C in college. He informed me that “C is portable assembly language.”

Not so much. The claim was not particularly true even by the time I was his age, but the defining characteristic of machines in those days was their openness and (relative) simplicity. The reason to program in assembly language back then was flexibility and lack of overhead. C’s straightforward memory model and type-flexibility (void ** anyone?) may not have opened up 100% of the power of the system at hand, but its mental model of the machine was close.
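
For a sense of what that type-flexibility looks like in practice, here is a minimal C sketch (the struct and the comparator are mine, purely for illustration) built on the standard library’s qsort, which traffics in untyped void pointers and trusts the programmer to know what the bytes mean:

    /* sort_points.c -- illustrative only. qsort() moves raw bytes around through
       void pointers; the comparator asserts what those bytes "really" are. */
    #include <stdio.h>
    #include <stdlib.h>

    struct point { int x, y; };

    static int by_x(const void *a, const void *b) {
        const struct point *p = a, *q = b;
        return (p->x > q->x) - (p->x < q->x);
    }

    int main(void) {
        struct point pts[] = { {3, 1}, {1, 2}, {2, 3} };
        qsort(pts, sizeof pts / sizeof pts[0], sizeof pts[0], by_x);
        for (size_t i = 0; i < sizeof pts / sizeof pts[0]; i++)
            printf("(%d, %d)\n", pts[i].x, pts[i].y);
        return 0;
    }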

While it’s true that a C compiler is among the first pieces of software for a new chip, today even general-purpose chips are distinguished by their memory and cache structures and by their support for parallelism and concurrency. The reason to program in assembly language today is to take advantage of those things: to match the size of your data structures and code with the hardware; to pack your data so that it can be operated on in parallel; and to use advanced primitive operations relating to fetches and ordering of operations. You might still use C as scaffolding; data structure definitions are easier to read in C than in assembly language. But the mental model that you have to carry in your head to exploit a modern chip is one that’s vastly more complex and specialized than that presented by C.
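
As a rough sketch of the kind of hardware-facing detail I mean, consider this C fragment; the names are mine, and the 64-byte cache-line figure is an assumption about the target chip rather than a portable fact:

    /* counters.c -- illustrative only. The layout and the memory-order choices are
       there to match the hardware, not to express the algorithm. */
    #include <stdalign.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Pad each per-thread counter out to an assumed 64-byte cache line so that two
       cores bumping neighboring counters don't contend for the same line. */
    struct padded_counter {
        alignas(64) atomic_long value;
    };

    static struct padded_counter counters[4];

    static long total(void) {
        long sum = 0;
        for (int i = 0; i < 4; i++)
            /* Relaxed ordering: we want the values, not any cross-thread ordering guarantee. */
            sum += atomic_load_explicit(&counters[i].value, memory_order_relaxed);
        return sum;
    }

    int main(void) {
        atomic_fetch_add_explicit(&counters[0].value, 5, memory_order_relaxed);
        printf("%ld\n", total());
        return 0;
    }

Even this much is C-as-scaffolding: the instruction selection, the prefetching and the actual reordering of those loads and stores remain the chip’s business.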

If you were to poll developers of my age, I am confident that most would endorse the belief that a strong intuition of the underlying hardware is vital to being a good programmer. But how in the world can that be true in modern enterprise development? Your website runs in a container on some set of boxes in some datacenter that’s abstracted away by your cloud provider. Your app runs on phones and tablets whose chipsets and memory are abstracted away by Google or Apple. Your analysis and reporting programs work on data coming from any number of disparate streams: some local, many on the network, some synchronous, some asynchronous.

Speaking of college and “kids today,” perhaps the most dangerous belief held onto by older programmers is that they (we) are still open-minded. Even if we cultivate our humility and curiosity, our vision is narrower than it used to be. We inevitably squeeze new data and experiences into our established molds. By doing so, we not only retread questions people aren’t asking anymore (such as those discussed above), but, even more devastatingly, we devalue the questions they are asking.

At the risk of attempting to update the great Josh Billings, it ain’t the questions you answer wrong that show your ignorance, it’s the questions you don’t even think to ask.