My 15-year-old niece was told by her science teacher that the moon landings were faked. When I heard this, my blood boiled, much as if exposed to a vacuum, and in between inarticulate screams of rage, I sent various links to videos showing things well beyond the special effects capabilities of the Apollo era. (The most eye-catching to me are the lunar rover films, which show fine dust flying in a purely ballistic manner.) She’s old enough to evaluate the evidence herself, but I realized that I didn’t want to advise her to do so. She ought not to waste her time on such nonsense, just as you and I ought not to waste our time on the conspiracy theories that now inevitably accompany every piece of actually momentous news.
Wasting time is easy. Discussions of programmer productivity generally focus on production: the idea that over a weekend a great programmer can code a marvel. Rarely do they focus on the negative: the prospect that most of us spend significant portions of our time running down dead ends. I’ve always been skeptical of the theory that there are programmers who sit down and type out perfect programs like Rachmaninoff at the piano, but I am regrettably aware of how often I hang my head in humiliation, revert a dozen files, and make the simple change that I ought to have recognized from the first. I like to think I’m a decent programmer, but I’d hate to confront how much time I spend looking at the wrong thing.
Conspiracy theories have blossomed in the Internet era; with so much information, it’s easy to project a map of curiosities and possible interactions onto any subject. It’s similar with a complex program: You have not just your own modules, but also libraries and compilers and operating systems and browsers and deployment environments and so forth ad infinitum.
Programmers are not immune to holding irrational viewpoints (I give you the topic: “The use of LISP in industry”). What’s the conspiracy theory of programming? “I think it’s a compiler bug!” Yes, they exist, but every time you find such words forming in your throat, you’d be well advised to replace them with, “I’m looking in the wrong place.” I have a colleague with whom I share a silly tradition in which every time we talk through a problem, we dutifully acknowledge, “…or I may be doing something idiotic.”
In general, programmers cannot afford to hold on to opinions that no longer serve. Ours is a discipline that proves us wrong a hundred times per day. The great advantage of pervasive automated testing is that it more quickly shows us our mistakes, allowing faster correction. But even better, by making it clear how often we screw up, in the long run it teaches us to view being wrong as no great indictment of our overall aptitude.
Regrettably, “It’s not about whether you make mistakes, but whether you correct them” is not as common a sentiment as the now-clichéd boxing sentiment, “It’s not about whether you get knocked down, but whether you get up.”
In a recent article in “New Scientist” (“Why science is the source of all progress,” April 26, 2011), David Deutsch (perhaps familiar to “SD Times” readers for his work in quantum computation) argues that what makes scientific explanations so effective is that they are “hard to vary.” Once you say that seasons are caused by the tilt of the planet’s axis, you are locked in to expecting the northern and southern hemispheres to be opposite; if you explain, as did the ancient Greeks, that the seasons are caused by the behavior of gods like Demeter, Persephone and Hades, you can easily vary your explanation to suit whatever weather patterns occur.
Similarly, once you’ve posited a cabal talented and disciplined enough to fake a moon landing or similar epochal event, you can always simply invoke another layer of deceit, or recurse into an inner circle whose talents and discipline and foresight are even more far-reaching and sinister.
That’s one of the reasons why conspiracy theories, if taken more seriously than a party game, are boring. Once you realize that “that’s what they want you to believe” is a recursive function that can be used to create any explanatory structure desired, actually going down the rabbit hole is wearisome. It’s a lot more interesting to find explanations that are hard to vary and thus might happily prove themselves insufficient.
Similarly, those aspects of programming that make our programs a little harder to vary—hundreds or thousands of test cases, type safety, and so forth—actually end up making our progress that much faster.
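A minimal sketch of that idea, using a hypothetical `celsius_to_fahrenheit` function of my own invention: once a test pins a function to fixed points, the implementation becomes hard to vary, and any change that breaks the contract announces itself immediately.

```python
# A unit test makes a program "hard to vary": these fixed points
# lock the implementation in place, so a wrong "fix" fails loudly
# instead of quietly shifting the program's behavior.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

def test_conversion() -> None:
    # Well-known fixed points of the Celsius/Fahrenheit scales.
    assert celsius_to_fahrenheit(0) == 32    # freezing point of water
    assert celsius_to_fahrenheit(100) == 212  # boiling point of water
    assert celsius_to_fahrenheit(-40) == -40  # the scales cross here

test_conversion()
print("ok")
```

The type annotations play the same role at compile/check time that the assertions play at run time: both narrow the space of programs that can claim to be correct.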
I’m happy to say that the next day my niece’s science teacher debunked the evidence he’d presented, and the whole incident had been a lesson in critical thinking. It’s certainly what I’d hoped for, but I’m still left wondering if “study the evidence and decide for yourself” is the best advice in a world where the Web will offer up limitless evidence. Perhaps we should rapidly consider the constraints that an argument imposes upon itself and, at least initially, favor the argument that is hardest to vary.
Larry O’Brien is a technology consultant, analyst and writer. Read his blog at www.knowing.net.