At the beginning, it was called “microcomputing.” Enthusiasts were delighted at the idea that computing could finally be freed from the “priesthood.” Magazines like “Creative Computing” and “Byte” and “Kilobaud” foresaw a future when programming would be a skill as ubiquitous as reading, writing and long division.

Right. Nbdy duz lng div anymor & we all rite lk ths now.

The future refuses to cooperate with our predictions and forecasts. But aside from that, the early days of microcomputing were very exciting, because you could watch the first stages of evolution at work. There wasn’t a lot of software at the beginning. You had to write your own. So you went to the magazines to learn, and later on, CompuServe.

A useful article might explain why Quick Sort was better than Bubble Sort, comparing sort times, explaining the algorithm, and finally providing a sample code listing that you could adapt to your own use. Another article might do the same for hash tables. A third would walk you through string handling.
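For flavor, here is what the routine being praised looks like, rendered here in the Turbo Pascal notation that appears later in this piece rather than in one of the era's BASICs. This is a generic textbook version of Quick Sort, not a reproduction of any particular magazine's listing:

    type
      IntArray = array[1..100] of integer;

    procedure QuickSort(var A: IntArray; Lo, Hi: integer);
    var
      I, J, Pivot, Temp: integer;
    begin
      I := Lo;
      J := Hi;
      Pivot := A[(Lo + Hi) div 2];
      repeat
        while A[I] < Pivot do Inc(I);      { scan up for an element >= pivot }
        while A[J] > Pivot do Dec(J);      { scan down for an element <= pivot }
        if I <= J then
        begin
          Temp := A[I];  A[I] := A[J];  A[J] := Temp;
          Inc(I);
          Dec(J);
        end;
      until I > J;
      if Lo < J then QuickSort(A, Lo, J);  { sort the left partition }
      if I < Hi then QuickSort(A, I, Hi);  { sort the right partition }
    end;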

A lot of those early tutorials were linked to simple games like Hammurabi and Tic-Tac-Toe, so after you finished entering the code (learning as you went), you could play the game—and as you learned, you could add your own modifications. Eventually, “Creative Computing” showed how to write Colossal Cave Adventure in BASIC and store it all on two floppy disks, and that was the beginning of the text adventure explosion.

In those days, every manufacturer had its own implementation of BASIC, so listings often had to be translated. That meant becoming familiar with a lot of different dialects. When Turbo Pascal arrived, it unified a lot of software development, and because it compiled directly to a COM file, it was faster than interpreted BASIC, a very important advantage when you were running at only 2 MHz.

BASIC was notorious for resulting in spaghetti code. Turbo Pascal made it possible to write structured code. And although Pascal was originally intended as a learning language, Turbo Pascal turned it into a practical development tool, combining the editor and the compiler in a single program. It was a powerful breakthrough for both hobbyists and professional programmers.

Where BASIC limited you to meaningless variable names like A1 or X10$, Turbo Pascal let you define variable names that represented what you were using them for, like EventDate and LastNameStr and TotalExpense. Turbo Pascal had you creating your own procedures and functions and naming them appropriately. No more GOSUB or DEF. This made code easier to read. It was practically self-commenting. And that made it a lot easier to debug.

    if DilithiumCrystalUsage = DilithiumCrystalLimit then
        ScottyReport('Cap''n, I canna give yeh nae mahr pahr.')
    else
        Inc(Power, 1);

What I eventually loved the most about Turbo Pascal was that the language made it possible to create libraries of reusable functions: You weren’t just writing program-specific code, you were extending the language.
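For anyone who never saw one, a library in Turbo Pascal took the form of a unit: an interface section declaring what the unit offers, and an implementation section containing the code. A minimal sketch, with an invented string routine standing in for a real library function:

    unit StrTools;

    interface

    function PadLeft(S: string; Width: integer): string;

    implementation

    function PadLeft(S: string; Width: integer): string;
    begin
      { pad the string with leading spaces until it reaches the requested width }
      while Length(S) < Width do
        S := ' ' + S;
      PadLeft := S;
    end;

    end.

Any program could then pull the routine in with a single "uses StrTools;" line, which is what made it feel like extending the language rather than just writing more code.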

But if writing understandable code was now significantly easier, debugging it was not, because we were now writing much larger and much more sophisticated programs.

At the beginning, if you wanted to track down a bug, you had to walk through your code almost step by step, monitoring your variables to discover where they went off the rails. In the first iterations of Turbo Pascal, you inserted your own breakpoints: bits of code that paused the program and reported the state of suspect variables. Eventually, if you were lazy enough (like I was), you wrote a general-purpose Breakpoint procedure to do the job, a single line of code you could drop into your listing at any point where you needed to see the state of your variables. That simple trick sped things up a lot.
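A minimal sketch of what such a Breakpoint procedure might have looked like; the name, the message format, and the wait-for-a-keypress behavior are assumptions for illustration, not the original routine:

    procedure Breakpoint(Location, VarName: string; VarValue: longint);
    begin
      WriteLn;
      WriteLn('*** Breakpoint: ', Location, ' ***');
      WriteLn(VarName, ' = ', VarValue);
      Write('Press Enter to continue...');
      ReadLn;              { pause the run until the programmer has read the report }
    end;

    { a single line dropped into the listing wherever a variable looks suspect }
    Breakpoint('engine room', 'Power', Power);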

Subsequent iterations of Turbo Pascal included powerful debugging tools that let you step through your code as well, watching suspect variables as you went. And that was wonderful too.

But eventually, I started thinking: Could I save myself time and grief by writing black-box procedures, functions and units? What if every self-contained routine had to perform its own internal validation? What if it could call a validation routine, and if that routine returned an error, the procedure or function would stop the program to report it?

    if not ValidString(InputStr) then
        ShowError('This Proc stopped. ', InputStr, ' not valid.');

And in one troublesome instance, I even installed an additional validation check before exiting the function to make sure the output also fell within expected parameters.
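Pulled together, a black-box routine built this way might look like the sketch below. ValidString, ShowError, and CleanedName are stand-ins with invented rules; the point is the pattern: check the input on the way in, do the work, check the output on the way out, and halt with a report the moment anything looks wrong.

    function ValidString(S: string): boolean;
    begin
      { stand-in rule: a valid string is non-empty and under 80 characters }
      ValidString := (Length(S) > 0) and (Length(S) < 80);
    end;

    procedure ShowError(Msg1, Msg2, Msg3: string);
    begin
      WriteLn(Msg1, Msg2, Msg3);
      Halt(1);             { stop the program so the error reports itself right away }
    end;

    function CleanedName(InputStr: string): string;
    var
      Temp: string;
    begin
      { validate the input before doing any work }
      if not ValidString(InputStr) then
        ShowError('CleanedName stopped. ', InputStr, ' not valid.');

      Temp := InputStr;    { the routine's real work would go here }

      { validate the output before handing it back }
      if not ValidString(Temp) then
        ShowError('CleanedName stopped. Output ', Temp, ' not valid.');

      CleanedName := Temp;
    end;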

The more I did this with my growing libraries, the more rugged my code became. Blind-spot errors would report themselves almost immediately, and the ones that popped up weeks or months later told me exactly where to start looking. The rest of the debugging process was simplified as well: once I knew what the error was, I could backtrack to the piece of code that had allowed it. Debugging that could have taken a day or longer was now fixable in minutes (most of the time, anyway).

Having this consistency across my library of tools also made the development of new programs a lot easier, because now I could plug in existing units, knowing I had already hammered them as hard as I could. Eventually, I also added this:

    if DebugFlag and not ValidString(InputStr) then
        ...

The DebugFlag allowed me to turn off all debugging with a single flag if I ever reached the point where I believed the program to be finished. (No, I never did reach that point.) What I did discover, however, was that while I had been worrying about clock cycles and code size, the power of the hardware had evolved to a point where any performance penalty that the validation checks might have caused was so insignificant that I couldn’t see any difference.
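One way to wire that up is a single typed constant that every check consults; a minimal sketch:

    const
      DebugFlag: boolean = True;   { flip to False to switch off every validation check }

    { ...and inside each routine: }
    if DebugFlag and not ValidString(InputStr) then
      ShowError('This Proc stopped. ', InputStr, ' not valid.');

A nice side effect: because Turbo Pascal uses short-circuit boolean evaluation by default, when DebugFlag is False the validation routine is never even called, so the disabled checks cost next to nothing.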

That was a long time ago. Obviously, the development tools of today have become far more powerful. There are whole classes of bugs that are headed toward extinction. I haven’t seen many complaints about reclaiming memory or cleaning up garbage in a while.

But if all those little technical glitches are less common, what remains are all the other errors that programmers can make: flawed algorithms, bad assumptions about the data, or even a failure to understand the needs of the larger process.

Our tools have evolved and matured to make it very easy to model and manipulate data in a marvelous variety of ways, but are we losing accuracy along the way? Are the results valid just because they look good? Have our powerful and sophisticated tools created a new kind of sloppiness?

Back in the nineties, there was a satellite mission that failed. There was nothing wrong with the code, but the programmer assumed that the length of a day is precisely 24 hours—it isn’t. It’s just half a smidge longer. But the controlling software was based on an inaccurate assumption, and a very expensive satellite did not end up where it was supposed to.

We have gained efficiency in production, yes, but have we made an equal advance in the integrity of the resultant code? What other assumptions are still buried in libraries? Have our sophisticated production tools created a shortsighted belief among programmers—or even whole code foundries—that the ruggedness of the resultant software is also proof of its accuracy?

This is the real question (and I think it’s one that does not get discussed enough): Has our ability to debug code caught up with our ability to write it? Are we creating algorithms that are accurate to the situations we are modeling? Is our software as valid as we think?

I’d like to hear what the programming community thinks. What do you think?

David Gerrold is the author of over 50 books, several hundred articles and columns, and over a dozen television episodes, including the famous “Star Trek” episode, “The Trouble with Tribbles.” He is also an authority on computer software and programming, and takes a broad view of the evolution of advanced technologies. Readers may remember Gerrold from the Computer Language Magazine forum on CompuServe, where he was a frequent and prolific contributor in the 1990s.