Back in the earliest days of “micro-computing”—even before IBM and MS-DOS arrived on the scene—there were approximately 100,000 personal computers in the world. Those who were using their machines for business or writing were most likely using CP/M. And many Apple users installed add-on boards with a Z80 chip so they could run CP/M programs as well.

CP/M was a pretty rugged file-handling system for its time. It worked around the limitations of floppy disks, and later on it adapted well to 10-megabyte hard drives. When IBM needed an operating system for its 16-bit PC, it considered licensing CP/M, but talks broke down and IBM went to Microsoft for MS-DOS. MS-DOS used a lot of the same concepts and mechanisms as CP/M, so the most popular programs of the day—WordStar, dBase II, Turbo Pascal—were easily ported over. The business-level “micro-computer” users quickly graduated from their 8-bit machines to the increased power of IBM PCs or IBM-compatible PCs, and that was the end of CP/M.

In those days, an operating system wasn’t an operating system as much as it was a file-handling system. You could copy, delete and move files. You could get a directory listing. You could write batch files to run scripts. And you could run programs—one at a time. You would type the program name and perhaps a few parameters like /s or -r, and the program would run.

That was your interface. And a lot of software behaved the same way. Your monitor would display a continually scrolling list of commands and responses. Until you could peek and poke actual screen locations, the front end of your own software had to be a command-line interpreter too. And because every monitor had its own specific codes for peeking and poking, a command line was a lot easier than providing configuration files for every piece of hardware on the market.

The best of the early command-line interpreters simulated spoken English, and for a while that seemed to be the direction that the computer interface was headed. A command-line interpreter would process whatever you typed in and produce an appropriate result. If what you typed was incomprehensible to the interpreter, the program would respond with a “Huh?” message, or a less polite “INPUT ERROR.”

One of the best command-line interpreters was built into the Colossal Cave Adventure game, and later on the Zork series of text adventures and the Infocom games that expanded on that. Again, you typed and the computer scrolled responses up the screen.
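Just to make the mechanism concrete, here is a minimal sketch of that style of interpreter, written in modern Python purely for illustration (the verbs and responses are invented, not taken from Adventure or any Infocom game): read a line, match it against the verbs it knows, and fall back to “Huh?” when the input doesn’t parse.

```python
# A toy command-line interpreter in the spirit of the early text adventures.
# The verb list and responses are invented for illustration only.

RESPONSES = {
    "look": "You see a dimly lit room and a glowing terminal.",
    "inventory": "You are carrying nothing.",
}

def interpret(line):
    """Return a response for the typed line, or 'Huh?' if it doesn't parse."""
    words = line.lower().split()
    if not words:
        return "Huh?"
    verb = words[0]
    return RESPONSES.get(verb, "Huh?")

def main():
    while True:
        line = input("> ")
        if line.strip().lower() == "quit":
            print("Goodbye.")
            break
        print(interpret(line))   # responses scroll up the screen, just like 1980

if __name__ == "__main__":
    main()
```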

On the database side, dBase II could be run from the command line, or you could write complex scripts. When it became possible to write directly to specific screen locations, you could process data in forms that would update as you worked. This static interface began to shift how we thought about accessing our data. It was one of the first steps away from the command-line interpreter.

But the real powerhouse software of the moment was WordStar, one of the most sophisticated programs available for CP/M and later DOS. It set the standard for word-processing software for many years. The top third of the screen was a menu of commands; the bottom two-thirds was filled with the text you were working on. Once you had learned the commands so well that they were second nature, you could eliminate the menus and go to a full-screen mode.

Older users will remember the control-key diamond of S/E/D/X for moving the cursor. If you hit the top or the bottom of the screen, your text would scroll up or down. Other control keys gave you access to a whole repertoire of necessary functions. Ctrl-K-S would save a file. Ctrl-P-B would shift to boldface. Remember those? If you were a touch typist, once you had the muscle memory for WordStar’s command structure, you could hit 120 words per minute on the straightaway.

Many science fiction writers were quick to abandon typewriters for WordStar and other word-processing programs. Most notable were Larry Niven and Jerry Pournelle. But some users didn’t like WordStar’s control-key menus and preferred WordPerfect or Electric Pencil.

With WordStar (and other word-processing software), the command line disappeared and users began to experience a much more hands-on relationship with their data. You typed and the words appeared. You moved the cursor around, you selected blocks of text to move, copy, cut, paste or highlight. Your keyboard became a controller for the action on the screen.

As programmers gained a greater understanding of the available hardware, the software evolved. The more clock cycles became available, the more new features software grew. Soon, every program had its own specific interface, and for a while there were a lot of public disagreements about how software should look and feel. This argument reached its most extreme when a spreadsheet called Lotus 1-2-3 hit the market.

Lotus 1-2-3 was a very powerful spreadsheet for the time, and it was considered a “killer app.” That is, the application was so useful that you would choose a computer based on its ability to run that app. Just as VisiCalc before it had sold a lot of Apples, Lotus 1-2-3 sold a lot of IBM PCs and compatible boxes.

Lotus 1-2-3 had a menu across the top. If you had a mouse connected to your machine—yes, we had mice in the DOS era—you could click on a menu item. Or you could press the control key and the underlined letter of the specific menu item to activate it. Or you could use your keypad to move a highlight to whichever menu item you wanted and then hit Enter to trigger that function. So you had three different ways to manage the program, whichever one suited your own sense of how a program should behave.
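The underlying pattern is simple enough to sketch: one table of menu entries, several routes into it. The fragment below is illustrative Python, not anything from Lotus; the item names, handlers, and the crude mouse hit-test are all invented to show how a hotkey letter, a highlighted index, or a mouse click can all land on the same function.

```python
# One menu, three ways in: a hotkey letter, a highlight moved with the arrow
# keys plus Enter, or a mouse click on the menu bar. Names and handlers are
# invented for illustration.

MENU = [
    ("File",  "f", lambda: print("file menu")),
    ("Range", "r", lambda: print("range menu")),
    ("Copy",  "c", lambda: print("copy command")),
]

def by_hotkey(letter):
    """Hotkey route: match the underlined letter."""
    for label, key, action in MENU:
        if key == letter.lower():
            return action
    return None

def by_index(highlight):
    """Keyboard route: the arrow keys move the highlight, Enter triggers it."""
    if 0 <= highlight < len(MENU):
        return MENU[highlight][2]
    return None

def by_mouse(column, item_width=8):
    """Mouse route: a crude hit test mapping a screen column to a menu cell."""
    return by_index(column // item_width)

# All three routes resolve to the same handler:
by_hotkey("r")()   # keyboard shortcut
by_index(1)()      # highlight + Enter
by_mouse(12)()     # mouse click at column 12
```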

This interface was intuitive. The menu gave you instant access to all of the program’s functionality, teaching you how to use it as you went. Beginners or casual users wouldn’t have to go to a manual to find out how to perform specific functions; they could work their way through the menus. Experienced users who had mastered the various control-key combinations could blaze through their work. So the program was accessible to all users, regardless of skill level.

This was such a marvelous way of accessing program functionality that many software publishers immediately imitated it. And Lotus immediately sued over “look-and-feel” issues, claiming that it was a copyright violation to copy its method of operation and its file format. It won its case against Mosaic, but when it went up against Borland’s Quattro Pro, the First Circuit Court ruled that it is not a copyright violation to have a compatible menu structure, nor is it a violation for a program to be file-compatible with other software.

This was good news for the software industry. Can you imagine what a mess we’d be in today if every program had to have its own specific command structure and its own file format? Arghh!!  Shoot me now.

The big mistake at Lotus was that it invested too much of its effort in lawyers instead of programmers, so it was pretty much unprepared when Windows 3.0 arrived. Microsoft Excel ate its lunch, its after-school snack, and most of its dinner too. The folks at Lotus forgot that the real goal is usability, and 1-2-3 is now just a footnote in history.

There’s another lesson to be learned here too. Getting litigious will piss off the rest of the industry, it will destroy the possibility of creating useful partnerships, and ultimately it will alienate the all-important user-base.

But the menu revolution didn’t happen overnight. In those days, a lot of old-school programmers were resistant to mice, menus or even keyboard shortcuts. They had grown up with the command line, and to them that was the right way to run a computer. Living inside the bubble, they didn’t realize that ordinary people were simply looking for an intuitive relationship with the monitor screen.

One programmer of my acquaintance had written the single most powerful money-tracking program available. He dominated the market for at least two years. His software was both sophisticated and rugged. You could use it to balance your checkbook and track all of your home finances, but it was also powerful enough that you could also use it to manage a small business. He regularly added new features, and he monitored the competition so he could stay ahead of it. He released upgrades on a regular basis and was on track to owning his market niche.

But he had a blind spot. When I demonstrated to him how versatile a mouse-and-menu system could be—I had written one for a (now long forgotten) VGA color-shifting utility called Prism—he pooh-poohed it. He said that menu systems were for beginners. (Well yes, but a lot of users are beginners. Go out and talk to a few.) He said that his program had a command-line box at the bottom of the screen, and that was why it was far more powerful and versatile than any menu system could be. All you had to do was remember all the different commands and all the different modifiers. It was just like typing a plain English sentence. “SHOW ALL INCOME FOR DECEMBER.”
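To show the kind of front end he was defending (this is an illustrative sketch, not his code; the command grammar and the ledger data are invented), a “plain English” command box can be as simple as keyword matching over the typed sentence:

```python
# A toy "plain English" command box, roughly the kind of front end described
# above. The command set and the sample data are invented for illustration.

LEDGER = [
    ("income",  "december", 1200.00),
    ("income",  "november",  950.00),
    ("expense", "december",  310.00),
]

def run_command(sentence):
    words = sentence.lower().split()
    if not words or words[0] != "show":
        return "INPUT ERROR"
    category = "income" if "income" in words else "expense" if "expense" in words else None
    month = next((w for w in words if w in ("november", "december")), None)
    if category is None:
        return "INPUT ERROR"
    rows = [amt for cat, mon, amt in LEDGER
            if cat == category and (month is None or mon == month)]
    return f"{category.upper()} total: {sum(rows):.2f}"

print(run_command("SHOW ALL INCOME FOR DECEMBER"))   # -> INCOME total: 1200.00
print(run_command("balance my checkbook"))           # -> INPUT ERROR
```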

There was no question that his interface worked well—if you knew how to work it. Unfortunately, it turned out that users didn’t want to memorize all the different commands and modifiers. They didn’t want to type a whole sentence. They wanted the instant response that they got from a couple of shortcut keys or two clicks on a menu. By the time Windows 3.0 arrived, the market opportunity had been swallowed up by Quicken and Microsoft Money. Even though they were not as powerful, they had a much more accessible interface.

This is the point: For the user, the interface is the software. The user doesn’t interact with the actual engine underneath, he only knows the front end as a set of buttons, knobs, dials and switches that produce specific responses. He trusts that the underlying engine works.

When the industry moved onto the Mac and later onto Windows, the graphic user interface became inevitable and all software started to look alike: a menu bar at the top with File, Edit, Tools, View and Help options. Apple said it wanted all Mac software to have the same interface so that users would have a consistent experience across the system. If you knew how to use one program, you’d know how to use them all. That idea was both idealistic and draconian—and ultimately stifling. It left no room for experimentation—or evolution.

It was the PC world that became the fertile arena for experimentation. Some of the efforts were amazing. Some programs—like Kai’s Power Tools and Bryce—were fun; others were not. Some were easy to use, others had a steep learning curve. What became evident then, and remains true today, is that every software tool has its own specific requirements. What data is requested? What options need to be set for diddling? And how can the result best be presented?

Users tend to see the interface as the software itself, not as a control panel that directs the underlying machinery to produce a desired response. So we dismiss a clumsy interface as a clumsy program. We assume a well-designed interface represents powerful and efficient code. In truth, the interface only represents the programmer’s best concept of how to access the mechanisms of the software. Like my friend above, some programmers are much better at writing efficient code than they are at creating an intuitive interface. Programmers aren’t known for humility, and there aren’t many who will admit that their design skills might need work, but the result of that short-sightedness is that some software design ends up so clumsy (and ugly) that it’s self-defeating.

And sometimes the interface is so cluttered with confusing functions that the result is intimidating. It’s not unusual for very expensive software to present an enormous repertoire of unexplained functions. (I’m supposed to be smart enough to know what “unweighted cross-phase defibrillation” means. And no, I’m not going to call Wesley Crusher to have him explain it to me. If I don’t believe a 15-year-old super-genius can save a starship, I’m not going to believe he can understand this either.)

While big companies like Microsoft can invest millions of dollars in usability testing (giving us the ribbon instead of nested drop-downs), a guy who’s working out of his garage has to rely on his own gut instincts, and those instincts might not always be in tune with the way the user comes to the software.

The interface is the most important part of any piece of software, and too often it’s also the most neglected. Just browse through some of the programs offered on shareware sites. It can be pretty depressing. Even the best software is useless if the user can’t figure out how to get the desired result. If using the program is a tiresome or annoying exercise, he won’t come back.

Much of interface design is common sense. Look at the software you have running now. Why do you favor it? What’s your experience? That’s the question that needs to be asked when designing any interface. If I’m the user and I’m coming to this cold, will the program point me to where I want to go? Can I get there in three clicks or less? Interface design is as much an art as it is a science. A little careful thought can make a big difference in the marketplace.

What’s your experience of interface design? What would you recommend that designers do—or not do?

David Gerrold is the author of over 50 books, several hundred articles and columns, and over a dozen television episodes, including the famous “Star Trek” episode, “The Trouble with Tribbles.” He is also an authority on computer software and programming, and takes a broad view of the evolution of advanced technologies. Readers may remember Gerrold from the Computer Language Magazine forum on CompuServe, where he was a frequent and prolific contributor in the 1990s.