Philosophy and science both come to a screeching halt when confronted with two essential questions:

One is: Why does existence exist? Why is there is? Why isn’t there nothingness?

The other is: What is consciousness? Why am I me? Why do I experience myself as a self?

The why-ness of existence is a question that human beings will likely never have an answer for. We would have to leave this universe to look at it from the outside, and that would only kick the question upstairs. Why is there an outside? Why does anything exist at all?

The question of consciousness (let’s call it self-awareness) may be even more puzzling, because it should have a knowable answer. But to get there, we have to apply consciousness to the question of its own existence. Why am I here? Why am I able to know that I exist?
 
Does your brain hurt yet? Mine does.

If life is possible in this vast universe, it’s inevitable somewhere. This is that somewhere. And if self-awareness is possible for living creatures, then it is also inevitable somewhere. Again, this is that somewhere.

The question becomes immediately relevant as soon as we start talking about machine intelligence. Some people believe that self-awareness in a machine is impossible. I’m not one of those people. I look at human beings and I see self-awareness in machines made of meat. Imprecise, sloppy, often irrational, self-aware machines, but still occasionally capable of genuine consciousness and sometimes even genuine thought.

So if our existence is evidence that self-awareness is possible in meat, it shouldn’t be too hard to consider the possibility of simulating it or even creating it in silicon or whatever might replace silicon in the future. The problem is that as self-aware as we are about ourselves, we still don’t know why we are self-aware. (Don’t think about this one too long. It’s a variation of Why does existence exist? only more self-centered. Why do I exist? Why not someone else? It’ll make your brain hurt even more.)

Many years ago, Marvin Minsky wrote “The Society of Mind,” one of the more interesting analyses of how thought processes are structured. In that book, he postulated that the mind is organized as a society of processes and sub-processes that are called into service as needed. Anyone who’s written code will instantly recognize the metaphor: The mind works like an event-driven computer program.

Example: When you’re learning to type, you have to think about each keystroke. But after you’ve learned, you don’t think about individual keystrokes anymore; you’ve trained yourself. In the process of training, you’ve created appropriate subroutines that translate words into finger movements. After training, these are no longer conscious actions, but servants of a higher impulse. They’re not just muscle memory; they’re subroutines of the mind.
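
If you want to see the metaphor in code, here’s a deliberately crude sketch. Every name in it is my own invention for illustration; Minsky never wrote it this way. A slow, deliberate path handles novel events, and practice installs a subroutine that handles them automatically ever after:

```python
# A toy model of the event-driven mind. The names (Mind, handle, train)
# are illustrative inventions, not Minsky's vocabulary.

class Mind:
    def __init__(self):
        self.trained = {}  # event -> automatic subroutine, installed by practice

    def handle(self, event):
        if event in self.trained:
            return self.trained[event]()   # runs below conscious attention
        return self.deliberate(event)      # the slow, conscious path

    def deliberate(self, event):
        # The novice path: every keystroke is a considered action.
        return f"consciously working out how to {event}"

    def train(self, event, subroutine):
        # Practice compiles the deliberate steps into an automatic routine.
        self.trained[event] = subroutine

mind = Mind()
print(mind.handle("type 'cat'"))                       # slow and deliberate
mind.train("type 'cat'", lambda: "c-a-t, automatically")
print(mind.handle("type 'cat'"))                       # fast, unconscious
```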

The construction of the conscious mind begins as an infant painstakingly learns the basic controls of his body (mouth, fingers, feet), then gradually learns to put them to work to create a growing repertoire of ever more sophisticated processes: holding, walking, talking. We learn reading and writing one letter at a time, one word at a time, until one day we’re reading and writing for the purpose of a different level of learning: math and history and science. Those disciplines open up an understanding of the larger cultural context. Somewhere in there, we start learning the tools of personal functionality within the social context.

If Minsky is correct (and I think he is), then self-awareness in a machine could occur the same way it does in meat: as the highest directing process that calls up various subroutines as necessary. Programmers will immediately recognize the pyramidal structure. Just as your code calls up the smaller routines from its libraries, so does the mind.
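
Here’s that pyramid in miniature, again as a sketch of my own rather than anything from Minsky: a top-level process that needs attention, built on routines that don’t:

```python
# A toy pyramid: each level calls smaller routines beneath it, the way a
# program calls functions from its libraries. Purely illustrative names.

def contract_muscle(name):
    return [f"contract {name}"]

def take_step():
    # A mid-level routine assembled from low-level ones.
    return contract_muscle("hip flexor") + contract_muscle("calf")

def walk_to(destination, steps=3):
    # The top-level process: the only level that needs attention.
    actions = []
    for _ in range(steps):
        actions += take_step()
    actions.append(f"arrive at {destination}")
    return actions

print(walk_to("the door"))
```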

What we call the mind exists at the highest levels of multiple pyramids of functions and processes and subroutines, but actual consciousness occurs whenever carefully considered choices must be made, where awareness is truly needed. But that still doesn’t answer the question of consciousness, does it?

Let’s consider this possibility: Consciousness is a function of identity. Self-awareness exists when there is a self to be aware of. I’m going to go further out on this philosophical limb and say that the self is constructed out of memories. It’s a function of the mind’s ability to store experiences and refer back to them. It’s time-binding.

The purpose of a mind is to make judgments. Is this a yipe or a goody? Will it kill me or will it feed me? The mind creates itself first as a judgment machine. It stores these judgments as memories, because they function as a personal survival guide. As the mind begins to recognize the survival value of those stored experiences, it develops the need to protect those memories. It develops a need to survive.
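
That loop is simple enough to sketch. The terms are the essay’s; the code and its names are mine, offered only as an illustration:

```python
# A toy judgment machine: classify each stimulus as a yipe (threat) or a
# goody (reward), store the verdict, and consult memory before judging
# anything twice.

memories = {}  # stimulus -> remembered judgment; the personal survival guide

def judge(stimulus, looks_dangerous):
    if stimulus in memories:               # experience comes first
        return memories[stimulus]
    verdict = "yipe" if looks_dangerous else "goody"
    memories[stimulus] = verdict           # a new memory, worth protecting
    return verdict

print(judge("snake", looks_dangerous=True))    # yipe, judged fresh
print(judge("apple", looks_dangerous=False))   # goody, judged fresh
print(judge("snake", looks_dangerous=False))   # still yipe: memory wins
```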

Those memories are the building blocks of identity. The more memories a mind collects, the more individualized that mind becomes. The more individualized it becomes, the more it becomes an identity. As an identity, it is specialized to its circumstances; it becomes unique. When that identity becomes aware of itself, it also begins to develop survival strategies. The imperative of a survival strategy is to protect the identity. (And I could write a whole book about the unintended consequences of that. War is only one of the immediately obvious consequences.)
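
The individualization part is almost trivially demonstrable, at least as a toy. Two minds built identically diverge purely through what happens to them:

```python
# Identity as accumulated memory: same construction, different histories,
# different identities. Again, only an illustration.

class Identity:
    def __init__(self):
        self.memories = []

    def experience(self, event):
        self.memories.append(event)

alpha, beta = Identity(), Identity()
for event in ("rain", "music", "a yipe"):
    alpha.experience(event)
for event in ("sunshine", "silence", "a goody"):
    beta.experience(event)

print(alpha.memories == beta.memories)   # False; circumstance made them unique
```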

As the identity becomes skilled at determining yipes and goodies in its environment, it develops reactions based on those judgments. We experience those reactions as emotions. Complex emotions occur as a logical development of identity. They are a natural part of the process of self-awareness.

And where does true sentience begin? Here’s my theory. Sentience requires not only self-awareness, but awareness of the selfness of others. Sentience requires empathy. Empathy is the foundation of a moral code, the basis of integrity, the root of wisdom. Wisdom, as it has been expressed and acknowledged in many cultures, shows up as the ability to live in harmony with the world.

Okay, yes: I’ve gone from the theoretical to the philosophical, but these are exactly the questions that must be considered when we talk about machine intelligence, consciousness, and ultimately robots that can function independently of immediate human control.

Asimov’s Three Laws of Robotics are a philosophical goal, but to put those laws into action, robots are going to require a high degree of self-awareness, judgment, and ultimately empathy. If robots are ever to become intelligent machines that we can trust, they will need a genuine recognition of selfness: their own as well as ours.

What do you think?

David Gerrold is the author of over 50 books, several hundred articles and columns, and over a dozen television episodes, including the famous “Star Trek” episode, “The Trouble with Tribbles.” He is also an authority on computer software and programming, and takes a broad view of the evolution of advanced technologies. Readers may remember Gerrold from the Computer Language Magazine forum on CompuServe, where he was a frequent and prolific contributor in the 1990s.