Recently, I got sucked into a discussion of sentience and whether or not such a condition could be achieved in silicon. I said I wasn’t sure it had been achieved yet in meat. But I do remain optimistic that it’s possible.

The problem with creating sentience in silicon—or meat—is that first we need to understand the goal. We’re talking about rational self-awareness. Calling it “artificial intelligence” is woefully inaccurate. We have plenty of evidence that machines can be programmed to behave intelligently, but that is not the same as self-awareness, and outside the specific domain it has been programmed for, a machine is simply dumb.

I dislike the term “artificial intelligence.” It’s imprecise. It presumes that there is such a thing as “genuine intelligence.” Cynical observers of human behavior have justifiable reason to doubt this. Indeed, the single most compelling piece of evidence that there is intelligent life elsewhere in the universe is that it has never contacted anyone on this planet.

The problem with discussing “sentience” is the same problem that occurs with discussing “intelligence.” As soon as a person understands the concept, he also assumes it applies to himself. But the evidence of online participation—Facebook, website comment threads, various discussion forums—is that half of the human race is of below-average intelligence. The half that is above average may very well be too smart to participate in Facebook, website comment threads, or various discussion forums.

I’m fussy about language. It’s the only tool we have. In practice, this shows up as a persnickety obsession with precision. Mark Twain once said, “The difference between the right word and the almost-right word is the difference between lightning and a lightning bug.”

But more than that, I think language processing is where intelligence, and ultimately sentience, occurs. Certainly it is how both of these conditions are expressed. Language exists as a set of symbols that can be manipulated, much like Lego bricks, for the construction of astonishingly complex domains.

Even more perniciously, we can move so enthusiastically into those linguistic domains that we forget we created them ourselves. We become captured by our language, trapped into perceiving the world a certain way because of the words we use to define and explain it. We assign explanations and then believe in the explanations. Muslims are terrorists. Gays are flamboyant. Blacks are dangerous. Women are manipulative. Asians are smart. Liberals are socialists. Rich people are greedy. We lock ourselves into our own prejudices. Instead of us owning our language, it owns us.
One of my teachers once taught me a great way to break out of the trap. Examine the distinctions to find out what you’re really saying. He used the word “generosity” as an example. “What’s the opposite of generosity? Mean, stingy, withholding. The opposite of that is open, giving and accessible. So there’s your access to what you’re distinguishing as generosity.”

Apply the same approach to “intelligence”: the opposite of intelligence is “stupidity.” We perceive stupidity as a condition that includes ignorance, flawed logic, unprovable assertions, inaccurate judgments, inappropriate choices, and decisions made without regard to consequences. In brief, we see stupidity as a set of behaviors that fail to produce results. This can include dogmatic stubbornness, an emotional investment in beliefs that have no applicable referents in the physical universe, and an overall breakdown in the ability to understand what’s really going on.

Looking at the opposite, we can now begin to understand intelligence. Intelligent behavior produces results because it is based on a holistic comprehension of interactions and consequences. Basic intelligence begins with the recognition of patterns. Analyzing and understanding patterns, then applying that knowledge in predictive ways to produce specific results, represents a pragmatic relationship with circumstances. The more accurate a person’s understanding, the more we see that person’s behavior as intelligent.
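
For the programmers in the audience, here’s a toy sketch of that loop at its most basic (my own illustration, nothing more): tally which symbol tends to follow which, then apply the tallies predictively.

```python
from collections import Counter, defaultdict

def learn_transitions(sequence):
    """Tally how often each symbol follows each other symbol."""
    counts = defaultdict(Counter)
    for current, following in zip(sequence, sequence[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, symbol):
    """Predict the most frequently observed successor, if this symbol is known."""
    if symbol not in counts:
        return None  # no pattern learned for this circumstance
    return counts[symbol].most_common(1)[0][0]

observed = list("abcabcabcab")
model = learn_transitions(observed)
print(predict_next(model, "c"))  # prints 'a': the pattern, applied predictively
```

Trivial as it is, that’s the shape of the thing: recognize a pattern, apply it, check the result.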

So it’s fair to say that intelligence is expressed as accuracy in the moment. Intelligence searches for accurate information and makes choices based on understanding the consequences of those choices. Examining assertions to see which ones are facts, measuring, testing, verifying, validating, checking to see that the tests are repeatable—that’s intelligence in action, the search for accuracy so that accurate choices can be made. This is why peer-reviewed studies are valuable: They validate the evidence.

Coming back to the term “artificial intelligence,” it’s based on a presumption that we understand what intelligence is. I remain unconvinced. I prefer to use “intelligence engine” because I feel it’s a more accurate description of what we’re working toward in our machinery. We are seeing the rise of the intelligence engine everywhere in our lives. One of the most exciting examples is Google’s self-driving car. One of the most annoying is the damn speech-recognition menu that has replaced genuine customer service at companies that think “make it cheaper” is a viable alternative to “make it better.”

But the intelligence engines that are even now entering common usage are only as intelligent as we build them. They are not yet intelligent in the way we understand true intelligence, because they’re not capable of independent thought. They’re nowhere near what we would call sentient.

Even at their very best, they are still just a complex set of algorithms making calculated choices from predetermined menus designed to address specifically recognizable circumstances. They can only recognize the patterns that they’ve been programmed to recognize. What they demonstrate is machine intelligence—call it manufactured instinct. A machine can make informed decisions, but only when it has been previously informed.
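
To make “predetermined menus” concrete, here’s a hypothetical toy responder (my own illustration, not any real product’s code). Every circumstance it can handle was enumerated at design time; everything else falls through to a dumb default.

```python
# A predetermined menu: every circumstance the machine can "recognize,"
# and the choice it makes for each one, was decided at design time.
MENU = {
    "billing": "Transferring you to the billing department.",
    "support": "Transferring you to technical support.",
    "hours": "We are open 9 to 5, Monday through Friday.",
}

def respond(utterance):
    """Return a canned response if a known keyword appears; otherwise fail dumbly."""
    for keyword, response in MENU.items():
        if keyword in utterance.lower():
            return response
    # Outside its programmed domain, the machine has nothing to offer.
    return "I'm sorry, I didn't understand that. Goodbye."

print(respond("I have a question about my billing statement"))
print(respond("My tribbles keep multiplying"))  # unanticipated -> dumb default
```

The second request fails not because it’s hard, but because nobody anticipated it.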
True intelligence—sentience—has to be self-informing. It has to have the ability to learn. It needs to be able to recognize and adjust to circumstances that weren’t hardwired in. It must be able to model new patterns of information as it encounters them and consider the consequences of various actions. So that’s the next piece of the puzzle: the ability to create appropriate responses to unfamiliar situations.
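
And here’s the same toy responder, crudely extended so it can be informed at runtime instead of only at design time. This is a sketch of the idea, nothing more; it memorizes new entries rather than genuinely learning.

```python
class SelfInformingResponder:
    """A toy responder that can acquire new keyword -> response patterns at runtime."""

    def __init__(self, initial_menu):
        self.menu = dict(initial_menu)

    def respond(self, utterance):
        for keyword, response in self.menu.items():
            if keyword in utterance.lower():
                return response
        return None  # an unfamiliar situation

    def learn(self, keyword, response):
        """Adjust to a circumstance that was never hardwired in."""
        self.menu[keyword.lower()] = response

bot = SelfInformingResponder({"billing": "Transferring you to billing."})
print(bot.respond("tell me about refunds"))  # None: unfamiliar at first
bot.learn("refunds", "Refunds take five to seven business days.")
print(bot.respond("tell me about refunds"))  # now handled
```

Genuine learning would have to generalize to circumstances nobody enumerated. That’s the gap between manufactured instinct and sentience.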

When we see dolphins creating new hunting patterns to catch fish in shallow waters, chimps using sticks as tools to pry termites out of nests, squirrels running obstacle courses to get to the bird feeder, or ants finding new routes to an unrinsed soda can, we are seeing how intelligence functions in practice: “I’m hungry, here’s food. I’ll try this and see if it gets me food.”

Indeed, this may be the evolutionary source of intelligence. Dr. Jack Cohen, a British biologist who informs the work of many science fiction writers, including Terry Pratchett and Anne McCaffrey (and myself), says that “Intelligence arises first in predators, because how smart do you have to be to sneak up on a blade of grass?” As predators become smarter, prey animals must develop stronger defenses, thus an evolutionary arms race is born. As intelligence develops, it becomes an evolutionary advantage.

The roots of language can be found in all the different calls that penguins and whales and cats and apes use to communicate with each other. At some point in the evolutionary history of our species, language and its resultant symbology crossed a critical threshold, giving the species the ability to store and communicate experience. The result was that functioning intelligence hit a critical mass and our species became something else entirely: cognitive.

Some anthropologists believe that the tipping point occurred when primitive humans began burying their dead with weapons and pottery and trinkets. A burial suggests a ritual, and a ritual suggests a recognition of the personhood of the deceased. This certainly fits in with current theories that true intelligence—sentience—requires self-awareness. Not just awareness of one’s own self, but also awareness of the self-ness of others.

We see it in elephants and dolphins and dogs. We call it empathy. It’s a recognition of others’ ability to hurt. (Autism has been described as a kind of emotional blindness to the self-ness of others. While some autistic people have enormous cognitive abilities—the rare few we call “savants”—their ability to function can be impaired by difficulty empathizing with the feelings of others.)

Self-awareness as a philosophical construct is often summed up as cogito ergo sum: “I think, therefore I am.” More recent philosophers are unwilling to put Descartes before the horse. Many feel it’s more appropriate to say, “I think I think, therefore I think I am. I think.” That is, we have thoughts, but are we really thinking or are we just having thoughts? (Yes, I know. That one makes my brain hurt too.)

A key component of self-awareness is self-image. As a sentient being becomes aware of himself as an actor within his circumstances, he also creates an idealized image, a model of how a good person should behave. And this may be the heart of all neuroses. The human being worries, “Am I a good person? Did I do right?” Dogs, cows, elephants and sea otters do not.

The point is that as we open the Pandora’s box of synthesizing intelligence, we are going to be dealing with some very interesting artifacts: unintended and unforeseen consequences, not the least of which is that as awareness grows, so does identity, and with identity comes the need to survive, and that triggers the development of emotions and all the attendant mishegoss that comes with them.

Marvin the Paranoid Android might be inevitable.

David Gerrold is the author of over 50 books, several hundred articles and columns, and over a dozen television episodes, including the famous “Star Trek” episode, “The Trouble with Tribbles.” He is also an authority on computer software and programming, and takes a broad view of the evolution of advanced technologies. Readers may remember Gerrold from the Computer Language Magazine forum on CompuServe, where he was a frequent and prolific contributor in the 1990s.