The first episode of “Star Trek” aired at 8:30 p.m. on Thursday, Sept. 8, 1966. It was “The Man Trap,” written by George Clayton Johnson and directed by Marc Daniels. Early in the episode, it was established that the First Officer of the starship Enterprise was half-human, half-Vulcan, and did not have human emotions.
My immediate (emotional?) reaction to Spock was that he was likely to be a very dull character. Without emotions, he might as well be a machine. He would not be interesting to the audience.
Well, what the hell did I know?
But the producers and writers in those first days of the series quickly realized that Spock did not live in an emotional vacuum. It was impossible for him not to have an emotional core to his being. Vulcan psychology had to be as different as Vulcan biology.
Unlike humans, Vulcans do not react emotionally. Vulcans are aware of their emotions as a reactive process, but they are not overwhelmed by them. Emotions do not direct their behavior. So a Vulcan’s choices are always carefully considered.
As the show developed, Spock became a much more interesting character. In fact, Spock became the single most compelling character in the entire series as the writers explored the tension between emotion and logic. Half-human, half-Vulcan, Spock represented the sentient integration of the visceral and the intellectual.
Emotions are a visceral reaction to a circumstance—literally. The nervous system evolved simultaneously with the gastrointestinal system. This is why we feel fear as a coldness in the pit of the stomach. It’s why we feel affection as a hot flush in our skin. It’s why we feel rage as a fire in the chest. Our emotions evolved as a systemic response to stimuli.
At the most primitive level, there are only two possible stimuli: Yipe and Goody.
In the beginning, goodies included mates and bananas, not too much else. On the other hand, there was no end to the various yipes in the forest. In addition to lions and tigers and bears (oh my!), there were also leopards, panthers, gorillas, baboons, wolves, coyotes, jackals, hyenas, snakes, scorpions, wasps, bees, fire ants, crocodiles, sharks, hippopotami, and various other things that go bump in the night. Not to mention volcanoes, lightning, fire, earthquakes, tsunamis, tornadoes, droughts, floods—all the larger forces of nature.
In response, we have evolved the ability to judge. Is this fruit ripe or rotten? Is that strange animal diner or dinner? Individuals incapable of making accurate judgments did not survive.
By the time we became human, we were developing a fairly sophisticated repertoire of responses: “This thing might be a yipe” and “This thing is a yipe” and “This thing might be a goody” and “This thing is a goody.” We experience these states as fear, anger, interest and enthusiasm. Other emotional states can also be seen as a relationship to yipe and goody. Sadness, for instance, is “That thing was a yipe. I am hurt, but I am still surviving. Somehow.”
What does any of this have to do with software development? Well, one of the largest segments of the software industry is game development. And one of the most difficult challenges in a game is creating actors who behave realistically. If programmers could effectively model human emotions, then actors in games could respond to a wide variety of situations on their own, without programmers having to predefine all the possibilities.
For instance, an actor finds itself in a situation. It assesses the circumstances. Does this set of conditions represent a yipe or a goody? If it’s a yipe, should I fight or flee? How big a yipe is it? Have I seen a yipe like this before? And if I’ve never seen this thing before, if it’s totally unknown and I have no idea whether it’s a yipe or a goody, how should I deal with it?
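Here is one way such an assessment might be sketched in code. Everything in it—the field names, the thresholds, the fight-or-flee rule—is a placeholder assumption of mine, not a proven model; the point is only that the questions above can be asked mechanically.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    threat: float       # estimated danger, 0.0 (harmless) .. 1.0 (lethal)
    benefit: float      # estimated value, 0.0 (worthless) .. 1.0 (vital)
    familiarity: float  # 0.0 (never seen anything like it) .. 1.0 (well known)

def appraise(s: Stimulus) -> str:
    """Map a stimulus onto the primitive states.
    (In the essay's terms: might_be_yipe ~ fear, is_yipe ~ anger,
    might_be_goody ~ interest, is_goody ~ enthusiasm.)"""
    if s.familiarity < 0.2:
        return "unknown"  # no known referents at all
    if s.threat >= s.benefit:
        return "is_yipe" if s.familiarity > 0.6 else "might_be_yipe"
    return "is_goody" if s.familiarity > 0.6 else "might_be_goody"

def respond(s: Stimulus, own_strength: float) -> str:
    """Turn the appraisal into a gross behavior."""
    verdict = appraise(s)
    if verdict == "unknown":
        return "investigate cautiously"
    if verdict.endswith("yipe"):
        # Fight only if we judge we can win; otherwise flee.
        return "fight" if own_strength > s.threat else "flee"
    return "approach"

print(respond(Stimulus(threat=0.9, benefit=0.1, familiarity=0.8), own_strength=0.4))  # -> flee
```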
As complex as these questions are, this behavior can be modeled. (No, I’m not saying it will be easy.) Consider the possible emotional responses for a character as a vertical table. From the bottom up, the states would be resignation, sadness, fear, anger, curiosity and enthusiasm. Horizontally, we would look for the amount of energy or movement expressed in that emotive state. Sadness would start at weeping and expand to anguish or hysteria. Fear would go all the way from bashfulness to panic. Anger would start at annoyance and build all the way up to rage. And so on.
So an actor comes to a circumstance, gauges the possibilities of survival versus benefit, goes to the table for the logically appropriate reaction, and then reacts emotionally according to the circumstance.
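Rendered as a data structure, that table might look something like the sketch below. Only the expressions actually named above (weeping, anguish, hysteria, bashfulness, panic, annoyance, rage) come from the text; the other entries and the band boundaries are placeholders of mine.

```python
# Vertical axis: emotional states, from the bottom up.
# Horizontal axis: intensity bands within each state, low to high energy.
EMOTION_TABLE = {
    "resignation": ["listlessness", "withdrawal", "collapse"],
    "sadness":     ["weeping", "anguish", "hysteria"],
    "fear":        ["bashfulness", "alarm", "panic"],
    "anger":       ["annoyance", "fury", "rage"],
    "curiosity":   ["interest", "fascination", "obsession"],
    "enthusiasm":  ["cheerfulness", "excitement", "exuberance"],
}

def expression(state: str, intensity: float) -> str:
    """Pick the expression for a state at a given intensity in [0.0, 1.0]."""
    bands = EMOTION_TABLE[state]
    index = min(int(intensity * len(bands)), len(bands) - 1)
    return bands[index]

print(expression("anger", 0.1))   # -> annoyance
print(expression("anger", 0.95))  # -> rage
```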
As it stands now, a gamer enters a level of the dungeon and everything mindlessly attacks (unless, like the leprechaun in Nethack, it has been programmed to run away). What would a game be like if, as the gamer attains higher levels of strength and ability, lesser actors ran in fear, leaving their treasures behind? Or, alternatively, asked to join the gamer’s party?
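A sketch of what that reaction lookup might look like for a dungeon actor—the level thresholds and behaviors here are made up for illustration, not tuned game design:

```python
def npc_response(npc_level: int, player_level: int) -> str:
    """Pick a reaction for a dungeon actor meeting the player."""
    if player_level >= 2 * npc_level:
        return "flee in fear, leaving treasure behind"  # hopelessly outmatched
    if player_level > npc_level:
        return "ask to join the player's party"         # outmatched, but useful alive
    return "attack"                                     # today's mindless default

for npc in (2, 5, 12):
    print(npc, "->", npc_response(npc, player_level=10))
# 2 -> flee in fear, leaving treasure behind
# 5 -> flee in fear, leaving treasure behind
# 12 -> attack
```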
But this isn’t just about gaming. Researchers in machine intelligence must also consider the logical necessity of emotions. Imagine a robot encountering an unfamiliar situation, one in which it has no known referents. You would want that robot to have responses ranging from caution to curiosity. While you might not want that robot to be capable of rage, certainly you would want it to be capable of an informed retreat—fear.
Yipe and goody are about survival, and a machine intelligence will need to maintain its own survival too. The immediate reaction to a situation, measured against one’s survival strategies, is the emotional one: it is the mechanically logical response of those strategies. And while sometimes that stimulus-reaction response is appropriate, in our modern technological world, sometimes it is not.
Acknowledging the reaction before acting on it would give a machine intelligence a logical gauge of its specific position in a situation—before it makes a choice of action.
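One way to express that two-step discipline—again, a placeholder sketch of my own, not anything canonical—is to compute and record the reaction first, then choose the action in a separate step that can consult the reaction without being driven by it.

```python
from dataclasses import dataclass

@dataclass
class Reaction:
    emotion: str      # e.g. "curiosity", "caution", "fear"
    intensity: float  # 0.0 .. 1.0

def immediate_reaction(familiarity: float, threat_estimate: float) -> Reaction:
    """The survival-driven appraisal: fast, automatic, and possibly wrong."""
    if familiarity < 0.3:
        emotion = "curiosity" if threat_estimate < 0.5 else "caution"
        return Reaction(emotion, 1.0 - familiarity)
    emotion = "fear" if threat_estimate > 0.7 else "confidence"
    return Reaction(emotion, threat_estimate)

def chosen_action(reaction: Reaction) -> str:
    """The considered step: acknowledge the reaction, then decide."""
    if reaction.emotion == "fear" and reaction.intensity > 0.8:
        return "informed retreat"        # fear honored, not obeyed blindly
    if reaction.emotion == "caution":
        return "observe from a distance"
    if reaction.emotion == "curiosity":
        return "investigate slowly"
    return "proceed"

r = immediate_reaction(familiarity=0.1, threat_estimate=0.2)
print(f"{r.emotion} ({r.intensity:.1f}) -> {chosen_action(r)}")
# curiosity (0.9) -> investigate slowly
```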
And this brings me back to Spock. Human emotions are reactive. Most of us act from within our emotions, using them to motivate and even justify our actions. Spock (as I understand the character) feels his emotions, is aware of his emotions, but is detached from them and rarely demonstrates them visibly. Instead, he sets his emotions aside and chooses behaviors that are logically appropriate to the situation.
When we finally do create a true intelligence engine, one that can demonstrate behaviors that show up as sentience, it is possible—maybe even likely—that it will have a Spock-like intelligence. We can only hope. The alternative might be something that is, by our definition, insane and uncontrollable.
What do you think?
David Gerrold is the author of over 50 books, several hundred articles and columns, and over a dozen television episodes, including the famous “Star Trek” episode, “The Trouble with Tribbles.” He is also an authority on computer software and programming, and takes a broad view of the evolution of advanced technologies. Readers may remember Gerrold from the Computer Language Magazine forum on CompuServe, where he was a frequent and prolific contributor in the 1990s.