Alan Zeichick’s recent column reflecting on Artificial Intelligence and his time as editor of AI Expert magazine brought back warm memories for me. I first met Alan at a conference on neural networks in San Diego, and we hit it off immediately, talking all things machine learning as we searched for the best steakhouse in the Gaslamp Quarter.

In the first five decades of computing, the idea of an “electronic brain” was not only considered inevitable, but the raison d’être of computers. Vision, speech, ruthless logic… these things were considered foregone conclusions. We were left only to speculate about whether computers could actually have emotions (“My mind is going, Dave. I can feel it…”) or could only simulate them (“An empathy test? Capillary dilation of the so-called blush response?”).

Today, it turns out that computers are very handy for facilitating communication between humans. What percentage of time spent in front of a screen is e-mail, Web surfing or updating Facebook? Oh, and video games. This is all well and good, but not what was expected (to be fair, there was that Star Trek episode where the ship’s log was doctored to implicate Kirk in the murder of some dude. It’s almost like Climategate).

But as far as understanding, much less anticipating, what we want from them, computers stink. Voice recognition has become marginally successful, but the algorithms used are based on statistics, not interpretation.
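
To make that concrete, here is a toy sketch in Python (the word-pair counts are invented, and no real recognizer is anywhere near this crude): a “recognizer” that picks among candidate transcriptions purely by how often their word pairs show up in some corpus. Nothing in the loop knows what any of the words mean.

```python
# A toy "recognizer" that scores candidate transcriptions by corpus
# statistics alone. The bigram counts below are invented for illustration.

# Hypothetical counts of adjacent word pairs harvested from some corpus.
bigram_counts = {
    ("recognize", "speech"): 1200,
    ("wreck", "a"): 15,
    ("a", "nice"): 300,
    ("nice", "beach"): 40,
}

def sequence_score(words):
    """Sum the counts of each adjacent word pair: pure frequency
    arithmetic, with no notion of what the words mean."""
    return sum(bigram_counts.get(pair, 0) for pair in zip(words, words[1:]))

# Two acoustically similar candidate transcriptions.
candidates = [
    ["recognize", "speech"],
    ["wreck", "a", "nice", "beach"],
]

# The "winner" is simply whichever word sequence was seen more often.
best = max(candidates, key=sequence_score)
print(" ".join(best))   # prints: recognize speech
```

Real systems use far bigger corpora and far cleverer statistics, but the flavor is the same: counting, not comprehension.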

One of the characteristics of intelligence that seems essential is that there be an inflection point where the growth of intelligence becomes self-sustaining. At some point, an intelligent being takes over its own education and begins generating its own conclusions, including intermediate conclusions, discovering holes in its knowledge, seeking out answers, and so forth. In classic sci-fi, this was sometimes portrayed as a razor-sharp inflection point: “Is there a God?” “There is now.”

Other times, it was a slower, more mysterious change that unfolds over time, as it does with our only natural example: the growth of children. However it plays out, such an inflection is needed to produce something worthy of the name intelligence (whether a human-like intelligence is the only type of intelligence “worthy of the name” is another issue).

The need for an inflection point is not a new concept, but it turns out to be a sticking point. After approaches based on neural nets were mistakenly dismissed in the late 1960s, researchers primarily pursued a “top-down” approach to AI: the question of how sensory data is transformed into mental models was set aside in favor of how those models could be manipulated and described by machines.

This work was impressive, but the amount of problem-specific scaffolding was so large that it could be difficult to tell what was science and what was just a slick demo. Projecting a personality onto what we now call a chatbot says more about the user than it does about the algorithms involved, and while a computer stacking blocks recalls child’s play, it isn’t imagining castles while it does so.

In the mid-1980s, interest in “bottom-up” AI exploded after it was shown that multilayer neural networks, trained by backpropagation, could overcome the limitations of single-layer perceptrons described in the 1960s. Biologically inspired programming (including neural nets, genetic algorithms and swarm algorithms) showed that large numbers of simple processes could perform surprisingly complex calculations. The mantra of “emergence” meant that Real Soon Now these processes would start creating generalizations and reach the inflection point.
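
For the curious, here is a minimal sketch of the sort of network that rekindled the field, assuming only Python and NumPy (an illustration of the technique, not anyone’s historical code): a tiny two-layer network trained by backpropagation to learn XOR, the function a single-layer perceptron famously cannot represent.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR input patterns and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random weights: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

lr = 1.0
for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))   # should approach [[0.], [1.], [1.], [0.]]
```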

Yet both approaches to AI founder on what David Chalmers termed “the hard problem of consciousness.” How do subjective phenomena arise? No one thinks that even a large network of connected thermometers “feels” hot, so how is it that pumping temperature values into a bunch of sigmoid transfer functions or expert systems can create a feeling?

Daniel Dennett has masterfully argued that consciousness is a great deal more slippery than we generally intuit, but has he successfully cut its Gordian Knot? Slippery doesn’t necessarily mean illusory (but, of course, intuition is an unreliable oracle when it comes to the physical world).

The two most ambitious AI projects that I know of are Douglas Lenat’s Cyc project and Jeff Hawkins’ theory of Hierarchical Temporal Memory (HTM). The Cyc project is essentially a grand knowledge base, an attempt to build from the ground up what the top-down people thought they could achieve. Two decades ago, Lenat claimed that the inflection point would come in less than five years; that horizon has been retreating ever since.
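
To give a flavor of the knowledge-base style (only a flavor: the facts, rules and representation below are invented for illustration and are nothing like Cyc’s actual CycL language or inference engine), here is a toy forward-chaining engine in Python:

```python
# Hand-entered facts as (predicate, subject, object) triples.
facts = {
    ("isa", "Socrates", "Human"),
    ("subclass", "Human", "Mammal"),
    ("subclass", "Mammal", "Animal"),
}

def forward_chain(facts):
    """Repeatedly apply two hand-coded rules until no new facts appear:
    (1) isa(x, C) and subclass(C, D)       =>  isa(x, D)
    (2) subclass(C, D) and subclass(D, E)  =>  subclass(C, E)."""
    changed = True
    while changed:
        changed = False
        new = set()
        for (p1, a, b) in facts:
            for (p2, c, d) in facts:
                if p1 == "isa" and p2 == "subclass" and b == c:
                    new.add(("isa", a, d))
                if p1 == "subclass" and p2 == "subclass" and b == c:
                    new.add(("subclass", a, d))
        if not new <= facts:      # anything genuinely new?
            facts |= new
            changed = True
    return facts

derived = forward_chain(set(facts))
print(("isa", "Socrates", "Animal") in derived)   # True
```

Every fact and every rule is entered by hand, which gives some sense of the sheer scale of what Cyc is attempting.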

Hawkins’ 2004 book “On Intelligence” lays out a very concrete and testable model for neural networks based on the structure of the neocortex. In the book, he predicts that “Within 10 years [from 2004], I hope, intelligent machines will be one of the hottest areas of technology and science.” His company, Numenta, has released a vision toolkit and a research framework, but there’s no sign of an inflection-point breakthrough.

However, in an article that contains so many sci-fi references and so much pessimism about what’s possible, I have to end with a paraphrase of Arthur C. Clarke’s First Law: “When a distinguished but elderly [columnist]…states that something is impossible, he is very probably wrong.”

Larry O’Brien is a technology consultant, analyst and writer. Read his blog at www.knowing.net.