A few months ago, I pondered the growing gap between the way we solve problems in our day-to-day programming and the way problems are being solved by new approaches involving Big Data and statistical machine learning. The ImageNet Large Scale Visual Recognition Challenge has seen a leap in performance with the widespread adoption of convolutional neural nets, to the point where “It is clear that humans will soon only be able to outperform state-of-the-art image classification models by use of significant effort, expertise, and time,” according to Andrej Karpathy, discussing his own results on the challenge.
Is it reasonable to think or suggest that artificial neural nets are, in any way, conscious?
Integrated Information Theory (IIT), developed by the neuroscientist and psychiatrist Giulio Tononi and expanded by Adam Barrett, suggests that it is not absurd. According to this theory, the spark of consciousness is the integration of information in a system with feedback loops. (Roughly: the theory has complexities relating to systems and subsystems. And—spoiler alert—Tononi has recently cast doubt on the ability of current computer systems to create the necessary “maximally irreducible” integrated system from which consciousness springs.)
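To make “integration” a little more concrete, here is a toy sketch of my own (emphatically not the phi of IIT 3.0, which is built from cause-effect repertoires and a search for the minimum-information partition). It compares two two-node boolean machines: one in which each node copies the other, forming a feedback loop, and one in which each node simply carries its own state forward. The “integration” computed is just the whole system’s past-to-future mutual information minus what the parts, taken alone, know about their own futures; the function names are mine, chosen for the illustration.

```python
from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """Mutual information, in bits, between the two halves of each pair,
    treating every (x, y) pair in `pairs` as equally likely."""
    n = len(pairs)
    p_xy = Counter(pairs)
    p_x = Counter(x for x, _ in pairs)
    p_y = Counter(y for _, y in pairs)
    return sum(
        (c / n) * log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
        for (x, y), c in p_xy.items()
    )

def integration(update):
    """Whole-system MI(past; future) minus the sum of each node's own
    MI(past; future), with past states distributed uniformly."""
    pasts = list(product((0, 1), repeat=2))
    futures = [update(state) for state in pasts]
    whole = mutual_information(list(zip(pasts, futures)))
    parts = sum(
        mutual_information([(p[i], f[i]) for p, f in zip(pasts, futures)])
        for i in range(2)
    )
    return whole - parts

# A feedback loop: each node copies the *other* node's previous state.
swap = lambda s: (s[1], s[0])
# No interaction: each node just carries its own previous state forward.
independent = lambda s: (s[0], s[1])

print(integration(swap))         # 2.0 bits -- information only the whole carries
print(integration(independent))  # 0.0 bits -- reducible to independent parts
```

The feedback-loop machine scores two bits because neither node, considered by itself, predicts its own next state; the independent machine scores zero because it decomposes perfectly into parts, which is exactly the kind of reducibility that, on IIT’s account, precludes consciousness.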
It cannot be overstated how slippery consciousness is and how misleading our intuitive sense of the world is. We all know that our eyes are constantly darting from one thing to another, have a blind spot the size of the full moon, and transmit largely black-and-white data except for the 55 degrees or so in the center of our field of vision. And yet our vision seems stable, continuous, and fully imbued with qualities of color and shape. I remember my wonder when, as a child, I realized that I saw cartoons the same way whether I was sitting up or, more likely, lying down on the couch. How could that be?
Some people deny that subjective experience is a problem at all. Daniel Dennett is the clearest advocate of this view. His book “Consciousness Explained” hammers on the slipperiness of “qualia” (the “redness” of 650-nanometer electromagnetic waves, the “warmth” of 40 degrees Celsius). Dennett’s arguments are extremely compelling, but have never sufficed to convince me that there is not what David Chalmers labeled “the Hard Problem”—the problem of experience.
IIT is, at the very least, a bold attempt at a framework for consciousness that is quantitative and verifiable. A surprising (but welcome) aspect is that it allows that almost trivially simple machines could have a minimal consciousness. Such machines would not have the same qualia as us—their internal stories would not be the same as our stories, writ in a smaller font. Rather, a minimally conscious machine, whether attached to a photodiode or a thermistor, would experience a single quale of one bit of information: the difference between “is” and “not is.” I think of this as the experience of “!” (a “bang” of experience, if you will).
Such an experience is as far removed from human consciousness as the spark made from banging two rocks together is from the roiling photosphere of the sun. Your cerebral cortex has 20 billion or so deeply interconnected neurons, and on top of whatever inherent experience that enables, we’ve layered the profoundly complicating matter of language.
Tononi’s book “Phi: A Voyage from the Brain to the Soul” presents his theory in chapters featuring metaphorical dialogues between Galileo and various scientists and philosophers. In this way, it echoes portions of Douglas Hofstadter’s classic, “Gödel, Escher, Bach,” which also has positive feedback and recursion as essential ingredients of consciousness and intelligence (a theme elaborated by Hofstadter in his equally important, but less popular, “I Am a Strange Loop”).
Another approachable book on IIT, and one I actually prefer to Tononi’s, is Christof Koch’s “Consciousness: Confessions of a Romantic Reductionist.”
“Phi” is Tononi’s proposed measure for integrated information, but he does not define it rigorously in his book. His paper “From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0” is much more substantial. His “Consciousness: Here, There but Not Everywhere” paper is in between. It is in this paper that he explicitly casts doubt on whether a simulation could have the necessary “real causal power” of a neuron to create a “maximally irreducible” integrated system, “for the simple reason that the brain is real, but a simulation of brain is virtual.” It’s not clear to me why there should be a difference; simulating phi seems merely to shift the abstraction layer, without changing the arguments for or against its phenomenal reality.
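For the curious, the flavor of measure my toy sketch above computes can be written in one line. To be clear, this is my own rendering of the older “whole minus the sum of its parts” style of approximation (in the spirit of Adam Barrett’s practical measures), not the IIT 3.0 definition, which replaces mutual information with distances between cause-effect repertoires:

```latex
% A simplified integration measure for a system X cut by a partition P into
% parts M (an illustrative approximation, not Tononi's IIT 3.0 phi):
\Phi_{\text{simple}}(X; P) \;=\; I\!\left(X_t ; X_{t+1}\right) \;-\; \sum_{M \in P} I\!\left(M_t ; M_{t+1}\right)
```

The practical proposals then minimize a normalized version of this quantity over partitions, so that phi reflects the cut across which the system is least reducible.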
If IIT is true, consciousness has causal power (that is, it affects the future calculations of its substrate, whether that be a brain or a machine), and our everyday programming models, built for the most part from cleanly decomposable modules with few of the pervasive feedback loops the theory requires, are fundamentally incapable of bootstrapping consciousness.
More tentatively, it may be that today’s machine architectures are incapable. If so, the recent leaps in machine learning and artificial intelligence may not continue for much longer. Perhaps another AI Winter is coming.