Someday, your technology may be able to help you in ways you can only imagine, or at least developers and researchers hope so. They envision a world in which technology learns from its user and takes action accordingly. The goal grows out of the idea of brain-inspired computing: software that runs on computers but functions like the brain rather than like a conventional computer.
“Today’s computers are wonderful at bookkeeping and solving scientific problems often described by partial differential equations, but they’re horrible at just using common sense, seeing new patterns, dealing with ambiguity and making smart decisions,” said John Wagner, cognitive sciences manager at Sandia National Laboratories.
Evolving technology to mimic the brain
As one can imagine, developing a brain-inspired computer or device is no small feat, and researchers have been chipping away at it since before the 1950s. In 1943, neuroscientist Warren S. McCulloch and logician Walter Pitts demonstrated how simple devices based on nerve cells could perform logical operations in the paper “A Logical Calculus of the Ideas Immanent in Nervous Activity.”
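McCulloch and Pitts’ units are simple enough to capture in a few lines of code. Here is a minimal Python sketch of a threshold unit in their spirit (the negative-weight NOT is a common textbook simplification of their original inhibition rule, not their exact formulation):

```python
# A McCulloch-Pitts-style unit: binary inputs, and the unit "fires" (outputs 1)
# when the weighted sum of its inputs reaches a fixed threshold.
def threshold_unit(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# Logical AND: both inputs must be active to reach the threshold.
AND = lambda a, b: threshold_unit([a, b], [1, 1], threshold=2)
# Logical OR: a single active input is enough.
OR = lambda a, b: threshold_unit([a, b], [1, 1], threshold=1)
# Logical NOT: a negative (inhibitory) weight flips the input.
NOT = lambda a: threshold_unit([a], [-1], threshold=0)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), " NOT 1:", NOT(1))
```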
In the 1960s, much of the work revolved around more complex devices that mimicked nerve cells, and by the 1980s computers were powerful enough to support vision, data classification and artificial intelligence, according to Anders Sandberg, James Martin research fellow at the Future of Humanity Institute at Oxford University.
While progress in brain-inspired computing has been slow, the past couple of years have seen it become more prominent. The fundamental training algorithms and neural network architectures are very similar to where the field stood in the 1980s, according to Trishul Chilimbi, principal researcher at Microsoft Research. But changes in the industry and technological breakthroughs have contributed to today’s prominence.
Moore’s Law: A term coined around 1970, Moore’s Law states that the overall processing power of computers doubles roughly every two years (a back-of-the-envelope illustration follows this list). “With the advance of Moore’s law, we have much more computational power at our fingertips for training these systems,” said Chilimbi.
The Internet: “With the Internet and interconnectivity of devices and sensors, we have a lot more data for training these systems,” said Chilimbi. “We have computation to train much larger neural networks, and we have sufficient data to train these large networks, so I think that is a big thing that’s made a difference and why we are seeing a lot of progress right now.”
Big Data: “Big Data technologies have provided both the performance increases and cost reductions necessary to process data, content, video, audio, and images in a way that was just not possible 50+ years ago when AI started,” said Michele Goetz, Forrester analyst. “The amount and type of information wasn’t available in a digital format that could be a corpus for knowledge until now.”
Skills: “Now that systems, vastness of data, and digital are here, the demand for better insight and improved engagement and outcomes is creating an environment for not only new technology capabilities to do AI, but we are building a workforce that will be able to harness it,” said Goetz.
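To give a rough sense of the compounding Chilimbi is pointing to, here is a back-of-the-envelope Python sketch; the two-year doubling period is the popular rule of thumb, not a measured constant:

```python
# Moore's Law as a rule of thumb: processing power doubles every two years.
def moore_growth_factor(years, doubling_period_years=2):
    return 2 ** (years / doubling_period_years)

# Rough growth factors since the neural-network era of the 1980s.
for years in (10, 20, 30):
    print(f"{years} years -> ~{moore_growth_factor(years):,.0f}x the compute")
```

By that rule of thumb, the three decades since the 1980s correspond to tens of thousands of times more raw compute available for training neural networks.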
How is it possible?
Technology is beginning to discover the underlying meaning of interactions, perceptions and environments, and to adapt automatically to our behaviors. But how?
“Brain-inspired computing or understanding the brain or reverse-engineering the brain is arguably one of the greatest outstanding scientific challenges facing mankind,” said Sandia Labs’ Wagner.
Wagner believes there are three ways of doing it:
1. Conventional software on conventional hardware. “There are a lot of groups doing machine learning and extensions of it,” he said. “Companies are building neuron-inspired cores or chips that will be embedded in your next-generation portable device, cell phones and tablets.”
2. Enhanced hardware that does machine learning better, but still requires software running on servers and conventional machines. “In the end you need a piece of hardware to do that function, and depending on how the things running on top of the hardware are structured, you are going to need more power and more space, or ultimately not that much power or not that much space,” said Wagner.
3. A novel architecture for representing machine learning, and for processing and storing information: neural networks. Neural networks are computational models inspired by the brain’s central nervous system, according to Wagner. (A minimal sketch follows this list.)
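Neither Wagner nor the projects discussed here have published the code behind their work, but the basic mechanism of a neural network is easy to illustrate. The Python/NumPy sketch below trains a tiny two-layer network by gradient descent on XOR, the classic pattern that a single layer cannot separate:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: output 1 only when exactly one input is active.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20_000):
    h = sigmoid(X @ W1 + b1)     # hidden-layer activations
    out = sigmoid(h @ W2 + b2)   # network prediction
    # Backpropagate the prediction error and nudge every weight downhill.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # typically converges to [[0], [1], [1], [0]]
```

The hidden layer is what buys the extra expressive power; stacking many such layers, at vastly larger scale, is the “deep” in the deep learning discussed below.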
While he points to neural networks as one route to brain-inspired computing, he says they aren’t the most desirable approach.
“For many years there has been a lot of research in artificial neural networks, but when you start digging into the details on how those artificial neural networks work, they really aren’t being very faithful to how the brain actually computes,” said Wagner. “The brain, when it is computing, is computing at scale, so studying a few neurons might give you some clue as to what the brain is doing, but you won’t really see the phenomenon that you are really after until you are computing on a fairly large scale.”
One brain-inspired computing project currently in the works is experimenting with the idea of deep neural networks. Project Adam, an artificial intelligence project from Microsoft Research, uses multiple layers of neurons instead of the traditional neural-network approach of a single layer.
“One thing that we wanted to explore in Project Adam was does the size of neural networks matter, and if you increase the size and train them on more data, does scale really matter?” Chilimbi said. “The reason for believing that it might matter is because our brain is an absolute massive neural network with trillions of connections, so it might matter, and to do that we built a large-scale training system to train these massive neural networks.”
Project Adam takes an aggressive asynchronous approach in which machines operate autonomously on the data they see and periodically communicate and exchange information with the rest of the machines, according to Chilimbi.
“The surprising thing is that not only does this system still learn, but actually it turns out this asynchrony enables the system to actually learn better than if you didn’t have the asynchrony. Now you have a faster system because it is asynchronous, [which also means it is] better at training a system because it learns better,” he said.
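Microsoft has not published Project Adam’s training code, but the flavor of lock-free asynchronous learning can be sketched with a toy example in the Hogwild style: several threads run stochastic gradient descent on their own data shards and write into one shared weight vector without any coordination. This illustrates the general idea, not Adam’s actual system:

```python
import threading
import numpy as np

rng = np.random.default_rng(1)

# Synthetic task: recover w_true from noisy linear measurements.
w_true = np.array([2.0, -3.0, 0.5])
X = rng.normal(size=(3000, 3))
y = X @ w_true + 0.01 * rng.normal(size=3000)

def worker(w, shard, lr=0.01):
    # Each worker trains on its own shard, writing straight into the
    # shared weights without locks (racy, but it still learns).
    for xi, yi in zip(X[shard], y[shard]):
        grad = (xi @ w - yi) * xi
        w -= lr * grad

w = np.zeros(3)   # the shared model every worker updates asynchronously
threads = [threading.Thread(target=worker, args=(w, slice(1000 * i, 1000 * (i + 1))))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(w.round(2))   # close to [ 2. -3.  0.5] despite the races
```

The counterintuitive result Chilimbi describes, that skipping synchronization can help rather than hurt, is the same effect researchers have reported for this style of lock-free training.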
When will brain-inspired computing reach its full potential?
Early forms of brain-inspired computing are already in consumers’ hands. There’s speech recognition, image recognition, and location services for smartphones and other devices, and Google’s intelligent personal assistant, Google Now, answers questions, makes recommendations, performs actions, predicts what a user will want, and shows advertisements based on a user’s search habits. But those are only starting points.
In the future, the evolution of brain-inspired computing could enable buildings to optimize information, communication, lighting and climate for individual employees depending on what they are doing. Brain-inspired mobile devices could provide accurate nutritional information based on a picture. E-commerce sites could become concierge services, helping shoppers find items and gifts, or even assisting with event planning. The possibilities are endless, according to Forrester’s Goetz.
“The key will be how the intelligence is implemented in our everyday environment,” she said. “Today, the implementations are about getting answers through things like natural language processing. The opportunity is where you put this ‘brain.’ ”
So, what’s holding the technology back?
“We don’t fully understand the brain; we just understand limited things, but the brain has a lot of complexity and other stuff we don’t understand,” said Microsoft’s Chilimbi.
One of the challenges is scaling. Brain-inspired computing on a conventional computer doesn’t scale well because the neurons are communicating with one another all the time, so any group of collaborating neurons runs into a scaling problem, according to Wagner.
“We don’t think we are going to get around that scale problem until the hardware itself is actually built to support that kind of massive communication,” he said.
There have been advancements in the area, but they still don’t match the brain. Microsoft’s Project Adam has been able to train large deep-learning neural networks with hundreds of billions of connections, but the brain has hundreds of trillions of connections.
“The brain is not mysterious; it is merely very complex and messy,” said Oxford’s Sandberg. “The basic functions of simple units sending simple signals in a network are well understood. It is just that when you connect them in clever ways you can get emergent phenomena that are surprisingly smart. Much of the secret lies in adding learning rules to the units and then providing them with data.”
Commercial apps that harness some form of brain-inspired design are already in the works, but Wagner says it is going to take breakthroughs, such as solving the scaling issue, for brain-inspired computing to reach its potential.
“I think there are incremental improvements going on right now and in a lot of different areas, so it isn’t like we are going to wait until a brain-inspired computer is done and then say here it is,” he said. “It is going to be stops along the way.”
Why brain-inspired computing?
“Part of the reason for trying to mimic the brain is to try to understand why the brain is so much better at recognizing patterns, and how can it do some of these tasks like speech recognition, natural language understanding and vision so much better than computers,” said Chilimbi.
But image recognition, speech recognition, and technology that adapts to its environment and humans aren’t the only benefits to be reaped from brain-inspired computing.
One major benefit researchers hope to get out of brain-inspired computing is energy efficiency. “Our brain is our example of a powerful computer that is only using about 20 watts of power,” said Sandia’s Wagner. “Brain-inspired computers could provide orders-of-magnitude power improvements.”
According to Sandberg, the trick is that the brain is noise-resistant and doesn’t stop functioning because of small amounts of error or a few failed parts, so one way brain-inspired computers could be energy-efficient, he says, is by using imperfect, cheaper chips or low-energy but noisier computations.
Other benefits include processing speed and the ability to tackle problems today’s computers can’t touch.
“In the end you are trying to observe something in the environment and make a prediction on what is going to happen next,” said Murat Okandan, microsystems researcher at Sandia Labs. “We can do that with our conventional machines, but with limitations in terms of power and space. If we are able to develop systems that can do much higher functionality, being able to predict things, being able to predict behaviors of objects that you’ve never seen before, you can make analogies to things seen in the past and be able to deploy that in a very small form factor with very low power consumption. That’s a very attractive goal.”
The risks of brain-inspired computing
With any technology there are repercussions, especially one designed to track and act on a user’s every move.
“This is going to be something that is going to affect all of us, and I think we need to have an open dialogue on what are the issues and risks,” said Microsoft’s Chilimbi.
Possible implications include:
Unethical use: Privacy has already been a concern with the Internet and the recent revelations from former NSA contractor Edward Snowden about the agency’s spying techniques. This technology is going to be gleaning information and data from a user and making decisions based on it. But what else will the technology do with that data? Where will the data be stored?
“I think the thing with technology such as this…is we tend to overestimate their impact in the short term, but underestimate their impact in the long term,” said Chilimbi.
Validation: “Brain-inspired computing is hard to validate; you can never be sure it is 100% correct or will always behave well,” said Sandberg.
Intelligence: “In the long run, brain-inspired systems may become very smart and hard to control,” said Sandberg.
“Can an intelligent machine break security firewalls while looking for answers?” said Goetz. “The real issue is striking the right balance between anticipating risk and allowing the growth of a highly valuable capability. Government regulations have not caught up with today’s technology, and the legislation being passed (net neutrality, for example) is being made to suit point-in-time business concerns and may not take into account the precedents being set, or may restrict further technical evolution.”
Other concerns include creepiness; inability to correctly vet bogus information; human input that biases learning; questionable use such as insurance eligibility and employment eligibility; and accidental data and knowledge acquisition, according to Goetz.
Types of brain-inspired computing
Brain-inspired computing is a broad term with many different aspects. In its basic form, brain-inspired computing is “methods of solving problems based on how parts of the brain solve them,” said Sandberg. But many different techniques fall under the term. Here are some of them:
Artificial Intelligence: AI is intelligence exhibited by machines. Examples include Apple’s Siri, Microsoft’s Cortana and Project Adam, and the BlackBerry Assistant.
“AI is allowing us to not just use or benefit from technology; we are no longer having to adapt our behavior to machines,” said Forrester’s Goetz. “Machines are adapting and understanding humans’ natural way of communicating and interacting in the environment. AI will not only augment our knowledge, it will scale that knowledge and automate it for our purposes in ways we haven’t been able to do before.”
Contextual Computing: The core of contextual computing is the idea that a device can collect information about a user’s world and use that information to influence the behavior of a user’s program or application.
Cognitive Computing: “Cognitive computing typically means some form of brain- or psychology-inspired computing. The system is supposed to work a bit like real minds. Some of this might be just rebranding artificial intelligence,” according to Sandberg.
Deep Learning: Deep learning is a set of machine-learning algorithms that builds and trains multi-layered neural networks to make it possible for machines to understand and represent the world. It can then use that information to help users make smarter choices.
Machine Learning: A part of artificial intelligence that focuses on how systems can learn from data. It is one of the most valuable tools in designing artificially intelligent systems. One example is an e-mail system that automatically knows what to mark as spam by learning what spam looks like (a minimal sketch follows this list).
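To ground the spam example, here is a minimal, hypothetical sketch of that kind of learning using scikit-learn’s naive Bayes classifier; the training e-mails and labels are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny invented training set: the labels teach the model what spam looks like.
emails = [
    "win a free prize now", "cheap meds limited time offer",
    "claim your free vacation today", "meeting moved to 3pm",
    "quarterly report attached", "lunch tomorrow with the team",
]
labels = [1, 1, 1, 0, 0, 0]   # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(emails)      # bag-of-words word counts
model = MultinomialNB().fit(counts, labels)    # learn word/label statistics

new_mail = ["free prize offer", "team meeting tomorrow"]
print(model.predict(vectorizer.transform(new_mail)))   # expect [1 0]
```

A production spam filter learns from millions of messages rather than six, but the principle is the same: the system is shown labeled data and infers the pattern itself.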
Brain-inspired computing is already in the works
Agent: An agent is an example of a contextual computing application. It detects when a user’s battery is low and can start to preserve power. It can tell when a user is sleeping and automatically silence his or her phone. It perceives when a user is in a meeting, and even recognizes when a user is driving.
IBM: IBM’s supercomputer, Watson, is known as a cognitive technology that functions more like a human than a computer. It has been in development since 2005 and is famously known for its “Jeopardy!” win over Ken Jennings and Brad Rutter in 2011. Watson can read and understand natural language, learns by tracking user feedback on its successes and failures, and relies on hypothesis generation and evaluation.
Microsoft: Microsoft’s recently announced initiative, Project Adam, is designed to prove that training larger deep neural networks on more data improves task accuracy.
“The end goal is can we use machines to understand, represent and help explain the world and understand us, and kind of marry that together?” said Microsoft’s Chilimbi.
Currently, Adam uses 30x fewer machines than other systems, can recognize objects 50x faster than other systems, and is twice as accurate.
Sandia Labs: Sandia Labs is currently working on brain-inspired computing in a long-term research project on future computer systems.
“We’re evaluating what the benefits would be of a system like this and considering what types of devices and architectures would be needed to enable it,” said Sandia’s Okandan.
Wagner’s group is “focused just a little bit more on the algorithms, so we are working in several areas,” said Wagner. “One of the areas that we are most excited about comes from neuroscience, known as neurogenesis. It is the process in the brain, in the hippocampus, where the brain is actually adding new neurons, new cells, to help identify patterns that you haven’t seen before. The old neurons are sort of established at recognizing patterns that you have seen before, so one way you are going to accommodate new experiences is adding new cells through the process of neurogenesis.” (A loose sketch of this idea appears at the end of this section.)
Qualcomm: Qualcomm is working on Qualcomm Zeroth processors, which mimic the human brain and nervous systems to provide devices with embedded cognition. The company wants its processors to not only act like human brains, but also learn like brains. Qualcomm uses mathematical models that function like neurons when sending, receiving or processing information.
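Wagner describes neurogenesis only conceptually, and Sandia has not published an algorithm, so the Python sketch below is purely a loose illustration of the idea: established units keep recognizing familiar patterns, and a new unit is “born” whenever an input matches nothing the system has seen before. Every name and threshold here is hypothetical:

```python
import numpy as np

class GrowingPatternMemory:
    """Toy neurogenesis-flavored model: existing units handle familiar
    patterns; a new unit is added when an input is too novel for all of them."""

    def __init__(self, novelty_threshold=1.0):
        self.prototypes = []                # each unit remembers one pattern
        self.threshold = novelty_threshold  # hypothetical novelty cutoff

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if self.prototypes:
            dists = [np.linalg.norm(x - p) for p in self.prototypes]
            best = int(np.argmin(dists))
            if dists[best] < self.threshold:
                # Familiar pattern: nudge the existing unit toward the input.
                self.prototypes[best] += 0.1 * (x - self.prototypes[best])
                return best
        # Novel pattern: grow a new unit (the "neurogenesis" step).
        self.prototypes.append(x.copy())
        return len(self.prototypes) - 1

memory = GrowingPatternMemory()
for point in [[0, 0], [0.1, 0.2], [5, 5], [5.1, 4.9], [0.2, 0.1]]:
    unit = memory.observe(point)
    print(point, "-> unit", unit, "| units so far:", len(memory.prototypes))
```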