
Someday, your technology will be able to help you in ways you can only imagine, or so developers and researchers hope. They envision a world in which technology learns from the user and acts accordingly. That goal grows out of the idea of brain-inspired computing: software that runs on computers but functions like the brain rather than like a conventional computer.

“Today’s computers are wonderful at bookkeeping and solving scientific problems often described by partial differential equations, but they’re horrible at just using common sense, seeing new patterns, dealing with ambiguity and making smart decisions,” said John Wagner, cognitive sciences manager at Sandia National Laboratories.

Evolving technology to mimic the brain
As one can imagine, developing a brain-inspired computer or device is no small feat; researchers have been pursuing it since before the 1950s. In 1943, neuroscientist Warren S. McCulloch and logician Walter Pitts demonstrated how simple devices based on nerve cells could perform logical operations in the paper “A Logical Calculus of the Ideas Immanent in Nervous Activity.”
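The McCulloch-Pitts model reduces a nerve cell to a threshold unit: it “fires” (outputs 1) only when the weighted sum of its binary inputs reaches a threshold. A minimal Python sketch of that idea, using illustrative weights and thresholds rather than values taken from the 1943 paper, shows how such a unit can realize logical AND and OR:

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) if the weighted input sum meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both inputs must be active for the unit to fire.
def AND(x, y):
    return mcp_neuron([x, y], weights=[1, 1], threshold=2)

# Logical OR: a single active input is enough.
def OR(x, y):
    return mcp_neuron([x, y], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b}  AND={AND(a, b)}  OR={OR(a, b)}")
```

Chaining such units together yields more complex Boolean logic, which was the paper’s point: networks of nerve-like elements can compute.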

In the 1960s, much of the work revolved around more complex devices that mimicked nerve cells, and by the 1980s computers were powerful enough to support vision, data classification and artificial intelligence, according to Anders Sandberg, James Martin research fellow at the Future of Humanity Institute at Oxford University.

While progress on brain-inspired computers has been slow, the field has become far more prominent in the past couple of years. The fundamental training algorithms and neural network architectures are very similar to where brain-inspired computing was in the 1980s, according to Trishul Chilimbi, principal researcher at Microsoft Research. But industry changes and technological breakthroughs have driven today’s prominence.
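To make the comparison concrete, the core recipe of that era is a layered network of weighted units whose weights are nudged, example by example, to reduce the network’s error. Below is a rough, textbook-style sketch in Python with NumPy of backpropagation learning the XOR function; it is not any particular company’s system, and the architecture, learning rate and iteration count are arbitrary choices.

```python
import numpy as np

# A tiny two-layer network trained with backpropagation, the training
# algorithm popularized in the 1980s. Task: learn XOR from four examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```

What has changed since the 1980s is less the recipe than the scale: far larger networks, trained on far more data, on far faster hardware.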

Moore’s Law: A term coined around 1970, Moore’s Law states that the overall processing power of computers doubles roughly every two years. “With the advance of Moore’s law, we have much more computational power at our fingertips for training these systems,” said Chilimbi.
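The arithmetic behind that observation is simple compounding, sketched below in Python: doubling every two years works out to roughly a 32-fold increase per decade and about a thousandfold increase over 20 years.

```python
# Back-of-the-envelope Moore's Law projection: processing power doubles
# roughly every two years, i.e. a growth factor of 2 ** (years / 2).
def moores_law_factor(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

for years in (2, 10, 20, 40):
    print(f"After {years} years: ~{moores_law_factor(years):,.0f}x the processing power")
# -> roughly 2x, 32x, 1,024x and 1,048,576x respectively
```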

The Internet: “With the Internet and interconnectivity of devices and sensors, we have a lot more data for training these systems,” said Chilimbi. “We have computation to train much larger neural networks, and we have sufficient data to train these large networks, so I think that is a big thing that’s made a difference and why we are seeing a lot of progress right now.”

Big Data: “Big Data technologies have provided both the performance increases and cost reductions necessary to process data, content, video, audio, and images in a way that was just not possible 50+ years ago when AI started,” said Michele Goetz, Forrester analyst. “The amount and type of information wasn’t available in a digital format that could be a corpus for knowledge until now.”
