As artificial intelligence continues to evolve, so must the hardware that powers it. During his keynote address at the International Solid-State Circuits Conference this week, Facebook vice president and chief AI scientist Yann LeCun discussed the next step for reaching AI’s potential: shifting away from GPUs toward dedicated chips.

LeCun said such chips could deliver high compute performance at low power consumption, emulating the behavior of human neurons and working in tandem with software “dynamic networks,” neural networks that automatically restructure themselves to fit a variety of situations.
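“Dynamic network” is an umbrella term for models whose compute graph depends on the input rather than being fixed in advance. As a hypothetical toy example of the idea (not anything LeCun presented), the NumPy sketch below varies how many times a single shared layer is applied to each input, halting early once the representation settles:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared layer, applied a variable number of times per input.
# The 0.05 scale keeps the layer roughly contractive so iteration settles.
W = (0.05 * rng.normal(size=(64, 64))).astype(np.float32)

def dynamic_forward(x: np.ndarray, max_layers: int = 50):
    """Input-dependent depth: reapply the layer until the representation
    stops changing, up to max_layers. Returns the output and depth used."""
    for depth in range(1, max_layers + 1):
        y = np.tanh(W @ x)
        if np.linalg.norm(y - x) < 1e-2:  # settled early for this input
            return y, depth
        x = y
    return x, max_layers

for seed in (1, 2):
    x = np.random.default_rng(seed).normal(size=64).astype(np.float32)
    _, depth = dynamic_forward(x)
    print(f"input {seed}: used {depth} layers")  # depth varies per input
```

Because the amount of work varies from one input to the next, accelerators built around a fixed pipeline have a hard time with models like this, which is part of the design problem LeCun wants chipmakers to take on.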

“There are several types of hardware for deep learning that the industry needs at the moment,” LeCun said in his address. “There is one type of hardware which is very high-powered and doesn’t need to be particularly low-cost, and that is for training very large machine learning systems. That is required for a sort of systematic exploration of what we can do with deep learning and AI. A second need is for training models that have already been designed — so let’s say you’re Facebook and you want to train a new translation system because you have more data, or you want to train an image recognition system because there are more objects that need to be recognized. You need a system that uses, for example, low-precision arithmetic, consumes less power, and is less expensive. So the need for fast communication between nodes in a system like this is not as stringent as for research and development, where you want a quick response.”

Another need, LeCun explained, is hardware to run neural nets in data centers, where they are commonly used to rank search results or curate news feeds on Facebook; these systems require low power but high speed.

And finally, LeCun said that embedded devices such as IoT-connected cars, lawnmowers, vacuum cleaners, cameras, phones, and AR goggles will require “neural net accelerators” at even lower power consumption and cost.

“This might require that we reinvent the way we do arithmetic in circuits,” LeCun said. “There is sort of a standard way of computing products and sums in computers which makes the results accurate. But for running neural nets, you don’t actually need that accurate a computation. So people are designing new ways of representing numbers that would be more efficient.”
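One concrete instance of “new ways of representing numbers” is low-bit quantization, which is already common for inference workloads. The NumPy sketch below is a minimal, hypothetical illustration (the function names are ours, not from LeCun’s talk or any particular library): it maps float32 weights to 8-bit integers plus a single scale factor, trading a little accuracy for arithmetic that is far cheaper in silicon.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric 8-bit quantization: represent float32 weights as
    int8 values plus one float scale factor."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=1000).astype(np.float32)
q, scale = quantize_int8(w)
print(f"mean absolute error: {np.abs(w - dequantize(q, scale)).mean():.5f}")
```

An 8-bit integer multiplier needs a fraction of the chip area and energy of a 32-bit floating-point unit, which is exactly the trade LeCun describes: give up precision a neural net doesn’t need in exchange for cheaper, lower-power arithmetic.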

While hardware has historically led the design of software, LeCun explained, the rise of dynamic neural networks means that hardware designers need to start accounting for the specifics of such fluid models and design new, dedicated systems accordingly.

“In the brain, for example, your neurons, at any one time, are only about 2 percent active,” LeCun said. “We call this ‘sparse activations.’ And what we would like is to have neural networks that are also very sparse, and what we need is hardware systems to be able to take advantage of this sparsity because it might be good for power consumption.”
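Hardware can exploit that sparsity by skipping the multiply-accumulate operations whose inputs are zero. The toy NumPy sketch below illustrates the principle in software (it is not a hardware design): with roughly 2 percent of units active, about 98 percent of the work in a matrix-vector product contributes nothing and can be skipped without changing the result.

```python
import numpy as np

rng = np.random.default_rng(0)

# An activation vector where only ~2 percent of units fire,
# loosely mirroring the brain-like sparsity LeCun describes.
n = 10_000
activations = np.zeros(n, dtype=np.float32)
active = rng.choice(n, size=n // 50, replace=False)
activations[active] = rng.random(active.size).astype(np.float32)

weights = rng.normal(size=(512, n)).astype(np.float32)

# Dense path: multiplies every weight, zeros included.
dense_out = weights @ activations

# Sparse path: only touch columns for units that actually fired.
idx = np.nonzero(activations)[0]
sparse_out = weights[:, idx] @ activations[idx]

print(np.allclose(dense_out, sparse_out, atol=1e-3))  # True
print(f"multiplies skipped: {1 - idx.size / n:.0%}")  # 98%
```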

While much of the hardware he described remains theoretical, LeCun predicted it will be developed over the next five to 10 years.