Machine learning isn’t the only term getting all the buzz. Deep learning, a class of machine learning algorithms, is showing great promise, primarily because it’s getting results.
Deep learning was historically inaccessible because it placed such high demands on computational resources and data, but as technology has progressed, storage costs have come down and computational power has gone up, said Mark Hammond, CEO of Bonsai. It still demands a lot of resources, but a few organizations are focused on making the technology and research accessible and easy to use (think OpenAI or Google’s DeepMind).
According to Mike Gualtieri, vice president and principal analyst at Forrester Research, machine learning tries to find a predictive model, whereas deep learning is based on a hierarchical network roughly fashioned after the human brain. Because it’s modeled after the brain, people argue that deep learning actually lets systems “learn,” he said.
“Does it learn the way we learn? Roughly,” said Gualtieri. “The reality is we are not sure. It’s a very narrow scope; you can train a machine, but it’s always about predicting one thing. Our brain and your brain can do much more than just predict the next movie someone likes. It’s not generalized learning.”
Deep learning is a subfield of machine learning based on neural networks, which were first written about in 1943 by neurophysiologist Warren McCulloch and mathematician Walter Pitts. To describe how neurons in the brain might work, they modeled simple neural networks using electrical circuits. According to Gualtieri, deep learning neural networks were hard to train until a research breakthrough around 2012 made deep learning practical. The reason deep learning is just now getting all the attention is that it’s producing new state-of-the-art results.
“The fact that these technologies are easier to apply, easier to scale, easier to get good results, means there is just a huge range of applicability,” said Ted Dunning, chief application architect at MapR. “Maybe even in three years it will be a standard part of a developer’s toolkit to build simple [deep learning] models and integrate them into their systems.”
Machine learning, and specifically deep learning, has allowed researchers to do a number of things substantially better than before, said Ash Munshi, CEO of Pepperdata.
For instance, using deep learning, we can understand images and speech, translate languages, play games, and build self-driving cars; none of these advancements was possible just a few years ago, said Munshi.
Getting started with machine and deep learning
There are plenty of open-source frameworks for deep learning and machine learning, but which one to adopt depends on your goals and the problem at hand. Here are some popular deep learning and machine learning frameworks and projects recommended by our machine learning experts, each followed by a brief code sketch to give a feel for its style:
Keras: A high-level neural networks API written in Python. It can run on top of TensorFlow or Theano. Use Keras if you want a deep learning library that supports fast prototyping and convolutional and recurrent networks, and you want to work in Python.
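To give a flavor of the fast prototyping Keras is known for, here is a minimal sketch of a small classifier; the layer sizes and the 100-feature input are arbitrary choices for illustration.

```python
from keras.models import Sequential
from keras.layers import Dense

# A tiny feed-forward classifier: 100 input features, 10 output classes
# (both are arbitrary choices for this sketch).
model = Sequential([
    Dense(64, activation='relu', input_shape=(100,)),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()  # prints the layer-by-layer architecture
```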
TensorFlow: This popular open-source deep learning framework leverages Google’s infrastructure for scalable training. It provides rich, higher-level tools for language, image and video understanding.
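As a minimal sketch of what working directly with TensorFlow looks like (assuming TensorFlow 2.x, where eager execution is the default):

```python
import tensorflow as tf

# Two constant tensors and a matrix multiplication, executed eagerly.
a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
print(tf.matmul(a, b))  # tf.Tensor([[11.]], shape=(1, 1), dtype=float32)
```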
Theano: A Python library that lets developers define and evaluate mathematical expressions involving multi-dimensional arrays. It can transparently use GPUs and performs efficient symbolic differentiation.
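A minimal sketch of Theano’s symbolic style: define an expression, ask for its gradient, and compile both into a callable function. Theano will run the compiled function on a GPU if one is configured.

```python
import theano
import theano.tensor as T

x = T.dmatrix('x')                 # symbolic matrix variable
y = T.sum(x ** 2)                  # symbolic expression
gy = T.grad(y, x)                  # symbolic differentiation: d(sum(x^2))/dx = 2x
f = theano.function([x], [y, gy])  # compile into a callable

print(f([[1.0, 2.0], [3.0, 4.0]]))  # [30.0, [[2, 4], [6, 8]]]
```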
MXNet: An open-source deep learning framework backed by Amazon. MXNet produces compact models that require less memory and CPU for inference.
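A minimal sketch using Gluon, MXNet’s imperative Python API; the layer sizes and random input batch are arbitrary assumptions for illustration:

```python
import mxnet as mx
from mxnet.gluon import nn

# A small two-layer network; sizes are arbitrary for this sketch.
net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'),
        nn.Dense(10))
net.initialize()

x = mx.nd.random.uniform(shape=(4, 100))  # a random batch of 4 examples
print(net(x).shape)                        # (4, 10)
```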
Caffe: A deep learning framework made with expression, speed and modularity in mind. Speed is a big feature in Caffe; it can process over 60 million images per day with a single NVIDIA K40 GPU.
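Caffe networks are typically defined in prototxt configuration files rather than in code. A minimal sketch of its Python interface, assuming you already have a network definition and trained weights on hand; the file names here are placeholders:

```python
import numpy as np
import caffe

# 'deploy.prototxt' and 'weights.caffemodel' are placeholder file names.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# Fill the input blob (assumes the net's input blob is named 'data').
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
out = net.forward()  # run inference; returns a dict of output blobs
```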
Sonnet: This is a library built on top of TensorFlow (TF) for building complex neural networks. Google’s DeepMind develops the codebase, which provides reusable modules for constructing networks with TF. Models written in Sonnet can be freely mixed with raw TF code and other high-level libraries.
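A minimal sketch of that “mix with raw TF” idea, assuming Sonnet 2 and TensorFlow 2 (the layer sizes and random input are arbitrary): a Sonnet module produces logits, and ordinary TensorFlow ops operate on them directly.

```python
import tensorflow as tf
import sonnet as snt

mlp = snt.nets.MLP([128, 10])   # a Sonnet module: a two-layer MLP
x = tf.random.normal([8, 784])  # a random batch; sizes are arbitrary
logits = mlp(x)                 # Sonnet modules are callables on TF tensors
probs = tf.nn.softmax(logits)   # raw TF ops mix freely with module output
print(probs.shape)              # (8, 10)
```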
Scikit-learn: This is a toolkit for classical machine learning, since not all problems need the power of deep learning. Scikit-learn provides a rich set of tools for data mining and data analysis.
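A minimal sketch of the classical workflow scikit-learn supports: load a dataset, fit an estimator, and score it on held-out data.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Classical machine learning: no neural network required.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on the held-out test set
```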