Topic: neural network

How hardware design needs to change to match up with AI needs

As artificial intelligence continues to evolve and grow, so must the hardware we use to power AI. During his keynote address at the International Solid-State Circuits Conference this week, Facebook vice president and chief AI scientist Yann LeCun discussed the next step for reaching AI’s potential: shifting away from GPUs and moving more towards dedicated chips.  … continue reading

Transformer-XL from Google makes long-term context in neural networks more practical

Google has proposed Transformer-XL, a new natural language understanding architecture that moves beyond a fixed-length context. Transformer-XL aims to make long-range dependency more practical in neural networks by using attention models. According to the company, long-range dependency is the contextual ability humans have to parse information that depends on something they’d read much earlier in a … continue reading
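The core idea behind extending context this way is to let attention for the current text segment also look at hidden states cached from the previous segment. The sketch below is a minimal, hypothetical illustration of that idea in NumPy (single-head attention, made-up dimensions), not Google's actual implementation:

```python
import numpy as np

def attention_with_memory(q, kv, mem):
    """Single-head attention whose keys/values include a cached
    memory of states from the previous segment, illustrating the
    segment-level recurrence idea behind Transformer-XL."""
    # Extend keys/values with the cached states; in training,
    # gradients would not flow back into the cache.
    kv_ext = np.concatenate([mem, kv], axis=0)    # (mem_len + seg_len, d)
    scores = q @ kv_ext.T / np.sqrt(q.shape[-1])  # (seg_len, mem_len + seg_len)
    # Numerically stable softmax over the extended context.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ kv_ext                       # (seg_len, d)

rng = np.random.default_rng(0)
d, seg_len, mem_len = 8, 4, 6
seg_h = rng.normal(size=(seg_len, d))             # current segment's states
mem = rng.normal(size=(mem_len, d))               # states cached from the previous segment
out = attention_with_memory(seg_h, seg_h, mem)
print(out.shape)  # (4, 8)
```

Because the cache carries information forward segment by segment, a token can attend to context far beyond the current fixed-length window.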

SD Times news digest: Atom pull requests, MIT CSAIL’s depression recognition model, and dtSearch’s Intraspexion update

GitHub wants to make it easier to view and interact with pull requests in Atom thanks to a new GitHub package. Recent pull requests will now display information such as the author’s avatar, title of the pull request, pull request number, CI status, and “Last Updated” details. Clicking on a pull request will launch a … continue reading

DeepMind AI model can learn from own memory, Zuckerberg seeks voice actor for home AI, and AWS/VMware deliver new vSphere cloud offering—SD Times news digest: Oct. 14, 2016

DeepMind, an artificial intelligence firm that is now under the Alphabet umbrella, has developed differentiable neural computers (DNCs), which can learn from examples like neural networks, but can store complex data like actual computers. When DeepMind designed DNCs, it wanted machines that could form and navigate complex data structures on their own. Inside … continue reading