Technology leaders are working to move artificial intelligence beyond its infancy and achieve breakthroughs in cognitive solutions.
IBM Research announced it is teaming up with the Department of Brain and Cognitive Sciences (BCS) at MIT to accelerate the development of machine vision. Together, the organizations will form the IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension (BM3C).
BM3C is a multi-year collaboration to develop cognitive computing systems that emulate the way humans understand audio and visual information. BM3C researchers will investigate pattern recognition and prediction methods, as well as next-generation models, to advance machine vision.
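The announcement does not describe specific architectures, but a common starting point for joint audio-visual understanding is a model that encodes each modality separately and fuses the results into a single prediction. The sketch below, in PyTorch, is purely illustrative; all names, dimensions, and the late-fusion design are assumptions for demonstration, not details of BM3C's actual research.

```python
import torch
import torch.nn as nn

class AudioVisualClassifier(nn.Module):
    """Toy late-fusion model: separate encoders per modality,
    with the concatenated features feeding a shared prediction head."""

    def __init__(self, audio_dim=128, visual_dim=512, hidden=256, num_classes=10):
        super().__init__()
        # Encoder for precomputed audio features (e.g., spectrogram embeddings)
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        # Encoder for precomputed visual features (e.g., CNN frame embeddings)
        self.visual_encoder = nn.Sequential(nn.Linear(visual_dim, hidden), nn.ReLU())
        # Joint head predicts a label from the fused representation
        self.head = nn.Linear(hidden * 2, num_classes)

    def forward(self, audio, visual):
        fused = torch.cat(
            [self.audio_encoder(audio), self.visual_encoder(visual)], dim=-1
        )
        return self.head(fused)

# Example: classify a batch of 4 audio-visual clips
model = AudioVisualClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 10])
```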
(Related: UC Berkeley opens up a research center for AI)
According to the organizations, the ability to understand and integrate inputs from a variety of audio sources and visual information can be useful in the healthcare, education, and entertainment industries.
“In a world where humans and machines are working together in increasingly collaborative relationships, breakthroughs in the field of machine vision will potentially help us live healthier, more productive lives,” said Guru Banavar, vice president of cognitive computing at IBM Research. “By bringing together brain researchers and computer scientists to solve this complex technical challenge, we will advance the state of the art in AI with our collaborators at MIT.”
According to the researchers, enabling cognitive systems to learn from, interact with, and make decisions based on massive amounts of unstructured data will require advances in machine learning, reasoning, vision, decision techniques, and computing infrastructure.
BM3C will be led by James DiCarlo, head of MIT's BCS department, and supported by researchers and graduate students from BCS as well as MIT's Computer Science and Artificial Intelligence Laboratory. In addition, the lab will work with IBM scientists and engineers, who will contribute expertise and technology from IBM Watson.
“Our brain and cognitive scientists are excited to team up with cognitive computing scientists and engineers from IBM to achieve next-generation cognitive computing advances as exposed by next-generation models of the mind,” said DiCarlo. “We believe that our fields are poised to make key advances in the very challenging domain of unassisted real-world audio-visual understanding and we are looking forward to this new collaboration.”
Google acquires API.AI
Meanwhile, Google recently announced that API.AI, a provider of conversational voice interfaces for mobile and web, is joining the company. The acquisition is part of Google's commitment to investing in and advancing core machine learning technology. API.AI's platform has been used in chat bots, connected cars, smart homes, mobile devices, robots, and wearable devices.
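In practice, developers integrated API.AI by sending user utterances to the platform and acting on the intent it returned. Below is a minimal sketch using Python's requests library, assuming API.AI's v1 /query REST endpoint as it existed at the time; the token, session ID, and sample utterance are hypothetical placeholders.

```python
import requests

# Hypothetical client access token; API.AI issued one per agent.
CLIENT_ACCESS_TOKEN = "YOUR_CLIENT_ACCESS_TOKEN"

def query_agent(text, session_id="demo-session"):
    """Send a user utterance to an API.AI agent and return the parsed result.
    Assumes the v1 /query endpoint API.AI exposed at the time."""
    response = requests.post(
        "https://api.api.ai/v1/query",
        params={"v": "20150910"},  # API protocol version
        headers={"Authorization": "Bearer " + CLIENT_ACCESS_TOKEN},
        json={"query": text, "lang": "en", "sessionId": session_id},
    )
    response.raise_for_status()
    result = response.json()["result"]
    # The agent returns the matched intent's action, any extracted
    # parameters, and a text reply to speak or display.
    return result["action"], result["parameters"], result["fulfillment"]["speech"]

action, params, reply = query_agent("Turn on the living room lights")
print(action, params, reply)
```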
“Our vision has been to make technology understand and speak human language and help developers build truly intelligent conversational interfaces for their products and services,” wrote Ilya Gelfenbeyn, CEO of API.AI, in a blog post.
Google hasn’t revealed how API.AI will be integrated, but the company said it will continue building its conversational interfaces and improving its platform.
“API.AI offers one of the leading conversational user interface platforms, and they’ll help Google empower developers to continue building great natural language interfaces,” wrote Scott Huffman, vice president of engineering at Google, in a blog post.