Facebook has built a deep learning-based text understanding engine called DeepText, which can understand with near-human accuracy the textual content of more than a thousand posts per second, covering more than 20 different languages.
DeepText relies on deep neural network architectures, including convolutional and recurrent neural nets. It uses FBLearner Flow and Torch for model training, and it can perform word-level and character-level learning, according to a Facebook blog post.
Deep learning is helping engineers get machines closer to how humans understand text. To do this, computers need to handle slang and word-sense disambiguation, according to Facebook. The example Facebook uses is “I like blackberry,” where a machine has to determine whether the word refers to the phone or the fruit.
Understanding text at that scale is what creates problems for traditional natural language processing (NLP) techniques, but using deep learning, Facebook says it can understand text across multiple languages more efficiently than those traditional methods.
DeepText builds on and extends ideas in deep learning originally developed in papers by Ronan Collobert and Yann LeCun of Facebook AI Research, according to Facebook. The company says DeepText and its applications will continue to advance in the near future through collaboration with the Facebook AI Research group.
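Facebook’s post doesn’t include code, but the general technique it describes, character-level learning with a convolutional network for text classification, can be sketched in a few lines. The Python/PyTorch snippet below is purely illustrative (PyTorch stands in for the Torch and FBLearner Flow stack mentioned above), and the class name, dimensions and the two-way “phone vs. fruit” label set are assumptions for the example, not Facebook’s actual model.

    import torch
    import torch.nn as nn

    class CharCNNClassifier(nn.Module):
        """Toy character-level CNN text classifier (illustrative only)."""
        def __init__(self, num_chars=128, embed_dim=16, num_classes=2):
            super().__init__()
            self.embed = nn.Embedding(num_chars, embed_dim)   # one vector per character
            self.conv = nn.Conv1d(embed_dim, 64, kernel_size=5, padding=2)
            self.pool = nn.AdaptiveMaxPool1d(1)               # max over the whole sequence
            self.fc = nn.Linear(64, num_classes)              # e.g. "phone" vs. "fruit"

        def forward(self, char_ids):                          # char_ids: (batch, seq_len)
            x = self.embed(char_ids).transpose(1, 2)          # (batch, embed_dim, seq_len)
            x = torch.relu(self.conv(x))
            x = self.pool(x).squeeze(-1)
            return self.fc(x)

    # Encode "I like blackberry" as character codes and classify it.
    text = "I like blackberry"
    ids = torch.tensor([[min(ord(c), 127) for c in text]])
    logits = CharCNNClassifier()(ids)  # untrained here, so the output is meaningless

A real system would train on labeled examples of each sense; the point is only that the model reads raw characters rather than depending on a fixed word vocabulary.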
Uber turns to the Middle East for cash
According to a report in The New York Times, Uber has turned to the Middle East for its biggest infusion of cash from a single investor, raising US$3.5 billion from Saudi Arabia’s Public Investment Fund. This is one of the largest-ever investments in a privately held startup.
The investment took months to finalize, and it does not cash out any of Uber’s existing investors, according to the Times. Saudi Arabia plans to overhaul its economy by reducing its dependence on oil, and Uber views the Middle East as an area in which to expand its ride-sharing business.
Since Saudi Arabia does not allow women to drive, Uber provides a much-needed service to women, and roughly 80% of Uber’s riders in Saudi Arabia are women, according to the company.
IBM and Cisco partner up
IBM and Cisco are teaming up to provide instant Internet of Things (IoT) insights, so businesses in remote and autonomous locations can combine the power of IBM’s Watson IoT with Cisco’s edge analytics technology to understand their data at the network edge.
Companies need to tap into their vast amounts of data to get real-time insights, and to address this need, Cisco and IBM are offering a new approach that targets companies operating at the edge of networks. That includes oil rigs, factories, shipping companies and mines, or any other operation where bandwidth might be limited.
“The way we experience and interact with the physical world is being transformed by the power of cloud computing and the Internet of Things,” said Harriet Green, general manager for IBM Watson IoT. “For an oil rig in a remote location or a factory where critical decisions have to be taken immediately, uploading all data to the cloud is not always the best option.”
Workers in areas with limited bandwidth can now monitor the health and behavior of machinery so they can better plan upgrades and maintenance.
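Neither company has published the underlying code, but the core idea of edge analytics, filtering or summarizing sensor data locally so only the interesting part crosses a constrained link, is easy to sketch. The Python below is a minimal illustration assuming a simple z-score rule and made-up vibration readings; it is not IBM’s or Cisco’s implementation.

    from statistics import mean, pstdev

    def edge_filter(readings, threshold=2.0):
        """Keep only readings that deviate sharply from the local norm,
        so only anomalies need to be uploaded over a limited connection."""
        mu = mean(readings)
        sigma = pstdev(readings) or 1.0   # guard against flat (zero-variance) data
        return [r for r in readings if abs(r - mu) > threshold * sigma]

    # Hypothetical vibration readings from a pump; only the spike leaves the site.
    readings = [0.9, 1.1, 1.0, 0.95, 7.4, 1.05]
    to_upload = edge_filter(readings)     # -> [7.4]

Everything else could be rolled up into periodic summaries, which is the kind of bandwidth saving the partnership is aimed at.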
Companies that are working with IBM Watson IoT and Cisco Edge include Bell Canada, a communications company in Canada; the port of Cartagena, Colombia; and SilverHook Powerboats, a company that designs high-speed racing watercraft.
Google’s goals for Project Magenta
The Google Brain team officially announced Project Magenta, a project that aims to answer the question, “Can we use machine learning to create compelling art and music?” For Project Magenta, the Google Brain team will use TensorFlow, and it will release its models and tools as open source on GitHub.
Magenta’s goal is to serve as a research project that advances the state of the art in machine intelligence for music and art generation. Machine learning has been used in areas like speech recognition, and Google wants to use it to explore art and music, and to develop algorithms so an AI can create art on its own.
The other goal of Magenta is to build a community of machine learning researchers, coders and artists. Since Magenta is open source, anyone can begin using it to create their own music or art, and researchers can build on and extend its machine learning models.
Researchers or coders can currently check out the alpha version code of Magenta. Once the Google Brain Team has a stable set of tools and models, it will invite external contributors to check in code on GitHub. For musicians or artists, Google is hoping that they will use these tools for their own creative purposes.
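Google hasn’t said what Magenta’s first models will look like beyond the alpha code, so the snippet below is only a rough sketch of the kind of TensorFlow model involved: a tiny recurrent network that predicts the next note of a melody. The window size, pitch encoding and random stand-in training data are assumptions made for illustration, not Magenta code.

    import numpy as np
    import tensorflow as tf

    NUM_PITCHES, WINDOW = 128, 16  # MIDI pitch range and input window length

    # A small next-note model: given a window of pitches, predict the next one.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(NUM_PITCHES, 32),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(NUM_PITCHES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # Random stand-in data; a real experiment would use a corpus of melodies.
    x = np.random.randint(0, NUM_PITCHES, size=(256, WINDOW))
    y = np.random.randint(0, NUM_PITCHES, size=(256,))
    model.fit(x, y, epochs=1, verbose=0)

    # Ask the model for the pitch most likely to follow a simple seed phrase.
    seed = np.array([[60, 62, 64, 65, 67, 69, 71, 72] * 2])
    next_pitch = int(np.argmax(model.predict(seed, verbose=0)))

Sampling from the softmax output instead of taking the argmax, and feeding each prediction back in as input, would generate a full melody one note at a time.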