With all of the advances being made in artificial intelligence and machine learning today, there is an influx of tools and solutions that claim to leverage cognitive capabilities. But do these tools and solutions actually reflect the true meaning of AI?
According to Jonas Nwuke, platform manager for IBM Watson, AI (or cognitive computing) is “intended to help people make better decisions. The system learns at scale, gets better through experience, and interacts with humans in a more natural way.”
The problem in today’s software development industry is that because AI is such a big umbrella term, it often gets misused or overused, according to Todd Anglin, chief evangelist at Progress. “In some ways it is going to get overused just by its very nature. It stands for a lot of things, but as a result a lot of times it gets used somewhat inappropriately,” he said.
For instance, AI is a great marketing term and developers will often just slap it onto their solutions to make them stand out, even if they really aren’t using true AI capabilities, Anglin explained. “The average user of an app will assume since it has AI in it, it has to be smart, and it has to be great,” he said.
In other cases, developers will use the term to convey that the application is trying to do something for the user. For example, Anglin said Tesla uses the term “self-driving cars” for its vehicles because it is easier for the larger community to understand, but in reality the vehicles aren’t completely self-driving because they need a human to babysit the wheel in case something goes haywire.
And then there are developers who just don’t understand the true meaning of artificial intelligence. AI’s recent successes have made it significantly easier for developers to download a package from the Web and train it on a single task such as image recognition, visual processing, or natural-language processing. But that is not the meaning of AI, according to Massimiliano Versace, CEO of Neurala, a company whose mission is “to make software smarter.”
“Artificial intelligence is really taking the brain, and trying to emulate it in software,” he said. “The brain is more than just recognizing an object. It is thinking. It is perceiving. It is action. It is emotion.”
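The single-task training Versace is contrasting with true AI really is that accessible. As a minimal sketch (the library and data set here are illustrative choices, not ones Versace named), a developer can download scikit-learn and have a working digit recognizer in a dozen lines:

```python
# Train an off-the-shelf classifier on one data set: handwritten digits.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1,797 8x8 grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001)     # a support vector classifier
clf.fit(X_train, y_train)  # "train it on a single data set"
print("accuracy: %.3f" % clf.score(X_test, y_test))
```

The result recognizes digits well, but it does nothing else: no perception beyond that one task, no action, no reasoning. That gap is Versace’s point.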
According to Versace, there are three main ingredients that are necessary for true intelligence: the brain, the body, and the mind. The brain consists of the algorithms or mathematics behind the software, working with incoming data. The body is the hardware the intelligence lives in. And the mind is the computing power that runs the algorithms.
Today, all three ingredients are converging, making it easier and more cost-efficient for the field to take off, Versace explained.
“AI is on a very good path right now. The challenge is creating an application that goes beyond just image recognition,” he said.
Anglin believes simple image recognition, object detection, and challenge-and-response systems do use some level of AI, but developers need to be a bit more judicious about how they label it in their applications. Instead, developers should focus on the underlying concepts like machine learning or deep learning. “We have to spend more time looking at those things and how they are being leveraged or used in software,” he said. “When we talk about AI, what are we really saying?”
Gartner researchers Tom Austin, Alexander Linden, and Martin Reynolds recently released a report on how the industry should define and use smart machine terms effectively. According to the report, people should use descriptive terms that differentiate artificial intelligence from human intelligence, and avoid marketing terms like “AI” or “cognitive capabilities.” Gartner believes “smart machines” is a more appropriate and less objectionable term.
“Assigning human attributes to technology distorts our understanding of what that technology can truly accomplish,” according to Gartner’s research. “Smart machine technologies adapt their behavior based on experience, are not totally dependent on instructions from people (they learn on their own), and are able to come up with unanticipated results.”
The digital brains behind AI
Artificial intelligence is a feature developers and businesses are striving to include in their services and solutions, but to do so, those businesses and developers are going to have to leverage digital brains that are already in place, according to Progress’ Anglin.
He explained that, as more people start using machine learning and AI-based systems, those systems will get smarter and smarter, which in turn makes them more useful and encourages even more people to choose them.
According to Anglin, the major digital brains in the space are:
Facebook: Facebook has an AI research department that is dedicated to advancing machine learning and developing intelligent machines. Recently the company open-sourced its AI hardware design, codenamed Big Sur, which handles AI computing at a large scale. It also announced new algorithms such as DeepMask, a segmentation framework; DeepText, the company’s deep learning-based text understanding engine; SharpMask, a segment refinement model; and MultiPathNet, an object detection solution. In addition, Facebook CEO Mark Zuckerberg announced plans to unveil his personal AI assistant to the world very soon.
Google: Google recently open-sourced its TensorFlow library for machine learning. According to Anglin, TensorFlow is a little more complex than other cognitive services, but it wraps up a lot of scientific programming and packages it in a way that the average application developer can leverage and embed in their applications (see the sketch after this list).
IBM Watson: IBM Watson is a cognitive system that is designed to understand data, reason, and learn at scale. It provides cognitive APIs that leverage natural language processing and machine learning—among other things—to analyze data, learn from data, and derive insights. “There is value to be gained from systems that go beyond general abstractions and reason in specialized ways,” said IBM’s Nwuke.
Microsoft: Microsoft provides cognitive services that allow developers to build Android, iOS and Windows apps using powerful intelligence algorithms. The services include APIs for vision, speech, language and knowledge.
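To make Anglin’s point about TensorFlow concrete, here is a minimal sketch of what “wrapping up the scientific programming” can look like, using TensorFlow’s 1.x-era Python graph API (the regression task and values are illustrative, not something Anglin described; current TensorFlow releases use eager execution instead):

```python
# Fit y = w*x + b to a few points with TensorFlow's 1.x-style graph API.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
w = tf.Variable(0.0)
b = tf.Variable(0.0)

pred = w * x + b
loss = tf.reduce_mean(tf.square(pred - y))  # mean squared error
train = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = np.array([2.0, 4.0, 6.0, 8.0])         # underlying rule: y = 2x

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        sess.run(train, {x: xs, y: ys})
    print(sess.run([w, b]))                 # converges toward [2.0, 0.0]
```

The gradient computation, the optimizer, and the numerical bookkeeping are all handled by the library; the developer only declares the model, which is exactly the packaging Anglin describes.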
“We will have our hands full of very effective digital brains from these major players, and people choosing to go out on their own will have a hard time really competing,” said Anglin.
However, Versace said Neurala is already building its own digital brain and is ahead of the game, despite being a smaller company. “We have a long history of building the building blocks of an artificial brain down to an individual neuron and up to big systems of hundreds of millions or even billions of neurons connected with synapses,” he said.
The company recently released the Neurala Brains for Bots SDK, allowing companies to integrate deep learning into their applications. According to Versace, the SDK is something the major players haven’t been able to provide as of yet.
The technology currently targets drones, enabling them to learn, recognize, find and follow objects. But Versace explained it can be applied to computers, phones and other machines. “We are not going to build a vertical. We are going to build a platform and let people build a vertical,” he said.
Should we fear artificial intelligence?
There is a fear that artificial intelligence will not only replace our jobs in the future, but that the machines will also get so smart they will take over and end civilization. “Some people attach a stigma to artificial intelligence; they think the technology somehow challenges or endangers the world as we know it,” said IBM’s Nwuke. But he believes the future of artificial intelligence just means machines will be able to bring more value to businesses, professionals and consumers.
“The technology offers a level of collaboration between man and machine that augments and expands human efforts,” he said.
Society creates that stigma because of the way AI is sometimes portrayed, according to Progress’ Anglin. Movies have a classic AI villain that takes over, but that notion is a little overblown, he said. The problem is that most of society doesn’t have enough of an understanding of the technology.
“It is less feared in engineering circles with developers because there is a slightly more hands-on understanding of the technology,” said Anglin. “It is less magic and more machine.”
Anglin doesn’t doubt that there is potential for AI to be abused or used for harm in the future, but that is a possibility with any tool or technology, he explained. “I would say, broadly speaking, it is not the artificial intelligence that will get us there. It is humans that will get us there,” he said.
After all, people will be making and teaching the machines, and they have to be responsible for this technology. It is a developer’s job to make sure they put the right safeguards in place and educate people on the benefits of the technology, Anglin explained.
“There are many fantasies about AI that people perpetuate, beginning with the assumption that we can build an artificial intelligence,” said Gartner’s Austin. “We can’t. If too many senior executives buy into anthropomorphic assumptions about conversational interfaces—for example, they are indistinguishable from people [or] they can learn by observing everything they need to know to replace all the people in your call center—then too many projects will fail and be shut down.”
The next step
Today, while AI is most commonly cited for image recognition, natural language processing and voice recognition, this is just the very beginning of what we think of as learning, according to Anglin.
“Our current AI state is that of a toddler,” he said. “It can understand what it sees, what it hears, and then tell you what it is seeing.” This is just the early stage for AI; for businesses to really gain value from it, the next step will be the ability to make more connections and to reason about the relationships between objects, he explained.
“We are progressing down a path from going from a toddler to a slightly more capable learning machine,” said Anglin.
Neurala’s Versace wants to see the industry go beyond what people currently think of as AI. “AI means being able to have a piece of software that is functionally indistinguishable from a human,” he said. “There is an infinite number of applications AI can be applied to.”
With the explosion of platforms and sensors, Versace said AI is now more important than ever. “Every job where there is a human looking at a screen, we can get rid of the human and process their job with AI so they can go do other things,” he said.
From here, developers and data scientists will have to look at how we understand the connections between objects, and go beyond the very basics of picking out an object toward understanding everything in a picture.
“This is a moment in time where we all need to collectively lift our heads as a software developer community and say, ‘Alright, what are the major patterns that we could start applying to user software and apply it differently?’” said Anglin. “We will see a lot of that happen over the next two to three years where people really hit the pause button, think about the next era of software they are going to create, and figure out what those scenarios look like.”
IBM’s Nwuke already sees it happening with the company’s customers. “Developers have started commercializing their ideas across retail, health, banking, sports and more. We’re inspired by what this community has created, and together, we’re building a future where cognitive technologies will positively impact every facet of our lives, both at work and at home,” he said.
In the short term, Ilya Tabakh, founder of Edge Up Sports, predicted we will see better voice recognition, better understanding of body posture, the ability to understand the emotional state of a user, and more technologies that improve our lives.
The point of it all is to solve real pain points for users, according to Anglin. “Developers have to be mindful and not rush a technology into an app or in front of a user if it does not in fact actually make the user’s life better,” he said. “How do we really apply this intelligently, not just apply it in a haphazard kind of way?”