Machine learning brings new opportunities to software security by offering new ways to handle data, detect malware and improve existing defenses. The problem with machine learning today, however, is that it can be marketed as a silver bullet for the security industry when in reality the technology still has many weaknesses. Hyrum Anderson, technical director of data science for cybersecurity provider Endgame, presented research on machine learning malware evasion at this week’s Black Hat USA 2017 conference in Las Vegas.

“I want you to know I am an advocate of machine learning for its ability to detect things that have never been seen,” Anderson said. “[But] machine learning has blind spots and depending on what an attacker knows about your machine learning model, they can be really easy to exploit.”

Anderson explained that machine learning is not just susceptible to evasion attacks; it is susceptible to evasion attacks carried out by other machine learning methods. Researchers at Endgame have learned it is not enough to ship a cybersecurity product; they have to check and double-check it, test it, and think about how adversaries might exploit or evade it. “If an attacker has access to your machine learning model, he can actually ask it ‘What can I do to confuse you the most,’ and the model will tell them.”
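To make that white-box case concrete, the sketch below is a toy illustration (not Endgame’s system) of how an attacker with full access to a simple linear malware classifier can read its learned weights to find the feature change that most lowers the malicious score. The data, features and model here are all synthetic placeholders.

```python
# Toy white-box evasion sketch: with full access to a linear classifier,
# the attacker reads the weights to see where the model is most sensitive.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # toy feature vectors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)        # toy "malicious" labels

clf = LogisticRegression().fit(X, y)

sample = X[y == 1][0]                                # a sample flagged malicious
weights = clf.coef_[0]

# The weight with the largest magnitude marks the feature whose change most
# lowers the malicious score -- the model "tells" the attacker where to push.
most_influential = int(np.argmax(np.abs(weights)))
print("most influential feature:", most_influential)
print("score before:", clf.predict_proba([sample])[0, 1])

evasive = sample.copy()
evasive[most_influential] -= np.sign(weights[most_influential]) * 2.0
print("score after :", clf.predict_proba([evasive])[0, 1])
```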

Thinking like an attacker led Anderson to look at the uncomfortable side of machine learning. At Black Hat, he presented a scenario in which an attacker can not only attack deep learning models he knows everything about, but also attack black-box models he knows nothing about. To demonstrate this, Anderson created a training system with OpenAI’s gym framework that lets researchers simulate a game-like contest between an evasion agent and antivirus engines.
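The released project is built on OpenAI’s gym interface; the snippet below is a stripped-down sketch of what such an environment can look like, not the actual code. The mutation names, the apply_mutation helper and the 256-dimensional observation are placeholders chosen for illustration.

```python
# Sketch of a gym-style evasion environment: each action applies one
# functionality-preserving mutation to a PE file, and the episode ends
# when the detector is fooled or the step budget runs out.
import gym
import numpy as np
from gym import spaces

MUTATIONS = ["append_overlay", "add_section", "add_import", "pack_upx"]  # hypothetical names

def apply_mutation(pe_bytes, name):
    # Placeholder: a real implementation rewrites the PE without breaking it
    # (appending overlay bytes, adding an unused section, new imports, etc.).
    return pe_bytes + b"\x00" * 64

class PEEvasionEnv(gym.Env):
    def __init__(self, pe_bytes, detector_score, extract_features, max_steps=10):
        self.action_space = spaces.Discrete(len(MUTATIONS))
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(256,), dtype=np.float32)
        self.original = pe_bytes
        self.detector_score = detector_score      # black box: returns P(malicious)
        self.extract_features = extract_features  # raw bytes -> feature vector
        self.max_steps = max_steps

    def reset(self):
        self.current = self.original
        self.steps = 0
        return self.extract_features(self.current)

    def step(self, action):
        self.current = apply_mutation(self.current, MUTATIONS[action])
        self.steps += 1
        evaded = self.detector_score(self.current) < 0.5
        reward = 10.0 if evaded else 0.0
        done = evaded or self.steps >= self.max_steps
        return self.extract_features(self.current), reward, done, {}
```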

“We demonstrate how to evade machine learning malware detection by setting up an AI agent to compete against the malware detector that proactively probes it for blind spots that can be exploited. We focus on static Windows PE malware evasion, but the framework is generic and could be extended to other domains,” according to Anderson’s research.
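The “static” in static PE evasion means the detector judges the file from its raw bytes without executing it. A byte-value histogram, shown below as a minimal example, is one of the simplest such static features; real detectors combine many more, such as header fields, imports and section entropy.

```python
# Minimal static feature: a normalized histogram of byte values,
# computed from the raw file contents without running the program.
import numpy as np

def byte_histogram(pe_bytes: bytes) -> np.ndarray:
    counts = np.bincount(np.frombuffer(pe_bytes, dtype=np.uint8), minlength=256)
    return counts / max(len(pe_bytes), 1)   # normalize so file size doesn't dominate
```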

The framework takes a game-like approach: it probes the detection system, learns about it, and figures out how it can be attacked and how an attack can evade it. “Reinforcement learning has produced models that top human performance in a myriad of games. Using similar techniques, our PE malware evasion technique can be framed as a competitive game between our agent and the machine learning model detector. Our agent inspects a PE file and selects a sequence of functionality-preserving mutations to the PE file which best evade the malware detection model. The agent learns, through the experience of thousands of “games” against the detector, which sequence of actions is most likely to result in an [evasive variant]. Then, given any new PE malware that the agent has never before seen, the agent deploys a policy that results in a functionally-equivalent malware variant that has a good chance of evading the opposing machine learning detector.”
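The real framework uses proper reinforcement-learning algorithms; the loop below is only a crude, bandit-style stand-in that shows the shape of the process against the environment sketched above: play many “games” against the detector and keep a running value estimate for each mutation, so the policy gradually favors the actions that evaded most often.

```python
# Crude stand-in for the RL training loop: epsilon-greedy action selection
# with an incremental value update per mutation, played over many episodes.
import numpy as np

def train(env, episodes=1000, epsilon=0.1, lr=0.1):
    q = np.zeros(env.action_space.n)                 # value estimate per mutation
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        env.reset()
        done = False
        while not done:
            if rng.random() < epsilon:
                action = env.action_space.sample()   # explore
            else:
                action = int(np.argmax(q))           # exploit best-known mutation
            _, reward, done, _ = env.step(action)
            q[action] += lr * (reward - q[action])   # move estimate toward reward
    return q   # higher values mark mutations that evaded the detector more often
```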

As part of his research, Anderson is releasing a machine learning malware detector as open source, along with a framework users can use to improve the AI agent, improve the malware, or attack their own models to learn about their weaknesses. “The framework that we’re providing can be readily adapted to attack your own machine learning model. To be clear, there are easier ways to attack your machine learning model since you know everything about it. But this framework represents what we believe to be the most realistic attack that an adversary can launch and that can be used to understand your model’s blind spots,” he said.
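Wiring your own model into a setup like the sketches above amounts to exposing it as a scoring callable and handing it to the environment. Everything below, including the toy detector and the synthetic “PE” bytes, is a placeholder standing in for your real classifier and samples.

```python
# Sketch of "attack your own model": swap the toy scorer for your classifier,
# reuse the environment, feature and training sketches defined above.
import numpy as np

def toy_detector_score(pe_bytes):
    # Placeholder "model": scores by mean byte value, purely so the loop has
    # something to fool; your real model's P(malicious) goes here instead.
    return float(np.frombuffer(pe_bytes, dtype=np.uint8).mean() / 255.0)

pe = bytes(range(256)) * 64              # synthetic stand-in for a real PE file
env = PEEvasionEnv(pe, detector_score=toy_detector_score,
                   extract_features=byte_histogram, max_steps=10)
action_values = train(env, episodes=200)
print("most useful mutation:", MUTATIONS[int(np.argmax(action_values))])
```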

While Anderson explained there is no evidence to suggest adversaries are using artificial intelligence and machine learning in such a sophisticated way to bypass machine learning models, he said it is important to understand these attacks and how to defend against them before a motivated adversary tries to exploit them.

“This research is a bit of a reality check: Knowing absolutely nothing about your malware model or AV engine, an adversary can launch a reinforcement learning attack that learns to bypass it. Evasion rates are low with this proof of concept, but clearly move [from] ‘in theory’ to ‘in practice,’” Anderson said.