A couple of years ago, there was a lot of hype about using AI and machine learning (ML) in testing, but not a lot to show for it. Today, many options deliver real benefits, not least a reduction in the time and cost of testing. However, a hands-on evaluation may be sobering.

For example, Nate Custer, senior manager at test automation consultancy TTC Global, has been researching autonomous testing tools for about a year. When he started the project, he was new to the company and a client had recently inquired about options. The first goal was to build a method for evaluating how effective the tools actually were.

“The number one issue in testing is test maintenance. That’s what people struggle with the most. The basic idea is that you automate tests to save a little bit of time over and over again. When you test lots of times, you only run tests if the software’s changed, because if the software changes, the test may need to change,” said Custer. “So, when I first evaluate stuff, I care about how fast I can create tests, how much can I automate and the maintenance of those testing projects.”


Custer’s job was to show how and where different tools could and could not make an impact. His research left him optimistic, but skeptical.

There’s a lot of potential, but…
Based on first-hand research, Custer believes there are several areas where AI and ML could have a positive impact. At the top of the list is test selection: deciding which tests to run across everything in an enterprise environment, not just web and mobile apps.

“If I want to change my tools from this to that, the new tool has to handle everything in the environment. That’s the first hurdle,” said Custer. “But what tests to run based on this change can be independent from the platform you use to execute your test automation, and so I think that’s the first place where you’re going to see a breakthrough of AI in the enterprise. Here’s what’s changed, which tests should I run? Because if I can run 10% of my tests and get the same benefit in terms of risk management, that’s a huge win.”
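The selection logic Custer describes can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the coverage map, test names and file names below are all hypothetical, and a real tool would mine them from coverage runs and version control rather than hard-coding them.

```python
# Minimal sketch of change-based test selection (test impact analysis).
# All names here are hypothetical; a real tool would derive the coverage
# map from instrumented test runs and the changed files from the VCS diff.

# Which source modules each test exercises.
COVERAGE_MAP = {
    "test_checkout": {"cart.py", "payment.py"},
    "test_login": {"auth.py"},
    "test_reporting": {"reports.py", "cart.py"},
}

def select_tests(changed_files: set[str]) -> list[str]:
    """Return only the tests that touch at least one changed file."""
    return sorted(
        test for test, covered in COVERAGE_MAP.items()
        if covered & changed_files  # non-empty intersection = affected test
    )

if __name__ == "__main__":
    # A commit that only touched cart.py triggers two of the three tests.
    print(select_tests({"cart.py"}))  # ['test_checkout', 'test_reporting']
```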

The second area of promise is surfacing log differences: if a test that should take 30 seconds suddenly takes 90 seconds, the tool might flag the delay as a likely performance issue.

“Testing creates a lot of information and logs and AI/ML tools are pretty good at spotting things that are out of the ordinary,” said Custer. 
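A toy version of that idea is to compare the latest run time against the historical spread and flag outliers. The durations and threshold below are hypothetical; real tools apply the same principle across much richer log data.

```python
import statistics

# Hypothetical history of runtimes (seconds) for one test across past runs.
history = [29.8, 30.4, 31.1, 30.0, 29.5, 30.7]

def is_anomalous(latest: float, past: list[float], threshold: float = 3.0) -> bool:
    """Flag a run whose duration sits far outside the historical spread."""
    mean = statistics.mean(past)
    stdev = statistics.stdev(past) or 1e-9  # avoid divide-by-zero on flat history
    return abs(latest - mean) / stdev > threshold

print(is_anomalous(90.0, history))  # True: the 90-second run is an outlier
print(is_anomalous(30.2, history))  # False: within the normal range
```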

The third area is test generation using synthetic test data, which can be faster, cheaper and less risky to use than production data.

“I’m at a company right now that does a lot of credit card processing. I need profiles of customers doing the same number of transactions, the same number of cards per household that I would see in production. But I don’t want a copy of the production data because that’s a lot of important information,” said Custer.
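In code, the goal is to generate records whose aggregate shape matches production without copying any real customer. A minimal sketch follows; every distribution parameter is hypothetical, and in practice they would be measured from aggregate production statistics, never from raw records.

```python
import random

# Hypothetical production-like distribution parameters (assumptions, not
# real figures): how many cards a household holds, and how often.
CARDS_PER_HOUSEHOLD = [1, 2, 3]
CARD_WEIGHTS = [0.5, 0.35, 0.15]
MEAN_MONTHLY_TRANSACTIONS = 42

def synthetic_customer(customer_id: int) -> dict:
    """Build a fake customer whose shape matches production, with no real PII."""
    return {
        "id": customer_id,
        "name": f"customer-{customer_id}",  # placeholder, never a real name
        "cards": random.choices(CARDS_PER_HOUSEHOLD, CARD_WEIGHTS)[0],
        # Transaction counts approximated with a clamped normal draw.
        "monthly_transactions": max(0, round(random.gauss(MEAN_MONTHLY_TRANSACTIONS, 8))),
    }

# A thousand realistic-looking profiles, none of them traceable to production.
profiles = [synthetic_customer(i) for i in range(1000)]
```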

Self-healing capabilities showed potential, although Custer wasn’t impressed with the results.

“Everything it healed already worked. So, you haven’t really changed maintenance. When a change is big enough to break my automation, the AI tool had a hard time fixing it,” said Custer. “It would surface really weird things. So, that to me is a little longer-term work for most enterprise applications.”
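The pattern behind self-healing, and why it breaks down the way Custer observed, can be shown in a framework-agnostic sketch: try the primary locator, fall back to recorded alternates, and give up when nothing matches. The locator strings and the toy page model are hypothetical stand-ins for a real UI driver.

```python
# Sketch of a "self-healing" element lookup. When the change is small
# (a renamed id), a fallback locator saves the test; when the change is
# big enough that no candidate matches, healing fails, as Custer found.

class ElementNotFound(Exception):
    pass

def find_element(page: dict, locator: str):
    """Toy lookup: the page is just a dict of locator -> element."""
    if locator not in page:
        raise ElementNotFound(locator)
    return page[locator]

def find_with_healing(page: dict, primary: str, fallbacks: list[str]):
    try:
        return find_element(page, primary)
    except ElementNotFound:
        for alt in fallbacks:
            try:
                element = find_element(page, alt)
                # A real tool would persist this so the script gets updated.
                print(f"healed: {primary} -> {alt}")
                return element
            except ElementNotFound:
                continue
        raise  # nothing matched; the change is too big to heal automatically

page = {"css=#submit-btn": "<button>"}  # the id changed from 'save' to 'submit-btn'
find_with_healing(page, "css=#save", ["css=#submit-btn", "text=Save"])
```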

Are we there yet?
“Are We There Yet?” was the title of Custer’s research project and his conclusion is that autonomous testing isn’t ready for prime time in an enterprise environment.

“I’m not seeing anything I would recommend using for an enterprise customer yet, and the tools that I’ve tested didn’t perform any better. My method was to start with a three-year-old version of software, write some test cases, automate them, go through three years of upgrades and pay attention to the maintenance it took to do those upgrades,” said Custer. “When I did that, I found it didn’t save any maintenance time at all. Everybody’s talking about [AI], everyone’s working on it, but there are some of them I’m suspicious about.”

For example, one company requested the test script so it could parse the script to understand it. When Custer asked how long that would take, the company said two or three hours. Another company said it would take two or three months to generate a logical map of a program.

“[T]hat doesn’t sound different from hiring a consultant to write your testing. AI/ML stuff has to actually make life easier and better,” said Custer.

Another disappointment was the lack of support for enterprise applications such as SAP and Oracle eBusiness Suite. 

“There are serious limitations on what technologies they support. If I were writing my own little startup web application, I would look at these tools. But if I were a Fortune 500 company, I think it’s going to take them a couple of years to get there,” said Custer. “The challenge is most of these companies aren’t selling a little add-on that you can add into your existing system. They’re saying change everything from one tool that works to my thing and that’s a huge risk.”