Manual testing is still the largest part of the application testing industry, even though functional test automation has been around for more than 25 years. Analyst overviews from firms such as IDC and Gartner support this, showing that roughly 85 to 90 percent of QA testing globally is still manual.
“This would seem to be a disappointment, and the reason is that the automation we’ve had isn’t true automation. What it’s been to date is an attempt to capture what a business analyst says the application should do. It gets written out in a spreadsheet and handed to QA, who usually code it out as an API, UX, or HTTP-level test. The scripts are typically handled by multiple testers who rewrite and debug them. After all this creation and maintenance, you might finally be able to run them,” says Kevin Surace, CEO of Appvance, describing the testing predicament. “Then when you run them you say, ‘This is great, I’ve got 300 scripts, I’ve got 30 percent code coverage.’ Then there’s a new build and all the scripts are broken. You’re back where you started.”
In this scenario, the resources it takes to automate nearly rival the resources needed to test manually; in many cases, it is faster to simply do it manually. The downside of manual testing is that it can produce a significant percentage of “false positives.” Surace breaks quality testing down into unit, functional, regression, performance, load, smoke, and security testing, and argues that false positives arise because testers and QA believe they followed what the analysts wrote when, in fact, they did not. The result is a fair amount of human error. It is painful, time-consuming, and expensive: roughly 35 percent of an enterprise IT budget is spent on testing, according to Capgemini, and 90 percent of that 35 percent goes to script creation and maintenance, which is human time.
Enter AI
Artificial Intelligence is defined as the capability of a machine to imitate intelligent human behavior. How one gets there is the fodder of many conversations, but it is less important than the result. There has been much discussion of applying AI to the analysis of failures and to better image recognition of objects, but none of those activities consumes 90 percent of the team’s time. For AI to be of significant value, it has to be applied to reducing human time. “It’s a fundamental tool of what you first do in AI,” points out Surace. Appvance’s approach to AI has thus been to figure out what humans do and dramatically augment that with AI, because that is where the money is. Surace says, “If we say that the pain point of QA is cost, time, and user-level accuracy, the only way to impact these is to address the elephant in the room, and that is the 90 percent, and that is what we have done.”
AI-driven test automation is here
Silicon Valley-based Appvance was founded in 2012. Its Unified Test Platform (UTP) combines multiple test types with a unique write-once methodology. UTP has been available for about a year, and the new AI capabilities launched in September. The company took a pragmatic approach to AI, building Big Data analysis, expert systems, and machine learning into UTP. Logically, the team reasoned that for automatically generated tests to be impactful, it had to consider what data would be required. At a minimum, these seven areas had to be addressed:
- Knowledge of how users are using an application (ideally) or will use it (user analytics)
- Expected results and/or responses (how do we know a pass versus a fail?)
- How does the application function and what is its purpose?
- How does one form requests that will not be rejected by the server, even if the server is now a different one (for instance, the QA server instead of the production server)? (See the sketch after this list.)
- What data is required for valid forms (such as credentials)?
- How can one create correlations to take server responses and place them back into future requests (such as session IDs)?
- How does one handle changes in a new build versus the old build?
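None of these problems is unique to Appvance. The fourth item, for example, is the familiar challenge of replaying recorded traffic against a different host. A minimal Python sketch of that idea follows; the hostnames, request structure, and replay logic are assumptions for illustration, not drawn from UTP:

```python
# Minimal sketch of re-targeting recorded requests to a different server.
# Hostnames, the recorded-request structure, and the replay logic are
# hypothetical; this is not Appvance's implementation.
from urllib.parse import urlsplit, urlunsplit

import requests

QA_HOST = "qa.example.com"  # hypothetical QA server

def retarget(url: str, new_host: str) -> str:
    """Swap the host in a recorded production URL for the QA host."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, new_host, parts.path, parts.query, parts.fragment))

def replay(recorded_request: dict) -> requests.Response:
    """Replay a recorded request against the QA server, keeping the method,
    headers, and body intact so the server does not reject it."""
    url = retarget(recorded_request["url"], QA_HOST)
    headers = dict(recorded_request.get("headers", {}))
    headers["Host"] = QA_HOST  # the Host header must match the new target
    return requests.request(
        recorded_request["method"], url,
        headers=headers, data=recorded_request.get("body"),
    )

# Example: a request captured against production, replayed against QA.
captured = {"method": "GET", "url": "https://www.example.com/api/items?page=1"}
print(replay(captured).status_code)
```

The point is simply that a recorded production request cannot be replayed verbatim: at minimum, the URL and Host header must be rewritten for the new target.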
Surace admits, “The problem was not a simple one. One must have sources for, or the technology to create, all of this data, then learn from the data, then utilize that knowledge to create new tests. And of course, one must repurpose those use cases into various test types. It is truly a Big Data problem.”
Appvance can ingest server logs, or breadcrumbs, as one source, learning how users behave on production systems and how analysts exercise QA servers. All servers produce logs that are often ignored but are very useful for understanding usage patterns en masse. UTP also uses server log data to understand expected results based on prior results.
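To make the idea concrete, here is a minimal sketch of log mining, assuming a standard Apache/Nginx-style access log; the regular expression and the simple hit count stand in for whatever analysis UTP actually performs:

```python
# A minimal sketch of mining server access logs for usage patterns,
# assuming the common Apache/Nginx log format. The regex and the
# aggregation are illustrative, not Appvance's actual pipeline.
import re
from collections import Counter

LOG_LINE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def usage_counts(log_path: str) -> Counter:
    """Count how often each (method, path) pair appears in an access log."""
    counts: Counter = Counter()
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.search(line)
            if match:
                counts[(match["method"], match["path"])] += 1
    return counts

# The most-hit endpoints suggest the user flows worth testing first.
for (method, path), hits in usage_counts("access.log").most_common(10):
    print(f"{hits:6d}  {method} {path}")
```

Even this crude frequency count surfaces the endpoints real users hit most, which is exactly the usage signal described above.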
The technology uses several methods to better understand how an application works, how to form valid requests, and what responses to expect. One method has a tester click through the application once for each build, which generates the Master Key File. Another, more sophisticated method uses algorithms to do the same. UTP also needs valid test data (such as credentials), supplied by the test team or pulled from a connected database; provided the data is in the right place, the system can make use of it.
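Appvance has not detailed its algorithmic method, but a toy version of the idea, using Selenium to enumerate every clickable element a build exposes, suggests what a Master Key File might capture; the selector strategy and output format here are assumptions:

```python
# A rough sketch of the "algorithmic" alternative: crawl a page and record
# every clickable element, producing something like the Master Key File
# described above. The selectors and output format are assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://qa.example.com")  # hypothetical application under test

# Record a locator for each link, button, and submit control the build exposes.
master_key = []
for element in driver.find_elements(By.CSS_SELECTOR, "a, button, input[type=submit]"):
    master_key.append({
        "tag": element.tag_name,
        "text": element.text.strip(),
        "id": element.get_attribute("id"),
    })

driver.quit()
print(master_key)
```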
To resolve correlation, the system deliberately provokes errors on hidden runs, then searches for substitute values that will pass. Automatic correlation is necessary so that requests sent to the server on subsequent runs are accepted.
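Automatic correlation is a long-standing technique in load testing tools. A minimal sketch of the general pattern (not UTP’s mechanism) pulls a fresh session token out of one response and substitutes it into the next request; the endpoints and field names are hypothetical:

```python
# A minimal sketch of automatic correlation: extract a dynamic value (here
# a hypothetical session ID) from one response and substitute it into the
# next request. Endpoints and JSON field names are assumptions for
# illustration, not Appvance's mechanism.
import requests

BASE = "https://qa.example.com"  # hypothetical QA server

session = requests.Session()

# Step 1: the login response carries a server-generated token.
login = session.post(f"{BASE}/api/login",
                     json={"user": "tester", "password": "secret"})
token = login.json()["session_id"]  # this value changes on every run

# Step 2: a replayed recording would carry a stale token and be rejected;
# substituting the fresh value lets the follow-up request pass.
orders = session.get(f"{BASE}/api/orders",
                     headers={"Authorization": f"Bearer {token}"})
print(orders.status_code)
```

Without the substitution, a replayed recording would present a stale token and the server would reject it, which is exactly the failure mode automatic correlation exists to prevent.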
“The machine learning algorithms take in all this data and generate thousands of valid test scripts (essentially executable code) in a matter of seconds that can be run and analyzed immediately. This can free up manual testers, QA professionals, and developers to focus on tasks less mundane than writing scripts. These scripts better represent what users do and can provide nearly 100 percent code coverage, in seconds rather than weeks,” said Surace.
The AI technology has been in beta use under NDA by several large companies since June. Surace says, “If there’s ever been a complete breakthrough in software QA in my career, this is it. It’s the first blush of how AI is going to impact coding and testing going forward.”