EMA (Enterprise Management Associates) recently released a report titled “Disrupting the Economics of Software Testing Through AI.” In this report, author Torsten Volk, managing research director at EMA, discusses the reasons why traditional approaches to software quality cannot scale to meet the needs of modern software delivery. He highlights five key categories of AI and six critical pain points of test automation that AI addresses.
We sat down with Torsten and talked about the report and his insights into the impact that AI is having in Software Testing:
Q: What’s wrong with the current state of testing? Why do we need AI?
Organizations that rely on traditional testing tools and techniques cannot scale to meet today's digital demands and are quickly falling behind their competitors. Because of increasing application complexity and the business's time-to-market pressures, software delivery teams struggle to keep up. There is a growing need to optimize the process with AI, which can root out mundane, repetitive tasks and rein in the cost of quality, which has gotten out of control.
Q: How can AI help and with what?
There are five key capabilities where AI can help: smart crawling/Natural Language Processing (NLP) driven test creation, self-healing, coverage detection, anomaly detection, and visual inspection. The report I wrote highlights six critical pain points where these capabilities can help: false positives, test maintenance, inefficient feedback loops, rising application complexity, device sprawl, and tool chain complexity.
Leading organizations have already adopted some level of self-healing and AI-driven test creation, but by far the most impactful capability is Visual Inspection (or Visual AI), which provides complete and accurate coverage of the user experience. It is able to learn and adapt to new situations without the need to write and maintain code-based rules.
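To make the self-healing idea concrete, here is a minimal sketch in Python. It is not any vendor's actual implementation; the element model, attribute names, and `find_element` helper are all illustrative. The core idea is simply that a test records multiple ways of locating an element, so when the primary locator breaks, a fallback "heals" the test instead of producing a failure.

```python
# Minimal sketch of self-healing locators (illustrative only, not a
# real vendor API). When the primary locator no longer matches, fall
# back to alternative attributes recorded from earlier runs.

def find_element(dom, locators):
    """Try each recorded locator in order; return the first match."""
    for attr, value in locators:
        for element in dom:
            if element.get(attr) == value:
                return element
    return None

# A page snapshot where a developer renamed the element's id but
# left its visible label intact.
dom = [
    {"id": "submit-btn-v2", "label": "Submit", "tag": "button"},
]

# Locators captured when the test was first recorded: the primary id
# first, then fallbacks that let the test "heal" after the id change.
locators = [("id", "submit-btn"), ("label", "Submit")]

element = find_element(dom, locators)
print(element["tag"])  # the fallback locator still finds the button
```

A real system would go further, re-ranking and re-recording locators based on which ones succeed over time, but the fallback chain above is the essence of the technique.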
Q: Are people adopting AI?
Yes, AI adoption is on the rise for many reasons, but to my mind it's not so much that people are adopting AI itself; they're adopting the technical capabilities that are built on AI. For example, people want NLP-based test automation for a specific use case. People are more interested in the ROI gained from the speed and scalability of leveraging AI in the development process than in how the sausage is made.
Q: How does the role of the developer / tester change with the implementation of AI?
When you look at test automation, developers and testers need to decide what belongs under test automation and how it should be categorized. Then all you need to do is set the framework within which the AI operates and provide it with feedback so it continuously improves over time.
Once this happens, developers and testers are freed up to do more creative, interesting and valuable work by eliminating the toil of mundane or repetitive work – the work that isn’t valuable in and of itself but has to be done correctly every time.
Take reviewing thousands of webpage renderings, for example. Some of them have small differences, but those differences don't matter. If the machine can filter out all of the ones that don't matter and highlight just the few that may or may not be a defect, my work shrinks from thousands of pages to a small handful.
Auto-classification is a great example of reducing your work. When you reduce repetitive work, you stop missing things. If I'm looking at what appears to be the same page over and over, I might miss something. But if the AI can tell me that this one page is slightly different from the others I've been looking at, and why, it eliminates repetitive, mundane tasks and reduces the chance of error-prone outcomes.
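The filtering step described above can be sketched in a few lines of Python. This is not Applitools' actual algorithm; the pixel model and the `tolerance` threshold are simplifying assumptions. The point is only to show how a machine can discard negligible rendering differences and surface the handful that warrant human review.

```python
# Illustrative sketch of auto-classifying page renderings (assumed
# pixel model and threshold, not a real visual-AI algorithm): flag
# only renderings whose difference from a baseline exceeds a
# tolerance, filtering out negligible differences a human would
# otherwise review by hand.

def diff_ratio(baseline, rendering):
    """Fraction of pixels that differ between two equal-sized renderings."""
    changed = sum(1 for a, b in zip(baseline, rendering) if a != b)
    return changed / len(baseline)

def classify(baseline, renderings, tolerance=0.01):
    """Return indices of renderings that need human review."""
    return [i for i, r in enumerate(renderings)
            if diff_ratio(baseline, r) > tolerance]

baseline = [0] * 1000                # a 1000-pixel "page"
ok = baseline[:]                     # identical rendering
noisy = baseline[:]; noisy[0] = 1    # one-pixel antialiasing blip
broken = [1] * 1000                  # visibly different page

suspects = classify(baseline, [ok, noisy, broken])
print(suspects)  # only the visibly different rendering is flagged
```

A production visual-AI system compares perceptual features rather than raw pixels, which is what lets it ignore rendering noise while still catching layout and content defects, but the triage pattern is the same.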
Q: Do I need to hire AI experts or develop an internal AI practice?
The short answer is no. There are plenty of vendor solutions available that let you take advantage of AI, machine learning, and training data already in place.
If you want to implement AI yourself, you actually need people with two sets of domain knowledge: first, knowledge of the domain where you want to apply AI, and second, a deep understanding of what AI makes possible and how to chain those capabilities together. Oftentimes, that combination is too expensive and too rare.
If your core deliverable is not the AI itself but the ROI the AI can produce, then it's much better to find a tool or service that does it for you, allowing you to focus on your own domain expertise. This makes life much easier, because a company will have many people who understand that domain and only a small handful who understand AI.
Q: You talk about the Visual Inspection capability being the highest impact – how does that help?
Training deep learning models to inspect an application through the eyes of the end user is critical to removing a lot of the mundane repetitive tasks that cause humans to be inefficient.
Smart crawling, self-healing, anomaly detection, and coverage detection are each point solutions that help organizations lower their risk of blind spots while decreasing human workload. But visual inspection goes further by aiming to understand application workflows and business requirements.
Q: Where should I start today? Can I integrate AI into my existing Test Automation practice?
Yes. Applitools Visual AI is one example of a capability that can be integrated into an existing test automation practice.
Q: What’s the future state?
Autonomous testing is the vision for the future, but we have to ask ourselves: why don't we have an autonomous car yet? It's because today we're still chaining together models, and models of models. Ultimately, where we're striving to get to is a state where AI takes care of all the tactical, repetitive decisions, and humans think more strategically at the end of the process, where they are more valuable from a business-focused perspective.
Thanks to Torsten for spending the time with us. If you are interested, you can read the full report at http://applitools.info/sdtimes.