The application of artificial intelligence (AI) and machine learning (ML) in software testing is both lauded and maligned, depending on whom you ask. It is a development that strikes notes of both fear and optimism in its intended users. But one thing is certain: the AI revolution is coming our way, and when you weigh the benefits in speed and efficiency, that is a good thing. So how can we embrace AI, prepare to integrate it into our workflow, and at the same time address the concerns of those inclined to distrust it?

Speed bumps on the road to Trustville

Much of the resistance to implementing AI in software testing comes down to two factors: a rational fear for personal job security and a healthy skepticism about whether AI can perform tasks contextually as well as humans can. That skepticism is largely rooted in the limitations observed in early applications of the technology.

To further promote the adoption of AI in our industry, we must assuage the fears and disarm the skeptics by setting reasonable expectations and emphasizing the benefits. Fortunately, as AI becomes more mainstream — a direct result of improvements in its abilities — a clearer picture has emerged of what AI and ML can do for software testers; one that is more realistic and less encumbered by marketing hype.

First things first: Don’t panic

Here’s the good news: the AI bots are not coming for our jobs. For as long as there have been AI and automation testing tools, there have been dystopian nightmares about humans losing their place in the world. Equally prevalent are the naysayers who scoff at such doomsday scenarios as being little more than the whims of science fiction writers.

The sooner we treat AI as just another useful tool, the sooner we can start reaping its benefits. Just as the invention of the electric screwdriver did not eliminate the need for workers to fasten screws, AI will not eliminate the need for engineers to author, edit, schedule and monitor test scripts. But it can help them perform these tasks faster, more efficiently, and with fewer distractions.

Autonomous software testing is simply more realistic — and more practical — when viewed in the context of AI working in tandem with humans. People will remain central to software development because they are the ones who define the boundaries and potential of their software. The nature of software testing means the goal posts are always shifting: business requirements are often unclear and constantly changing. That variability demands continued human oversight.

The early standards and methodologies for software testing (including the term “quality assurance”) come from the world of manufacturing product testing. In that context, products were well defined and testing was far more mechanistic than it can ever be for software, whose traits are malleable and constantly changing. Software simply does not lend itself to such uniform, robotic methods of assuring quality.

In modern software development, there is much that developers cannot know in advance. There are too many changing variables in the development of software, and they require a higher level of decision-making than AI can provide. And yet, while fully autonomous AI is unrealistic for the foreseeable future, AI that supports and extends human efforts at software quality is still a very worthwhile pursuit. Keeping human testers in the mix to continually monitor, correct, and teach the AI will result in a steadily improving software product.

The three stages of AI in software testing

AI for software testing has essentially three stages of maturity:

  • Operational Testing AI
  • Process Testing AI
  • Systemic Testing AI

Most AI-enabled software testing is currently performed at the operational stage. Operational AI involves creating scripts that mimic the routines human testers perform hundreds of times. Process AI is a more mature stage in which testers use AI for test generation. Other uses include test coverage analysis and recommendations, defect root-cause analysis, effort estimation, and test environment optimization. Process AI can also facilitate the creation of synthetic data based on observed patterns and usage.
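To make the Process stage concrete, here is a minimal sketch, in plain Python, of pattern-based synthetic test data generation of the kind a Process AI tool might automate at far greater scale. The field names, value ranges, and patterns are hypothetical rather than taken from any particular tool.

```python
import random
import string

# Hypothetical illustration: generate synthetic user records that follow the
# same patterns (field shapes, value ranges) seen in production data, without
# copying any real customer information.

def synthetic_email() -> str:
    """Build an email address matching a common 'name.surname@domain' pattern."""
    name = "".join(random.choices(string.ascii_lowercase, k=random.randint(4, 8)))
    surname = "".join(random.choices(string.ascii_lowercase, k=random.randint(4, 10)))
    domain = random.choice(["example.com", "test.org", "mail.net"])
    return f"{name}.{surname}@{domain}"

def synthetic_user() -> dict:
    """Assemble one synthetic record shaped like a real user profile."""
    return {
        "email": synthetic_email(),
        "age": random.randint(18, 90),          # matches an assumed age range
        "plan": random.choice(["free", "pro", "enterprise"]),
        "active": random.random() < 0.8,        # assume ~80% of users are active
    }

if __name__ == "__main__":
    # Generate a small batch of records to seed a test environment.
    for record in (synthetic_user() for _ in range(5)):
        print(record)
```

A real Process AI tool would infer these patterns from observed data rather than have them hand-coded, but the output it feeds into the test environment looks much like this.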

The third stage, Systemic AI, is the least tenable of the three owing to the enormous volume of training it would require. Testers can be reasonably confident that Process AI will suggest a sensible test for a single feature or function. With Systemic AI, however, testers cannot know with high confidence that the software will meet all requirements in all situations. AI at this level would have to test for all conceivable requirements, even those no human has imagined. Reviewing such an autonomous AI's assumptions and conclusions would be so enormous a task that it would defeat the purpose of pursuing full autonomy in the first place.

Set realistic expectations

After clarifying what AI can and cannot do, define what you expect from those who will use it. Setting clear goals early on will prepare your team for success. When AI tools are introduced into a testing program, the rollout should be treated as a software project in its own right, with the full support of management and well-defined goals and milestones. Offering an automated platform as an optional tool for testers to explore at their leisure is a setup for failure. Without a clear directive from management and a finite timeline, it is all too easy for the project to never get off the ground. Give the project a mandate and you will be well on your way to a successful implementation. Be clear about who is on the team, what their roles are, and how they are expected to collaborate. That also means specifying what outcomes are expected and from whom.

Accentuate the positive

Particularly in agile environments, where software development is a team sport, AI benefits not only testers but everyone on the development team. Give testers a stake in the project and let them analyze the functionality and benefits for themselves. That agency will build confidence in their use of the tools and help convince them that AI is a means of augmenting their abilities and preparing them for the future.

Remind your team that as software evolves, it requires more scripts and new approaches to test added features, new usage patterns, and platform integrations. Automated testing is not a one-time event. Even with machine learning assisting in repairing scripts, there will always be opportunities to develop the test program further in pursuit of greater coverage and higher levels of security and quality. Even with test scripts approaching 100 percent code coverage, there will be new releases, new bug fixes, and new features to test. The role of the test engineer is not going anywhere; it is just evolving.

Freedom from the mundane

It is no secret that software test engineers are often burdened with a litany of mundane tasks. To be effective, testing programs must audit software functionality, performance, security, look and feel, and more, in incrementally differing variations and at volume. Writing these variations is repetitive, painstaking, and, to many, boring. By starting with this low-hanging fruit (the mundane, resource-intensive aspects of testing), you can score some early wins and gradually convince the skeptics of the value of AI testing tools.
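As an illustration of how repetitive those variations get, here is a minimal, hypothetical pytest example. The function under test and its cases are invented, but the shape of the suite, dozens of near-identical input and expectation pairs, is exactly the kind of low-hanging fruit an AI assistant can generate and maintain.

```python
import pytest

# Hypothetical function under test; imagine dozens of near-identical cases
# like the ones below spread across a real suite.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Each tuple is one mundane variation: same logic, slightly different inputs.
@pytest.mark.parametrize(
    "price, percent, expected",
    [
        (100.0, 0, 100.0),     # no discount
        (100.0, 10, 90.0),     # typical discount
        (100.0, 100, 0.0),     # full discount
        (19.99, 15, 16.99),    # rounding behaviour
    ],
)
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(50.0, 150)
```

Generating, extending, and pruning tables of cases like this is tedious for a human and trivial for a tool, which is why it makes a sensible first target.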

Converting skeptics won’t happen overnight. If you overwhelm your team by imposing sweeping changes, you may be setting yourself up for failure. Adding AI-assisted automation to your test program greatly reduces the load of repetitive tasks and frees test engineers to focus on new interests and skills.

For example, one area where automated tests frequently fail is in identifying objects within a user interface (UI). AI tools can identify these objects quickly and accurately, making test scripts markedly more reliable. By focusing on such operational efficiencies, you can make a strong case for embracing AI. When test engineers spend less time on routine debugging and more time on strategy and coverage, they naturally become better at their jobs. And when they are better at their jobs, they will be more inclined to embrace the technology.
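To illustrate the idea (not any particular vendor's implementation), here is a minimal Selenium sketch of the fallback-locator pattern that "self-healing" AI locators automate and extend. The element attributes and URL are hypothetical; the Selenium calls themselves are standard.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

# A brittle test typically pins an element to one fragile locator, such as an
# absolute XPath that breaks whenever the page layout shifts. A simple
# mitigation, and loosely what self-healing AI locators do automatically, is
# to try several independent locator strategies in order of robustness.
# The attribute values below are hypothetical.
FALLBACK_LOCATORS = [
    (By.ID, "checkout-button"),                           # most stable, if present
    (By.CSS_SELECTOR, "[data-testid='checkout']"),        # dedicated test hook
    (By.XPATH, "//button[normalize-space()='Checkout']"), # visible text
]

def find_checkout_button(driver):
    """Return the checkout button using the first locator that still matches."""
    for by, value in FALLBACK_LOCATORS:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue
    raise NoSuchElementException("No locator matched the checkout button")

if __name__ == "__main__":
    driver = webdriver.Chrome()
    driver.get("https://example.com/cart")  # placeholder URL
    find_checkout_button(driver).click()
    driver.quit()
```

AI-based tools go further by learning which attributes of an element are stable over time, but the payoff is the same: fewer test failures caused by cosmetic UI changes rather than real defects.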

In the end, AI is only as useful as the way it is applied. It is not an instantaneous solution to all our problems. We need to acknowledge what it does right and what it does better than we do, and then let it help us do our jobs better. With that mindset, test engineers can find a very powerful partner in AI, and they will be far more likely to accept it into their workflow.