Getting one’s hands on automated tests for the first time is like being given the keys to a Ferrari. And YouTube is chock-full of videos on what happens when someone gets too comfortable too soon in a Ferrari.

Automated tests are fast, but only in the direction you point them. And having a lot of them can easily cause a traffic jam, so it’s important to first make sure they are applied in the right areas and in the right way.

“What I want to achieve is not more and more tests. What I actually want is as few tests as I possibly can because that will minimize the maintenance effort, and still get the kind of risk coverage that I’m looking for,” said Gartner senior director Joachim Herschmann, who is on the App Design and Development team. 


To get started with automated testing, organizations need to first look at where their tests will deliver the most value to avoid test sprawl and to prevent high maintenance costs. 

“The warm, fuzzy feeling that you’ve got a thousand automated tests per week doesn’t really tell you anything from a risk perspective with risk-based testing,” said Arthur Hicken, the chief evangelist at Parasoft. “So I think this kind of approach to doing value-driven automation as to what’s got the most value and what kind of confidence we need, what kind of coverage we need is important.”

Organizations need to factor in what it costs to create a test and what it costs to maintain a test because often the maintenance winds up costing a lot more than the creation. 

One must also factor in what it costs to execute a test in terms of time. With Big Bang releases a couple of times a year, creating tests is not such a big issue, but if a company is used to rolling out weekly updates, as with mobile apps, it’s critical to be able to narrow and focus the automation on exactly the right set of tests.

With a value-driven test automation strategy, organizations can identify full-stack tests that only cover backend business logic and that can be tested more efficiently through API-level integration (or even unit) tests. They can also identify bottlenecks with dependencies that can be virtualized for more efficient testing and automation, according to Broadcom in a blog post.

The testers might also decide not to automate some tests they thought were ideal for automation, because having them performed manually turns out to be more efficient.

Test at the API level

One way to tackle the complexity that comes with automated testing is to test at the API level rather than the UI, according to Hicken.

UI testing, which ensures that an application is performing the right way from the user perspective, is notoriously brittle.

“[UI testing] is certainly the easiest way to get started in the sense that it’s easy to look at a UI and understand what you need to do like start poking things, but at some point, that becomes very hard to continue,” Hicken said. “It’s hard to make boundary cases happen or to simulate error conditions. Also, fundamentally UI testing is the hardest to debug, because you have too much context and it’s the most brittle to maintain.” 

Meanwhile, at the unit level, automated tests are fast to create and execute, and are easy to understand and maintain. After unit testing, one can add the simplest functional tests they have and then backfill with the UI. Now they can make sure that actual business cases and user stories occur, and they can implement these tests against the business logic to get the proper blend of testing, Hicken explained.
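A minimal sketch of that point: the same business rule that is brittle to verify by clicking through a UI is quick to pin down at the unit or API level, including the boundary cases and error conditions Hicken mentions. The `apply_discount` function here is a hypothetical stand-in for business logic, not from any real codebase.

```python
def apply_discount(total: float, code: str) -> float:
    """Apply a discount code to an order total -- illustrative business rule."""
    if total < 0:
        raise ValueError("total must be non-negative")
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    if code not in rates:
        raise KeyError(f"unknown discount code: {code}")
    return round(total * (1 - rates[code]), 2)

# Boundary cases and error conditions -- hard to force through a UI,
# trivial to assert directly against the logic:
assert apply_discount(100.0, "SAVE10") == 90.0
assert apply_discount(0.0, "SAVE25") == 0.0   # boundary: empty order
try:
    apply_discount(-1.0, "SAVE10")            # simulated error condition
except ValueError:
    pass
```

Tests like these are also easy to debug when they fail, because the context is just one function call rather than an entire rendered screen.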

“It’s not really that top-down approach of ‘I see a system and automate that system.’ It’s actually now a bottom-up focus, where people are approaching automation at an enterprise scale and asking what’s the blueprint or pattern that we’re trying to do,” said Jonathon Wright, chief technology evangelist of test automation at Keysight. “It’s incredibly complex states and the devil’s in the details…they’re asking how do you test those things with realistic data rather than a happy path?”

Wright explained that happy path testing just won’t cut it anymore. People test systems upstream and downstream with the same kind of data, and everything works out in a happy-path scenario. Even when people are doing contract testing, where each system is tested end-to-end from an API perspective, they use one user with one account with one something, and then, of course, it works. But this methodology misses the point, according to Wright.

“Because people are testing in isolation, they’re also testing their shim or stub or their service virtualization component using Wireshark, so that they’re not actually testing against the real API. So they exclude a lot of things by just locking them out,” Wright added. 
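One way to read Wright’s objection in code: instead of exercising a contract with the single record that “of course” works, drive the same check with varied, realistic data. The `create_account` and `get_balance` functions below are hypothetical stand-ins for calls against a real API, and the data set is invented for illustration.

```python
def create_account(name: str, opening_balance: int) -> dict:
    """Illustrative stand-in for an account-creation API call."""
    if not name.strip():
        raise ValueError("name must not be blank")
    return {"name": name, "balance": opening_balance}

def get_balance(account: dict) -> int:
    """Illustrative stand-in for a balance-lookup API call."""
    return account["balance"]

# Realistic variation instead of "one user with one account":
# non-ASCII names, zero balances, large values.
cases = [
    ("Alice", 100),
    ("José Álvarez", 0),     # boundary: zero balance
    ("王小明", 10**9),        # non-ASCII name, large value
]
for name, balance in cases:
    acct = create_account(name, balance)
    assert get_balance(acct) == balance
```

The point is not the toy functions but the shape of the test: the same contract check repeated across data that resembles what upstream and downstream systems will actually send.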

Focus on real-user interactions

A good way to set up automated tests is to focus on how real users are interacting with the systems and how those systems actually behave in use.

“It’s quite scary, because obviously, it’s a perception of what the system does versus what the system is actually doing in the live environment and how the customers are using it. You kind of assume that they’re going to use it in a particular way, when actually the behavior will change. And that will change weekly and monthly,” Wright said.

That’s why testers can set up a digital twin of the system as it currently is, and then overlay it with the model of what they thought the system was.

“There’s a different type of behavior mapping; it’s learning from the right hand side this kind of shift right to inform the shift left blueprint model of the system which I think actually helps accelerate everything because you don’t need to create an activity,” Wright added. “You can create it all from real users. You just take their exact journey and then within a matter of minutes, we can actually generate all the automation artifacts with it.”

Teams must then slice the user journeys into smaller, more meaningful pieces and automate against those smaller journeys without going too deep. It’s important that they can automate every click and not merge too many user journeys together in a single test, which results in tests with hundreds of steps, according to Gev Hovsepyan, the head of product at mabl.
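A sketch of that slicing advice, under assumed names: rather than one merged, multi-hundred-step test, each meaningful slice of the journey gets its own short test. The `Session` class is a toy stand-in for a browser or API driver, not a real framework API.

```python
class Session:
    """Toy stand-in for a UI/API session that records each click."""
    def __init__(self):
        self.steps = []

    def click(self, element: str) -> "Session":
        self.steps.append(element)
        return self

def test_search():
    # One small journey: searching, nothing else.
    s = Session()
    s.click("search_box").click("search_button")
    assert s.steps == ["search_box", "search_button"]

def test_add_to_cart():
    # The next slice, independent of search, so a failure
    # points at one journey rather than step 217 of 400.
    s = Session()
    s.click("product_1").click("add_to_cart")
    assert s.steps == ["product_1", "add_to_cart"]

test_search()
test_add_to_cart()
```

Small, independent tests like these also parallelize well, which matters once a suite grows.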

That initial setup of the environment proves to be an interesting discussion between quality engineers and software engineers, and within the organization as a whole. “I think that initial configuration, especially when onboarding the test automation platform, becomes an important discussion point, because the way you set it up is going to define how scalable that approach is,” Hovsepyan said.

The role of service virtualization

The key to unlocking continuous testing is having an available, stable, and controllable test environment. Service virtualization makes it possible to simulate a wide range of constraints in test environments, whether dependencies are unavailable or simply uncontrollable.

The behaviors of various components are mimicked with mock responses that can provide an environment almost identical to a live setting. 

“Service virtualization is an automation tester’s best friend. It can help to resolve roadblocks and allow teams to focus on the tests themselves instead of worrying about whether or not they can get access to a certain environment or third party service,” Amit Bhoraniya, the technical lead at Infostretch, wrote in a blog post.
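A minimal illustration of the idea: the code under test talks to an interface, and a virtualized component mimics the live service’s responses, so tests run without access to the real environment. The names here (`PaymentGateway`, `checkout`, the response strings) are invented for illustration, not from any real product.

```python
class PaymentGateway:
    """The real implementation would call a remote third-party API."""
    def charge(self, amount_cents: int) -> str:
        raise NotImplementedError("live service unavailable in test")

class VirtualGateway(PaymentGateway):
    """Virtualized service: mimics the live component's mock responses."""
    def charge(self, amount_cents: int) -> str:
        return "DECLINED" if amount_cents > 50_000 else "APPROVED"

def checkout(gateway: PaymentGateway, amount_cents: int) -> bool:
    """Code under test: behavior depends only on the gateway's responses."""
    return gateway.charge(amount_cents) == "APPROVED"

# Tests run against the virtual service -- no environment access needed,
# and the declined path can be triggered on demand.
assert checkout(VirtualGateway(), 4_99) is True
assert checkout(VirtualGateway(), 99_999) is False
```

As Wright cautions above, a stub like this is only as good as its fidelity to the real API; virtualized behavior should be derived from recorded traffic rather than optimistic assumptions.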

Organizations can also prevent having too many automated tests by having a unified platform and by ensuring quality earlier on in the pipeline. 

Companies are looking for an approach that helps them not only with functional testing but also with non-functional testing, that scales across different teams on a single platform, and that provides visibility into product quality across teams and testing domains, according to mabl’s Hovsepyan.

A unified approach helps because the responsibilities for testing and quality assurance are often shared within an organization, and that varies based on their DevOps maturity. 

At organizations with more mature DevOps adoption, there is often a quality engineering center of excellence that deploys the practices, and then everyone in the organization participates in assuring quality, including engineers and developers.

At organizations earlier in their DevOps journey, ownership of quality assurance and quality automation sits largely at the team level. These teams have added quality engineers who are responsible for ensuring quality through automation as well as through manual testing.

This collaborative approach to test automation can help ensure that developers and testers both know how these tests should be created and maintained.

“Test automation is one of those things that when it’s done right it’s a huge enabler and can really give your business a boost,” Hicken said. “And when it’s done wrong, it’s an absolute nightmare.”

AI can help with test creation and maintenance

The introduction of AI and ML assistance into automated testing makes it easier to shift quality left, enabling earlier defect remediation and reducing risk for deliveries.

By collecting and incorporating test data, machine learning can effectively update and interpret certain software metrics that show the state of the application under test. Machine learning can also quickly gather information from large amounts of data and point developers or testers right to the performance problem. 

AI is also excellent at finding those one-in-a-million anomalies that testers might just not catch, according to Keysight’s Wright.

In the blog,  “What is Artificial Intelligence in Software Testing?,” Igor Kirilenko, Parasoft’s VP of Development, explains that these AI capabilities “can review the current state of test status, recent code changes, code coverage, and other metrics, decide which tests to run, and then run them,” while machine learning (ML) “can augment the AI by applying algorithms that allow the tool to improve automatically by collecting the copious amounts of data produced by testing.”
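A deliberately simplified, non-ML sketch of the selection idea Kirilenko describes: use recorded coverage data to decide which tests a code change makes worth re-running. Real AI-augmented tools learn this mapping from historical test and change data; the coverage map below is invented purely for illustration.

```python
# Hypothetical coverage data: test name -> source files it exercises.
coverage = {
    "test_login":    {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_profile":  {"auth.py", "profile.py"},
}

def select_tests(changed_files: set) -> list:
    """Return only the tests whose covered files intersect the change set."""
    return sorted(t for t, files in coverage.items() if files & changed_files)

# A change to auth.py re-runs only the two tests that touch it:
assert select_tests({"auth.py"}) == ["test_login", "test_profile"]
assert select_tests({"cart.py"}) == ["test_checkout"]
```

Even this crude heuristic shows why the approach cuts execution time: most changes intersect only a small slice of the suite, and an ML model can refine the mapping as new test results accumulate.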

By 2025, 70% of enterprises will have implemented an active use of AI-augmented testing, up from 5% in 2021, according to Gartner’s “Market Guide for AI-Augmented Software Testing Tools.” Also by 2025, organizations that ignore the opportunity to utilize AI-augmented testing will spend twice as much effort on testing and defect remediation compared with their competitors that take advantage of AI.

AI-augmented software testing tools can provide capabilities for test case and test data generation, test suite optimization and coverage detection, test efficacy and robustness, and much more. 

“AI can change the game here, because even in the decades that we’ve had test automation tools, there’s very little that it offered you regarding any guidance like how do I determine the test cases that I need?” Herschmann said.