Test automation has undergone quite an evolution in the decades since it first became possible. 

Yet despite the obvious benefits of automation, the digitalization of the software development industry has created new challenges.

It comes down to three big things, according to Kevin Parker, vice president of product at Appvance. The first is velocity and how organizations “can keep pace with the rate at which developers are moving fast and improving things, so that when they deliver new code, we can test it and make sure it’s good enough to go on to the next phase in whatever your life cycle is,” he said. 

RELATED CONTENT:
A guide to automated testing tools
Take advantage of AI-augmented software testing

The second area is coverage. Parker said it’s important to understand that enough testing is being done, and being done in the right places, to the right depth. And, he added, “It’s got to be the right kind of testing. If you Google test types, it comes back with several hundred kinds of testing.”

How do you know when you’ve tested enough? “If your experience is anything like mine,” Parker said, “the first bugs that get reported when we put a new release out there, are from when the user goes off the script and does something unexpected, something we didn’t test for. So how do we get ahead of that?”

And the final, and perhaps most important, area is the user interface, as this is where the rubber meets the road for customers and users of the applications. “The user interfaces are becoming so exciting, so revolutionary, and the amount of psychology in the design of user interfaces is breathtaking. But that presents even more challenges now for the automation engineer,” Parker said.

Adoption and challenges

According to a report by Research Nester, the test automation market is expected to grow to more than $108 billion by 2031, up from about $17 billion in 2021. Yet as for uptake, it’s difficult to measure the extent to which organizations are successfully using automated testing.

“I think if you tried to ask anyone, ‘are you doing DevOps? Are you doing Agile?’ Everyone will say yes,” said Jonathan Wright, chief technologist at Keysight, which owns the Eggplant testing software. “And everyone we speak to says, ‘yes, we’re already doing automation.’ And then you dig a little bit deeper, they say, ‘well, we’re running some Selenium, running some RPA, running some Postman scripts.’ So I think, yes, they are doing something.”

Wright said most enterprises that are having success with test automation have invested heavily in it, and have established automation as its own discipline. “They’ve got hundreds of people involved to keep this to a point where they can run thousands of scripts,” he said. But in the same breath, he noted that the conversation around test case optimization and risk-based testing still needs to be had. “Is over-testing a problem?” he posited. “There’s a continuous view that we’re in a bit of a tech crunch at the moment. We’re expected to do more with less, and testing, as always, is one of those areas that have been put under pressure. And now, just saying I’ve got 5,000 scripts kind of means nothing. Why don’t you have 6,000 or 10,000? You have to understand that you’re not just adding a whole stack of tech debt into a regression folder that’s giving you this feel-good feeling that I’m running 5,000 scripts a day, but they’re not actually adding any value because they’re not covering new features.”

RELATED CONTENT:
How Cox Automotive found value in automated testing
Accessibility testing
Training the model for testing

Testing at the speed of DevOps

One effect of the need to release software faster is the ever-increasing reliance on open-source software, which may or may not have been tested fully before being let out into the wild.

Arthur Hicken, chief evangelist at Parasoft, said he believes it’s a little forward thinking to assume that developers aren’t writing code anymore, that they’re simply gluing things together and standing them up. “That’s as forward thinking as the people who presume that AI can generate all your code and all your tests now,” he said. “The interesting thing about this is that your cloud native world is relying on a massive amount of component reuse. The promises are really great. But it’s also a trust assumption that the people who built those pieces did a good job. We don’t yet have certification standards for components that help us understand what the quality of this component is.”

He suggested the industry create a bill of materials that includes testing. “This thing was built according to these standards, whatever they are, and tested and passed. And the more we move toward a world where lots of code is built by people assembling components, the more important it will be that those components are well built, well tested and well understood.”
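
No such certification standard exists today, as Hicken notes, so what follows is purely speculative: a small sketch of the kind of fields a test-aware bill-of-materials entry might record. Every name and value here is invented for illustration.

```python
from dataclasses import dataclass, field

# Speculative sketch only: there is no agreed industry standard for a
# test-inclusive bill of materials. Fields and values are hypothetical.
@dataclass
class ComponentRecord:
    name: str
    version: str
    standards: list[str] = field(default_factory=list)    # standards the builder claims to follow
    test_suites_run: list[str] = field(default_factory=list)
    statement_coverage: float = 0.0                        # 0.0 - 1.0
    all_tests_passed: bool = False

component = ComponentRecord(
    name="example-http-client",        # hypothetical reused component
    version="2.4.1",
    standards=["static analysis", "secure coding guideline scan"],
    test_suites_run=["unit", "integration", "fuzz"],
    statement_coverage=0.87,
    all_tests_passed=True,
)
```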

Appvance’s Parker suggests doing testing as close to code delivery as possible. “If you remember when you went to test automation school, we were always taught that we don’t test the code, we test against the requirements,” he said. “But the modern technologies that we use for test automation require us to have the code handy. Until we actually see the code, we can’t find those [selectors]. So we’ve got to find ways where we can do just that, that is, bring our test automation technology as far left in the development lifecycle as possible. It would be ideal if we had the ability to use the same source that the developers use to be able to write our tests, so that as dev finishes, test finishes, and we’re able to test immediately, and of course, if we use the same source that dev is using, then we will find that Holy Grail and be testing against requirements. So for me, that’s where we have to get to, we have to get to that place where dev and test can work in parallel.”
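
Parker’s point about selectors is easy to see in a conventional UI automation stack: the test cannot locate anything until the delivered markup defines it. Here is a minimal Selenium sketch in Python, assuming a hypothetical login page whose element IDs only exist once the developers have shipped the code.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical application URL and element IDs -- none of these exist
# until the developers deliver the page, which is Parker's point.
driver = webdriver.Chrome()
driver.get("https://example.test/login")

driver.find_element(By.ID, "username").send_keys("demo-user")
driver.find_element(By.ID, "password").send_keys("demo-pass")
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

# The assertion encodes the requirement; the selectors above depend on the code.
assert "Dashboard" in driver.title
driver.quit()
```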

As Parker noted earlier, there are hundreds of types of testing tools on the market – for functional testing, performance testing, UI testing, security testing, and more. And Parasoft’s Hicken pointed out the tension organizations face between specialized, discrete tools and tools that work well together. “In an old-school, traditional environment, you might have an IT department where developers write some tests. And then testers write some tests, even though the developers already wrote tests, and then the performance engineers write some tests, and it’s extremely inefficient. So having performance tools, end-to-end tools, functional tools and unit test tools that understand each other and can talk to each other, certainly is going to improve not just the speed at which you can do things and the amount of effort, but also the collaboration that goes on between the teams, because now the performance team picks up a functional scenario. And they’re just going to enhance it, which means the next time, the functional team gets a better test, and it’s a virtuous circle rather than a vicious one. So I think that having a good platform that does a lot of this can help you.”
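
Hicken’s “virtuous circle” of one team picking up another’s work can be sketched in miniature: a functional scenario written once and then reused, unchanged, by a load test. The endpoints, payloads, and the use of Locust here are illustrative assumptions, not a description of Parasoft’s tooling.

```python
import requests
from locust import HttpUser, task

# A functional checkout scenario written once (URLs and payloads are hypothetical).
def checkout_flow(base_url: str, http=None):
    http = http or requests.Session()
    http.post(f"{base_url}/cart", json={"sku": "ABC-123", "qty": 1}).raise_for_status()
    http.post(f"{base_url}/checkout", json={"payment": "test-card"}).raise_for_status()

# The functional team runs the scenario once per build against a test environment...
def test_checkout():
    checkout_flow("https://staging.example.test")

# ...and the performance team reuses the same scenario under load instead of rewriting it.
class CheckoutUser(HttpUser):
    @task
    def checkout(self):
        checkout_flow(self.host, http=self.client)
```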

Coverage: How much is enough?

Fernando Mattos, director of product marketing at test company mabl, believes that test coverage for the most important flows should come as close to 100% as possible. But determining what those flows are is the hard part, he said. “We have reports within mabl that we try to make easy for our customers to understand. Here are all the different pages that I have on my application. Here’s the complexity of each of those. And here are the tests that have touched on those, the elements on those pages. So at least you can see where you have gaps.”
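
Reports like the one Mattos describes boil down to mapping tests onto the surface they exercise and flagging what is left untouched. This is a simplified sketch of that idea only; the page names and test names are made up, and it is not mabl’s actual report format.

```python
# Hypothetical inventory: the pages in an application, and which pages each test touches.
pages = {"login", "checkout", "search", "account", "admin"}
tests = {
    "test_login_happy_path": {"login"},
    "test_checkout": {"login", "checkout"},
    "test_search_filters": {"search"},
}

# Any page never touched by a test is a coverage gap worth reviewing.
touched = set().union(*tests.values())
gaps = pages - touched
print("Pages with no test coverage:", sorted(gaps))   # -> ['account', 'admin']
```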

It is common practice today for organizations to emphasize thorough testing of the critical pieces of an application, but Mattos said it comes down to balancing the time you have for testing, the quality you’re shooting for, and the risk a bug would introduce.

“If the risk is low, you don’t have time, and it’s better for your business to be introducing new features faster than necessarily having a bug go out that can be fixed relatively quickly… and maybe that’s fine,” he said.

Parker said AI can help with coverage when it comes to testing every conceivable user experience. “The problem there,” he said, “is this word conceivable, because it’s humans conceiving, and our imagination is limited. Whereas with AI, it’s essentially an unlimited resource to follow every potential possible path through the application. And that’s what I was saying earlier about those first bugs that get reported after a new release, when the end user goes off the script. We need to bring AI so that we can not only autonomously generate tests based on what we read in the test cases, but that we can also test things that nobody even thought about testing, so that the delivery of software is as close to being bug free as is technically possible.”
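
One small-scale way to see the “off the script” idea is property-based testing, where a machine rather than a person chooses the inputs. This is not Appvance’s technology, just an illustrative sketch using the Hypothesis library and a toy form-field parser.

```python
from hypothesis import given, strategies as st

# A toy function under test: parse a quantity field from a web form.
def parse_quantity(raw: str) -> int:
    value = int(raw.strip())
    if value < 1 or value > 99:
        raise ValueError("quantity out of range")
    return value

# Hypothesis generates inputs a human script writer might never try:
# empty strings, unicode digits, huge numbers, stray whitespace, and so on.
@given(st.text())
def test_parse_quantity_never_misbehaves(raw):
    try:
        result = parse_quantity(raw)
        assert 1 <= result <= 99
    except ValueError:
        pass  # rejecting bad input is acceptable; any other failure is a bug
```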

Parasoft’s Hicken holds the view that testing without coverage isn’t meaningful.  “If I turn a tool loose and it creates a whole bunch of new tests, is it improving the quality of my testing or just the quantity? We need to have a qualitative analysis and at the moment, coverage gives us one of the better ones. In and of itself, coverage is not a great goal. But the lack of coverage is certainly indicative of insufficient testing. So my pet peeve is that some people say, it’s not how much you test, it’s what you test. No. You need to have as broad code coverage as you can have.”
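
Hicken’s argument is that generated tests should be judged by the coverage they add, not their count. A minimal sketch of gating a run on measured coverage with the coverage.py API follows; the package name and the 80% threshold are illustrative assumptions, not a recommendation.

```python
import coverage
import pytest

cov = coverage.Coverage(source=["my_app"])   # "my_app" is a hypothetical package under test
cov.start()

pytest.main(["tests/"])                      # run whatever suite exists, hand-written or generated

cov.stop()
cov.save()

total = cov.report()                         # prints a per-file report, returns total percentage
if total < 80.0:                             # the floor itself is a team decision, not a universal rule
    raise SystemExit(f"Coverage {total:.1f}% is below the agreed floor")
```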

The all-important user experience

It’s important to have someone creating tests who is very close to the customer and understands the customer journey, but who doesn’t necessarily know anything about writing code, according to mabl’s Mattos. “Unless it’s manual testing, it tends to be technical, requiring writing code and updating test scripts. That’s why we think low code can really be powerful, because it can allow somebody who’s close to the customer but not technical…customer support, customer success. They are not typically the ones who can understand GitHub and code and how to write it and update that – or even understand what was tested. So we think low code can bridge this gap. That’s what we do.”

Where is this all going?

The use of generative AI to write tests is the evolution everyone wants to see, Mattos said. “We’ll get better results by combining human insights. We’re specifically working on AI technology that will allow implementing and creating test scripts, but still using human intellect to understand what is actually important for the user. What’s important for the business? What are those flows, for example, that go to my application on my website, or my mobile app that actually generates revenue?”

“We want to combine that with the machine,” he continued. “So the human understands the customer, the machine can replicate and create several different scenarios that traverse those. But of course, right, lots of companies are investing in allowing the machine to just navigate through your website and find out the different corners, but they weren’t able to prioritize for us. We don’t believe that they’re gonna be able to prioritize which ones are the most important for your company.”

Keysight’s Wright said the company is seeing value in generative AI capabilities. “Is it game changing? Yes. Is it going to get rid of manual testers? Absolutely not. It still requires human intelligence around requirements engineering, feeding in requirements, and then humans identifying that what it’s giving you is trustworthy and is valid. If it suggests that I should test (my application) with every single language and every single country, is it really going to find anything I might do? But in essence, it’s just boundary value testing, it’s not really anything that spectacular and revolutionary.”
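
For readers unfamiliar with the term Wright uses, boundary value testing simply probes just below, at, and just above each threshold in a rule. A small sketch with pytest and an invented discount rule makes the idea concrete; the rule and values are hypothetical.

```python
import pytest

# Toy business rule: orders of 10-99 items get 5% off, 100 or more get 10% off.
def discount(quantity: int) -> float:
    if quantity >= 100:
        return 0.10
    if quantity >= 10:
        return 0.05
    return 0.0

# Boundary value testing: check either side of each threshold, plus the threshold itself.
@pytest.mark.parametrize("quantity,expected", [
    (9, 0.0), (10, 0.05), (11, 0.05),
    (99, 0.05), (100, 0.10), (101, 0.10),
])
def test_discount_boundaries(quantity, expected):
    assert discount(quantity) == expected
```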

Wright said organizations that have dabbled with automation over the years and have had some level of success are now just trying to get that extra 10% to 20% of value from automation, and get wider adoption across the organization. “We’ve seen a shift toward not tools but how do we bring a platform together to help organizations get to that point where they can really leverage all the benefits of automation. And I think a lot of that has been driven by open testing.”

“As easy as it should be to get your test,” he continued, “you should also be able to move that into what’s referred to in some industries as an automation framework, something that’s in a standardized format for reporting purposes. That way, when you start shifting up, and shifting the quality conversation, you can look at metrics. And the shift has gone from how many tests am I running, to what are the business-oriented metrics? What’s the confidence rating? Are we going to hit the deadlines? So we’re seeing a move toward risk-based testing, and really more agility within large-scale enterprises.”
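
The shift Wright describes, from counting tests to business-oriented metrics, presumes results are captured in a standardized, machine-readable form. The sketch below rolls such results up into a crude risk-weighted “confidence rating”; the record fields, weights, and formula are invented for illustration, not a standard from Keysight or anyone else.

```python
from collections import defaultdict

# Hypothetical standardized results: each record tags a test with the business
# flow it covers, a risk weight for that flow, and whether it passed on the latest run.
results = [
    {"flow": "checkout", "risk": 5, "passed": True},
    {"flow": "checkout", "risk": 5, "passed": False},
    {"flow": "search",   "risk": 2, "passed": True},
    {"flow": "profile",  "risk": 1, "passed": True},
]

by_flow = defaultdict(lambda: {"passed": 0, "total": 0, "risk": 0})
for r in results:
    bucket = by_flow[r["flow"]]
    bucket["total"] += 1
    bucket["passed"] += int(r["passed"])
    bucket["risk"] = r["risk"]

# A crude "confidence rating": pass rate per flow, weighted by that flow's risk.
weighted = sum(b["risk"] * b["passed"] / b["total"] for b in by_flow.values())
confidence = weighted / sum(b["risk"] for b in by_flow.values())
print(f"Release confidence: {confidence:.0%}")
```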