The drastic increase in the volume of tests and the speed of software production have necessitated more efficient automated testing to handle repetitive tasks. The growing “shift-left” approach in Agile development processes has also pushed testing much earlier in the application life cycle.
“There is a challenge to testing in the sense that we need to do it more frequently, we need to do it for more complex applications, and we need to do it at a higher scale. This is not feasible without automation, so test automation is a must,” said Gartner senior director Joachim Herschmann, who is on the App Design and Development team.
In fact, last year’s Forrester Wave: Global Continuous Testing Service Providers report found that traditional testing services don’t cut it for many organizations anymore: 20 of 25 reference customers said they are adopting continuous testing (CT) services to support their Agile and DevOps initiatives within a digital transformation journey. Of those CT services, clients say automation is the most impactful and differentiating for delivering better software faster.
Investment in automated testing is expected to rise from $12.6 billion in 2019 to $28.8 billion by 2024, according to a report by B2B research company MarketsandMarkets.com.
The pandemic has also driven the importance of autonomous testing, as many companies realized the primary way to connect with consumers is through apps and digital services, which in turn increased the amount of testing that needs to be done. The situation created a distributed workforce that needed to evolve the way it does testing. “With the effects of COVID, organizations had to execute a two-year plan in two months,” said Mark Lambert, vice president of strategic initiatives at testing solutions provider Parasoft.
The current major shift that has occurred in autonomous testing is that it is no longer primarily driven by code but is actually driven by data, according to Herschmann. Anything that involves AI is driven by data.
These data sources include user stories or requirements that could stem from documents describing what a piece of functionality is expected to do. This requires natural language processing and technologies that can read the document, interpret the intent, and then create a test case.
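As a rough illustration of that idea, the sketch below turns a user story into a test-case skeleton. Real tools use trained NLP models; here a regular expression stands in for intent extraction, and the story format and field names are invented for the example.

```python
import re

def story_to_test_skeleton(story: str) -> dict:
    """Parse 'As a <role>, I want to <action> so that <benefit>' stories
    into a named test skeleton with setup, action, and verification steps."""
    m = re.match(
        r"As an? (?P<role>.+?), I want to (?P<action>.+?) so that (?P<benefit>.+)",
        story.strip(),
        re.IGNORECASE,
    )
    if not m:
        # Intent could not be extracted; keep the raw story for a human.
        return {"name": "test_unparsed_story", "steps": [story]}
    action = m.group("action")
    return {
        "name": "test_" + re.sub(r"\W+", "_", action.lower()).strip("_"),
        "steps": [
            f"log in as {m.group('role')}",
            f"perform: {action}",
            f"verify: {m.group('benefit')}",
        ],
    }

case = story_to_test_skeleton(
    "As a customer, I want to reset my password so that I can regain access."
)
```

A production tool would go further and emit executable test code, but the core pipeline is the same: read the requirement, extract the intent, generate a test.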
Other data points include existing test results in which users can identify patterns in their tests and see what their failure points were before.
Automated testing tools can also scan data or feedback that’s supplied in app stores or even social media to find information that the testers may have missed. “Very often there is a discrepancy between what the project manager envisions about a product versus how it’s used in reality. There’s a gap in testing there and now we can capture that,” said Herschmann.
Tooling can also generate unit tests automatically by looking at GitHub, where there are millions of projects, scanning that code, and training a model on it. “By the way, writing unit tests is a task that developers hate, so if that can be done automatically, that’s great,” Herschmann said.
Test automation also looks at log data such as web server logs or other log files and captures information about how users have used the applications. This can then be used to extract customer journeys and create common test scenarios based on them.
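The log-mining step can be sketched in a few lines. This is an illustrative example, not any specific vendor’s tool: it groups simplified log lines by session and surfaces the most frequent page sequences, which could then seed test scenarios. The log format (one "session path" pair per line) is invented for the example.

```python
from collections import Counter

def common_journeys(log_lines, top_n=2):
    """Return the top_n most frequent page sequences across sessions."""
    sessions = {}
    for line in log_lines:
        session_id, path = line.split()  # assumed "session path" format
        sessions.setdefault(session_id, []).append(path)
    # Count identical journeys (ordered page sequences) across sessions.
    journeys = Counter(tuple(paths) for paths in sessions.values())
    return [list(j) for j, _ in journeys.most_common(top_n)]

logs = [
    "s1 /home", "s1 /search", "s1 /checkout",
    "s2 /home", "s2 /search", "s2 /checkout",
    "s3 /home", "s3 /account",
]
top = common_journeys(logs)
```

Real web server logs would need timestamp-based sessionization and fuzzier sequence matching, but the principle — frequent real-world paths become test scenarios — is the same.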
“We’re for the first time really tapping into these data sources, and we’re using that to enhance test automation. Where it all leads to is we’re finally getting to a point where the full life cycle of testing is actually increasingly automated,” Herschmann said.
As adoption of Agile has increased, more companies have implemented the test automation pyramid strategy, with unit-level testing at the base, where the largest number of automated tests should live, followed by API testing, and lastly UI testing.
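The base of that pyramid looks like the minimal sketch below: fast, isolated unit tests written with Python’s standard unittest module. The function under test, apply_discount, is a made-up example, not something from the article.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # Unit tests run in milliseconds with no browser or network,
    # which is why the pyramid puts the most tests at this layer.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(80.0, 120)
```

API tests sit one layer up (exercising endpoints over HTTP), and UI end-to-end tests sit at the narrow top, because each layer upward is slower and more brittle.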
“There are a lot of excellent open source tools in the market when it comes to unit testing, but UI-based functional end-to-end testing is where there are a lot of challenges,” said Artem Golubev, the co-founder and CEO of testRigor, which offers behavior-based testing software.
Golubev stressed the need for an effective solution in this particular area. “These are difficult in particular because of stability and maintainability and it is difficult for teams to even build tests for this in the first place.”
Automated testing does not eliminate manual testing
Although companies have become increasingly aware of the speed and accuracy that come with autonomous testing, this has not eliminated the need for manual testing at organizations.
“In general humans are really good at the creative domain, domain knowledge workflows, but they’re very bad at repetitive tasks. So if I can point a machine and tell it to go ahead and verify a particular use case, such as looking for specific numbers on a page and making sure they all match, that is a great job for a machine to do. It’s a bad job for a human, because as we really start to have more domain knowledge, those kinds of workflows bore us and we make mistakes,” Parasoft’s Lambert said.
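Lambert’s example — pointing a machine at a page to check that specific numbers all match — is exactly the kind of check that is trivial to automate. The sketch below is a hedged illustration: the page text, its dollar format, and the assumption that line items precede a stated total are all invented for the example.

```python
import re

def totals_match(page_text: str) -> bool:
    """Verify that the line-item amounts on a page sum to the stated total.
    Assumes dollar amounts appear in order, with the total listed last."""
    amounts = [float(x) for x in re.findall(r"\$([\d.]+)", page_text)]
    *items, stated_total = amounts
    # Tolerate sub-cent floating-point noise.
    return abs(sum(items) - stated_total) < 0.01

page = "Item A: $19.99  Item B: $5.01  Order total: $25.00"
ok = totals_match(page)
```

A human tester running this check by hand across hundreds of pages would get bored and slip; the machine runs it identically every build.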
Meanwhile, people add value in understanding how the application should be used and the problem that the application is trying to solve. Manual testing is a very valuable part of the process, Lambert explained.
Testing teams can also focus more on maintaining test scripts and increasing total test coverage. This has put some of the responsibility onto developers who are now working alongside testers to create test automation frameworks.
The expansion of AI in test automation has led to tremendous benefits in test stability, maintainability, and the ability to generate tests, according to Golubev.
“In cases of bot-based generated tests, it’s the AI that guides the bot through your application in order to be able to build proper end-to-end tests out of the box. There are also machine learning-based models that automatically assess if your page is rendered properly from an end user’s perspective,” Golubev said.
Golubev noted that AI will not be able to replace humans in the near future when it comes to testing.
“There is no such thing as overarching AI, and there won’t be in the next 20 years. With the current models and how they work in 2020, the compute is just not there,” said Golubev.
Test automation drivers
Lambert said that there are three primary use cases that are driving the adoption and application of test automation: compliance, the need to accelerate delivery, and the reduction of operational outages.
“First, compliance is one of those things that’s non-negotiable, and it really is a bottleneck at the end of the delivery pipeline,” Lambert said. “Whether it’s for PII, GDPR, PCI, or countless other regulations, the organizations that implement compliance in an automated manner are the organizations that really succeed in delivering on the second important use case: accelerating delivery.”
However, accelerating delivery is not just about the quantity of tests put out in the shortest period of time. This phase primarily has to be about focusing on the quality of automated tests.
“It’s not just about the level of test automation that’s the biggest problem. The biggest problem is actually a commitment to quality or a quality-first approach within organizations,” said Lambert.
“What we have seen is that management that makes a commitment can significantly reduce the number of outages that they have and accelerate delivery with confidence.”
The third major driver of automated testing focuses on eliminating production outages by doing continuous verification and validation throughout the process.
“If you’re just accelerating and not worrying about quality, that might work for the first release, maybe the second release iteration, but certainly if you don’t have that in place, and if you don’t have the testing to check, you’re going to start failing as you move forward,” Lambert added. “If you build quality into your accelerated delivery process, then you could deliver with confidence and make sure you don’t have those production outages.”
When beginning with test automation, organizations not only have to figure out how to create their tests but also identify what to automate, because not everything can be automated, according to Lambert. Then, organizations need practices and technologies to help them with the creation process.
While many organizations getting started with test automation tend to look for the simplest approach by looking for tools that are easy to use and that can be plugged into the pipeline, Lambert said that it is best to think long term.
“One thing you have to look at is how is that going to scale? So a technology that you’re bringing in, or a capability that you’re bringing in might satisfy the use case that you have today, but is it going to satisfy the use case in six months time when you start expanding out to additional use cases or additional applications in your organization?” Lambert said.
Once the tests are created, organizations then have to consider how to maintain their tests.
“Say I get up and running and everything starts rolling great. And then the next sprint starts and that next sprint is not actually only introducing new functionality. It’s actually making changes to existing functionality. So my tests need to be maintained along with the underlying code and capabilities of the underlying application,” Lambert said.
This is where testing functionalities such as self-healing come in to make sure that everything doesn’t collapse in the middle of a sprint. This functionality stops the continuous integration process from failing, and also gives users ways of easily refactoring existing test cases so that they don’t have to throw them away and start again.
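The self-healing idea can be sketched simply: when a test’s primary UI locator no longer matches after an application change, the framework falls back to alternate locators instead of failing the CI run. The page model (a dict) and the locator strings below are invented for illustration; real tools work against a live DOM and rank fallbacks with heuristics or ML.

```python
def find_element(page: dict, locators: list):
    """Try each locator in priority order; return (element, locator_used)."""
    for locator in locators:
        if locator in page:
            return page[locator], locator
    raise LookupError(f"no locator matched: {locators}")

# A sprint renamed the button's id; the healed lookup still succeeds.
old_page = {"#submit-btn": "Submit"}
new_page = {"#order-submit": "Submit"}
locators = ["#submit-btn", "#order-submit", "button[type=submit]"]

element, used = find_element(new_page, locators)
```

A self-healing framework would additionally report which fallback it used, so the team can refactor the test to the new locator rather than silently drifting.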
“As I’m moving further through my development life cycle, my number of tests grow, and this is where test execution becomes critical. So you have to start looking at your test suites and saying, okay, what capabilities are available for me that can optimize my test execution to focus on the key business risks and optimize my test suites. This is so that I can get rapid feedback inside of a sprint and can continue accelerated delivery from sprint to sprint,” said Lambert.
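One common way to optimize test execution as Lambert describes is change-based test selection: run only the tests whose covered code intersects the current change set. The coverage map and file names below are invented for the example; real implementations derive the map from coverage instrumentation.

```python
# Hypothetical map from test name to the source files it exercises.
coverage_map = {
    "test_checkout": {"cart.py", "payment.py"},
    "test_search": {"search.py"},
    "test_login": {"auth.py"},
}

def select_tests(changed_files: set, coverage: dict) -> list:
    """Pick tests whose covered files overlap the changed files."""
    return sorted(t for t, files in coverage.items() if files & changed_files)

# Only the checkout test touches payment.py, so only it needs to run.
to_run = select_tests({"payment.py"}, coverage_map)
```

This keeps feedback inside a sprint fast while a full suite can still run nightly as a safety net.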
Other key functionalities to accelerate execution and feedback include traceability, which ensures that the verification and validation of the product is complete. Also important is integration with the CI/CD pipeline.
“What I want to achieve is not more and more tests. What I actually want is as few tests as I possibly can because that will minimize the maintenance effort, and still get the kind of risk coverage that I’m looking for,” Gartner’s Herschmann said.
Herschmann explained that while test automation solves one problem, it can create another if not utilized properly.
“I’m doing everything manually and I’m using automation now to accelerate that. Well that solves my problem of not able to run enough tests,” Herschmann said. “The new problem that I’ve potentially created now is that with all the tests that I run, I can no longer actually look at all of the test results and make sense of what I’m seeing here. So that’s why the test insights part, as an example, is now becoming the focus. I need something that helps me to do this in an automated fashion so that the result is that now I’m notified of the specific instances of where a test has failed or the patterns of where they have failed.”