Despite all the changes automated software testing has undergone in recent years, Forrester data shows it still has some way to go toward accelerating delivery of value and quality to the business.

However, while test automation coverage saw a notable dip during the pandemic, it rebounded last year, according to SmartBear’s State of Quality Testing 2022 report.

Last year, 11% of companies performed only manual tests; this year that number dwindled to 7%, nearly returning to the pre-pandemic level of 5% of companies performing all of their tests manually.

When looking at the different types of tests and how they are performed, over half of respondents reported using manual testing for usability and user acceptance tests.

Unit tests, performance tests, and BDD framework tests were the most likely to be automated.

This year, the most time-consuming activity was performing manual and exploratory tests, cited by 26% of respondents, up from 18% last year. In the same period, the share citing learning how to use test tools as their most time-consuming testing challenge fell from 22% to just 8%.

In the Agile and DevOps realm, automation levels are higher than in companies still in the waterfall stages, according to Diego Lo Giudice, VP and principal analyst at Forrester. This is inherent to DevOps: if most of the testing is manual, it will simply slow down the rest of the team.

“With DevOps and all the automation going on around it, testing needs to be very high, it needs to be above 80%. You kind of see that only for a few companies or specific projects inside an organization, but if you look at the rest of the market, probably it’s less than 30%,” Lo Giudice said. “I would say we’ve made some progress, but there’s more automation that’s needed.”

In fact, some of the companies that are adopting agile or DevOps methods find that testing sometimes becomes the bottleneck to rapid delivery, according to Darrel Farris, manager of solutions engineering at mabl. Testing in DevOps must be integrated into the pipeline so developers aren’t throwing code over to QA that hasn’t been tested – especially if teams are deploying multiple times per week or month.

Some of the big challenges to implementing automated testing are a lack of skills and the fact that test automation requires change within the organization.

“So there are a number of changes regarding people, processes, and technology, it’s not just getting a tool. And automating tests, this is about organizing, testing completely in a different way,” Lo Giudice added. 

Challenges with getting automated testing just right 

“One of the challenges we see from people is that they’re fundamentally approaching this wrong. We’ve had some of our customers talk about this, how they had to change the way they were thinking. The kind of common, obvious symptom that you see about this today is people saying ‘we had a whole bunch of manual testers, so we’ll build a whole strategy on recording what they do and playing it back and building from there.’ And this is just fundamentally the wrong approach,” said Arthur Hicken, chief evangelist at Parasoft.

Another challenge is that automated tests can become incredibly time-consuming to maintain due to the sheer number of tests that are generated. 

“The largest issue is that once a person builds 300 tests, it becomes a full-time job to maintain those tests and you hit the ceiling,” Artem Golubev, CEO at testRigor said. “Coupled with the fact that budgets are limited, people just can’t build more automations.” 

Golubev added that this difficulty in maintaining all automated tests is the main reason the majority of tests are still executed manually today. Automating tests can also be futile if the effort is focused on the wrong areas.

“QA teams are spending 80% of their weeks maintaining scripts due to rapidly changing UIs, instead of focusing on growing functional test coverage or expanding the types of testing they are doing on their application, such as accessibility or performance testing,” mabl’s Farris said. 

“I believe the testing pyramid is built on false assumptions that have never been correct in the first place,” Golubev said. “In a perfect vacuum, of course this is how things work and there are maybe one or two companies which have done it that way. In a real scenario, it’s always been more of an hourglass shape of testing.” 

He explained that this is because engineers who mostly write unit tests are very unlikely to contribute to end-to-end tests, few engineers write integration tests since they are such a pain to maintain, and end-to-end tests end up so numerous that people work on them full time.

While the value of integration tests is to make sure the system integrates properly, that doesn’t matter if an end user enters the system and it doesn’t work, Golubev continued. End-to-end tests actually cover integration as well, because they are the tests that prove the system is usable by end users.

“Let’s say you’re logging into a banking application and they can’t transfer money from account A to account B, then it does not matter. Even if all your integration tests are green and all your unit tests pass through it, it’s completely useless,” Golubev said. “So the most important tests are end-to-end tests, only then can that system function as intended. And therefore end-to-end tests should be the bulk of the tests that are done.”

The best way to optimize end-to-end tests so they run faster, then, is to prioritize them, because end-to-end tests will inherently be much slower than unit tests.
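One simple form that prioritization can take is running only the most critical end-to-end journeys on every commit and saving the full suite for a nightly run. A minimal sketch, with entirely illustrative test names, priorities, and runtimes (no real framework is assumed):

```python
# Sketch of risk-based end-to-end test prioritization (illustrative data).
# Critical user journeys run per commit; the full suite runs nightly.

from dataclasses import dataclass

@dataclass
class E2ETest:
    name: str
    priority: int       # 1 = critical user journey, 3 = nice-to-have
    avg_seconds: float  # historical runtime

SUITE = [
    E2ETest("login_and_transfer", 1, 42.0),
    E2ETest("password_reset", 2, 30.5),
    E2ETest("profile_avatar_upload", 3, 55.0),
]

def select(suite, max_priority, budget_seconds):
    """Pick the most critical tests that still fit the time budget."""
    picked, spent = [], 0.0
    for t in sorted(suite, key=lambda t: (t.priority, t.avg_seconds)):
        if t.priority <= max_priority and spent + t.avg_seconds <= budget_seconds:
            picked.append(t.name)
            spent += t.avg_seconds
    return picked

# Per-commit run: only priority-1 journeys within a 60-second budget.
print(select(SUITE, max_priority=1, budget_seconds=60))  # ['login_and_transfer']
```

The same selector with a looser priority cutoff and larger budget would produce the nightly run.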

“With every type of testing in the organization, people need to assess whether they need to really leverage automation? Is it worth it? Is it something that will be repeated over and over that changes continuously? If you have to run a test, the same test more than three, four times you start asking yourself, well, maybe I should automate this,” Forrester’s Lo Giudice said. “So I don’t think 100% is what customers will achieve and will keep it more towards 80% as I said.”
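Lo Giudice’s three-or-four-runs rule of thumb amounts to a quick cost comparison: automate once the cumulative cost of repeating a test manually overtakes the one-time cost of automating it plus ongoing maintenance. A back-of-the-envelope sketch, with all numbers purely illustrative:

```python
# Illustrative automation break-even check; every figure here is made up
# for the example, not taken from any vendor or survey.

def worth_automating(manual_minutes, runs, automation_minutes, maint_minutes_per_run=1):
    """True once repeating the manual test costs more than automating it."""
    manual_cost = manual_minutes * runs
    automated_cost = automation_minutes + maint_minutes_per_run * runs
    return manual_cost > automated_cost

# A 15-minute manual test that takes 60 minutes to automate
# crosses the break-even point by the fifth run:
for runs in (3, 4, 5):
    print(runs, worth_automating(15, runs, 60))
```

The exact crossover depends on the maintenance cost per run, which is precisely the variable Golubev warns tends to be underestimated.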

One of the most efficient ways to make sure that all testing resources are aligned correctly is to align as a team on a testing strategy by starting with the most critical test cases that will ensure a high quality application experience for users, according to mabl’s Farris. This can be done by taking on a few test cases at first, then layering in additional test cases over time.

One way to do this is to create a quality center of excellence or a “quality champion” in an organization. This person or group is a testing expert who can advise and coach everyone from developers to product owners on testing best practices, Farris explained.

Some of the manual testing is changing too because of the increasing use of exploratory testing, Lo Giudice explained. In this type of manual testing, the tester sits down with the developer and they work out issues together: the tester puts the application through certain scenarios, the developer sees the problems and tries to fix them, and they spend about two hours a day working that way.

The structure around automated testing is shifting

Companies’ attitudes toward testing, and who gets involved in it, have both shifted. As testing becomes more federated, there is no longer a centralized team doing all the testing as an afterthought, according to Lo Giudice.

Now, testers are moving into the development teams and the product teams so that all of the testing gets done together. What remains in the central team are specialized testing resources that choose the tools and define what new practices should look like, whether that’s shifting testing to the left or suggesting test-driven or behavior-driven development.

The test center is now much smaller, working in a consulting role with the teams, while testers move into the teams themselves, Lo Giudice explained.

“So the typical manual tester that used to put a test case in an Excel sheet and run it through the application looking at what the test case told him to do suddenly now finds himself with a tool that is quite technical where he needs to write code to automate what he was doing manually,” Lo Giudice said. To solve this, there’s a trend among vendors to raise the level of abstraction of the tools so that a manual tester or even a person on the business side can test using a low code testing tool. 

Then come the technologies, platforms, and tools, because an organization needs testing tools that are integrated into CI/CD pipelines alongside the rest of its development and delivery tools, and that work effectively with CI servers in the cloud.

“The point really is that testing takes a village and it takes all these different personas in an organization: business tester, and a subject matter expert in testing who is technical but not a coder, and developers that also may be doing API testing, lower level infrastructure testing within their IDE at a very technical level,” Lo Giudice said. 

According to testRigor’s Golubev, the directors of QA will benefit the most from automated testing since they’ll be able to cover far more functionality faster than they ever could before. However, engineers, manual testers, and product management will also be able to benefit from automated testing tooling since they’ll be able to collaborate together on the same tool. 

Previously, it was companies in the banking and health sectors that were getting automated testing right, but now it’s organizations like Lenovo or Volkswagen that have these highly complex software test, build, and deploy systems that are the envy of anybody, Parasoft’s Hicken said. Ultimately, it’s something companies are going to do because that is what their competitors are moving toward.

AI helps with various levels of testing 

When you feed AI the data from all the test runs, such as log files and bugs, it can start telling you what you need to test, and how, when a change is coming. It can also help decide whether to run all of the tests or just select the few that will be impacted by the change.
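The selection half of that idea can be shown without any AI at all: given a mapping from source modules to the tests that exercise them, a change only triggers the impacted tests. In this sketch the mapping and all file/test names are invented; in practice it would be learned from coverage data or an AI model:

```python
# Minimal change-impact test selection. The coverage map is hand-written here;
# real tools derive it from coverage traces or learned models.

COVERAGE_MAP = {
    "billing.py": {"test_invoice", "test_refund", "test_checkout"},
    "auth.py":    {"test_login", "test_checkout"},
    "search.py":  {"test_search"},
}

def impacted_tests(changed_files):
    """Union of all tests that exercise any changed module."""
    selected = set()
    for path in changed_files:
        selected |= COVERAGE_MAP.get(path, set())
    return sorted(selected)

print(impacted_tests(["auth.py"]))  # ['test_checkout', 'test_login']
```

A change touching only `auth.py` skips the billing and search tests entirely, which is where the time savings come from.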

There have been impressive improvements in the computer vision space to enable visual testing, Lo Giudice said. One tool sees what the human eye does when looking at the application and notices things that are going wrong; it can even handle applications that change too fast for the human eye to capture.

One can also teach AI not to fail tests in certain scenarios, which helps with self-healing. For example, a test can fail simply because an object appears in a different position in a browser than on a mobile device, where the layout changes, and that’s not necessarily a bug. The algorithm can now be taught not to fail the test even though the object isn’t in the same position, because it can find that object’s locator somewhere else, Lo Giudice explained.
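Stripped of the AI, the core of self-healing is locator fallback: if the primary selector no longer matches, try alternative attributes before failing the test. A toy sketch, where a plain dict stands in for the DOM and all selectors are invented:

```python
# Toy self-healing locator: fall back to alternative attributes when the
# primary selector no longer matches, instead of failing the test outright.

PAGE = {  # element name -> attributes a test could match against
    "submit_button": {"id": "btn-send", "text": "Submit", "css": ".form .primary"},
}

def find(page, locators):
    """Try each (attribute, value) locator in order; heal by falling back."""
    for attr, value in locators:
        for name, attrs in page.items():
            if attrs.get(attr) == value:
                return name
    return None

# The id changed from "btn-submit" to "btn-send", but the text fallback matches:
locators = [("id", "btn-submit"), ("text", "Submit")]
print(find(PAGE, locators))  # submit_button
```

Real self-healing tools add a learning step, ranking which fallback attributes have proven stable for each element, but the fail-over structure is the same.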

There are also AI models that help minimize tests to solve the maintenance problem.

“This is the idea of the AI guiding a person to create tests that are more stable. The Holy Grail is that you create a set of tests that maximize coverage, but minimize the number of tests so that you have less to maintain, and that they’re not brittle,” Hicken said. “You want tests that have proper levels of abstraction, so that you aren’t spending more on keeping them alive than you were in creating them in the first place.”

With error clustering, AI can also help find and classify bugs so that a tester can quickly recognize a bug, and it can suggest the right developer to fix it, reducing mean time to repair. It can use data from production to find the most frequently used features within an application. There’s even a tool that generates unit tests as you code, which Forrester refers to as a tester TuringBot.
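A simple version of error clustering needs no machine learning: normalize the volatile parts of failure messages, such as timing numbers and memory addresses, so variants of the same underlying bug land in one bucket. A rough sketch with made-up failure messages:

```python
# Rough error-clustering sketch: strip volatile tokens from failure messages
# so that repeats of the same bug group together.

import re
from collections import defaultdict

def normalize(message):
    """Replace hex addresses and numbers with placeholders."""
    message = re.sub(r"0x[0-9a-f]+", "<ADDR>", message)
    return re.sub(r"\d+", "<N>", message)

def cluster(failures):
    groups = defaultdict(list)
    for msg in failures:
        groups[normalize(msg)].append(msg)
    return dict(groups)

failures = [
    "TimeoutError: waited 3000 ms for #cart",
    "TimeoutError: waited 5000 ms for #cart",
    "NullPointerException at 0x7f3a",
]
for signature, members in cluster(failures).items():
    print(len(members), signature)
```

The two timeout failures collapse into one signature, so a tester triages one cluster instead of reading each failure individually; AI-based tools extend this with fuzzier matching and ownership suggestions.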

“AI can also support the execution of more stable tests. For example, tests running in the cloud can execute almost too fast, before your application is in a loaded state,” mabl’s Farris said. “It applies intelligence that can slow down or speed up the execution of your tests by automatically adjusting wait times.”
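The non-AI core of adjustable waits is polling for a ready condition with backoff rather than sleeping for a fixed interval, so fast runs don’t race ahead of the application and slow runs don’t time out prematurely. A minimal sketch (mabl’s actual mechanism is proprietary; this only illustrates the idea):

```python
# Adaptive wait sketch: poll a readiness condition with exponential backoff
# instead of a fixed sleep. Fast environments proceed almost immediately;
# slow ones keep waiting up to the timeout.

import time

def wait_until(condition, timeout=5.0, initial_delay=0.05):
    """Poll `condition` until it returns True, backing off between attempts."""
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(delay)
        delay = min(delay * 2, 1.0)  # exponential backoff, capped at 1 second
    return condition()  # one final check at the deadline

# Simulated app that becomes "loaded" after a short delay:
ready_at = time.monotonic() + 0.2
print(wait_until(lambda: time.monotonic() >= ready_at))  # True
```

An AI-driven version would additionally tune the timeout and delay from historical load times per page, which is the “intelligence” Farris describes.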

“So AI is infusing along the entire software development lifecycle. And testing is one of the stages where it’s actually more mature than any other stage of the development lifecycle,” Forrester’s Lo Giudice said. 
