Automated testing initiatives still lag in many organizations, as increasingly complex testing environments collide with a shortage of skilled personnel to set up tests. 

Recent research conducted by Forrester and commissioned by Keysight found that while only 11% of respondents had fully automated testing, 84% said that the majority of their testing involves complex environments. 

For the study, Forrester conducted an online survey in December 2021 that involved 406 test operations decision-makers at organizations in North America, EMEA, and APAC to evaluate current testing capabilities for electronic design and development and to hear their thoughts on investing in automation.

The complexity of testing has increased the number of tests, according to 75% of the respondents. Sixty-seven percent of respondents said the time to complete tests has risen too.

Challenges with automated testing 

Those that do utilize automated testing often have difficulty making the tests stable in these complex environments, according to Paulina Gatkowska, head of quality assurance at STX Next, a Python software house. 

One area where developers often run into challenges is UI testing, in which the tests work like a user: they use the browser, click through the application, fill in fields, and more. These tests are quite heavy, Gatkowska continued, and when a developer finishes a test in a local environment, it sometimes fails in another environment, only passes 50% of the time, or works for the first week and then starts to be flaky. 

“What’s the point of writing and running the tests, if sometimes they fail even though there is no bug? To avoid this problem, it’s important to have a good architecture of the tests and good quality of the code. The tests should be independent, so they don’t interfere with each other, and you should have methods for repetitive code to change it only in one place when something changes in the application,” Gatkowska said. “You should also attach great importance to ‘waits’ – the conditions that must be met before the test proceeds. Having this in mind, you’ll be able to avoid the horror of maintaining flaky tests.”
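
Gatkowska's advice maps to a few concrete habits. Below is a minimal sketch, assuming Selenium with Python and pytest (the URL, element IDs, and helper name are hypothetical, not from STX Next): each test gets its own browser so tests stay independent, repetitive steps live in one helper, and an explicit wait replaces fixed sleeps.

```python
# Illustrative sketch only (assumed stack: Selenium + pytest; the URL and
# element IDs are hypothetical). It shows an independent test, a shared helper
# for repetitive steps, and an explicit wait instead of a fixed sleep.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


@pytest.fixture
def driver():
    # Each test gets its own browser session, so tests don't interfere.
    drv = webdriver.Chrome()
    yield drv
    drv.quit()


def submit_login(driver, username, password):
    # Repetitive steps live in one place; a UI change is then fixed once.
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "submit").click()


def test_login_shows_dashboard(driver):
    driver.get("https://example.com/login")  # hypothetical application URL
    submit_login(driver, "demo-user", "demo-pass")
    # Explicit wait: proceed only once the dashboard header is visible,
    # rather than sleeping and hoping the page has loaded.
    header = WebDriverWait(driver, timeout=10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "h1.dashboard"))
    )
    assert "Dashboard" in header.text
```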

Then there are network issues that can impede automated tests, according to Kavin Patel, founder and CEO of Convrrt, a landing page builder. A common difficulty for QA teams is network disconnection: shaky connections make it hard to reach databases, VPNs, third-party services, APIs, and certain testing environments, adding needless time to the testing process. The inability to access virtual environments, which testers typically use to test programs, is also a worry. 

Because some teams lack the expertise to implement automated testing, manual testing is still used to fill the automation gaps. This creates a disconnect with the R&D team, which is usually two steps ahead, according to Kenny Kline, president of Barbend, an online platform for strength sports training and nutrition.

“To keep up with them, testers must finish their cycles within four to six hours, but manual testing cannot keep up with the rate of development. Then, it is moved to the conclusion of the cycle,” Kline said. “Consequently, teams must include a manual regression, sometimes known as a stabilization phase, at the end of each sprint. They extend the release cadence rather than lowering it.”

Companies are shifting towards full test automation 

Forrester’s research also found that 45% of companies say that they’re willing to move to a fully automated testing environment within the next three years to increase productivity, gain the ability to simulate product function and performance, and shorten the time to market. 

The companies that have implemented automated testing well have reaped many rewards, according to Michael Urbanovich, head of the testing department at a1qa, an international quality assurance company. The ones relying on robotic process automation (RPA), AI, ML, natural language processing (NLP), and computer vision for automated testing have attained greater efficiency, sped up time to market, and freed up more resources to focus on strategic business initiatives. RPA alone can lower the time required for repetitive tasks by up to 25%, according to research by Automation Alley. 

For those looking to gain even more from their automation initiatives, a1qa’s Urbanovich suggests looking into continuous test execution, implementing self-healing capabilities, RPA, API automation, regression testing, and UAT automation. 

Urbanovich emphasized that the decision to introduce automated QA workflows must be conscious. Rather than running with the crowd to follow the hype, organizations must calculate ROI based on their individual business needs and wisely choose the scope for automation and a fit-for-purpose strategy. 

“To meet quality gates, companies need to decide which automated tests to run and how to run them in the first place, especially considering that the majority of Agile-driven sprints last for up to only several weeks,” Urbanovich said. 

Although some may wish it were this easy, testers can’t just spawn automated tests and sit back like Paley’s watchmaker gods. The tests need to be guided and nurtured. 

“The number one challenge with automated testing is making sure you have a test for all possibilities. Covering all possibilities is an ongoing process, but executives especially hear that you have automated testing now and forget that it only covers what you actually are testing and not all possibilities,” said David Garthe, founder of Gravyware, a social media management tool. “As your application is a living thing, so are the tests that are for it. You need to factor in maintenance costs and expectations within your budget.” 

Also, just because a test worked last sprint doesn’t mean it will work as expected this sprint, Garthe added. As applications change, testers have to make sure that the automated tests cover the new process correctly as well. 

Garthe said that he has had a great experience using Selenium, referring to it as the “gold standard” of automated testing. It has the largest community of developers who can step in and work on a new project. 

“We’ve used other applications for testing, and they work fine for a small application, but if there’s a learning curve, they all fall short somewhere,” Garthe said. “Selenium will allow your team to jump right in and there are so many examples already written that you can shortcut the test creation time.”

And there are many other choices to weave through to start the automated testing process.

“When you think about test automation, first of all you have to choose the framework. What language should it be? Do you want to have frontend or backend tests, or both? Do you want to use gherkin in your tests?” STX Next’s Gatkowska said. “Then of course you need to have your favorite code editor, and it would be annoying to run the tests only on your local machine, so it’s important to configure jobs in the CI/CD tool. In the end, it’s good to see valuable output in a reporting tool.”
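
As one illustration of the frontend-versus-backend choice Gatkowska raises, a backend check can often skip the browser entirely. The sketch below assumes pytest and the requests library; the endpoint and response fields are hypothetical, not taken from the article.

```python
# Illustrative backend (API) test sketch, assuming pytest and requests.
# The base URL and response fields are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


def test_health_endpoint_reports_ok():
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"


def test_unknown_resource_returns_404():
    response = requests.get(f"{BASE_URL}/widgets/does-not-exist", timeout=5)
    assert response.status_code == 404
```

The same suite, frontend or backend, can then be wired into a CI/CD job so it runs on every commit instead of only on a developer’s machine, with results published to a reporting tool.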

Choosing the right tool and automated testing framework, though, might pose a challenge for some because different tools excel under different conditions, according to Robert Warner, head of marketing at VirtualValley, a UK-based virtual assistant company.

“Testing product vendors overstate their goods’ abilities. Many vendors believe they have a secret sauce for automation, but this produces misunderstandings and confusion. Many of us don’t conduct enough study before buying commercial tools, that’s why we buy them without proper evaluation,” Warner said. “Choosing a test tool is like marrying, in my opinion. Incompatible marriages tend to fail. Without a good test tool, test automation will fail.”

AI is augmenting the automated testing experience

Fifty-two percent of companies that responded to the Forrester survey said they would consider using AI for integrating complex test suites in the next three years.

The use of AI for integrated testing provides both better (not necessarily more) testing coverage and the ability to support agile product development and release, according to the Forrester report.

Companies are also looking to add AI for integrating complex test suites, an area of test automation that is severely lacking, with only 16% of companies using it today. 

a1qa’s Urbanovich explained that one of the best ways to cope with increased software complexity and tight deadlines is to apply a risk-based approach. For that, AI is indispensable. Apart from removing redundant test cases, generating self-healing scripts, and predicting defects, it streamlines priority-setting. 

“In comparison with the previous year, the number of IT leaders leveraging AI for test prioritization has risen to 43%. Why so?” Urbanovich continued, alluding to the World Quality Report 2021-2022. “When you prioritize automated tests, you put customer needs FIRST because you care about the features that end users apply the most. Another vivid gain is that software teams can organize a more structured and thoughtful QA strategy. Identifying risks makes it easier to define the scope and execution sequence.”

Most of the time, companies are looking to implement AI in testing to leverage the speed improvements and increased scope of testing, according to Kevin Surace, CTO at Appvance, an AI-driven software testing provider.

“You can’t write a script in 10 minutes, maybe one if you’re a Selenium master. Okay, the machine can write 5,000 in 10 minutes. And yes, they’re valid. And yes, they cover your use cases that you care about. And yes, they have 1,000s of validations, whatever you want to do. And all you did was spend one time teaching it your application, no different than walking into a room of 100 manual testers that you just hired, and you’re teaching them the application: do this, don’t do this, this is the outcome, these are the outcomes we want,” Surace said. “That’s what I’ve done, I got 100 little robots or however many we need that need to be taught what to do and what not to do, but mostly what not to do.”

QA has difficulty grasping how to handle AI in testing 

Appvance’s Surace said that, ultimately, testing needs to become completely hands-off from humans.

“If you just step back and say what’s going on in this industry, I need a 4,000 times productivity improvement in order to find essentially all the bugs that the CEO wants me to find, which is find all the bugs before users do,” Surace said. “Well, if you’ve got to increase productivity 4,000 times you cannot have people involved in the creation of very many use cases, or certainly not the maintenance of them. That has to come off the table just like you can’t put people in a spaceship and tell them to drive it, there’s too much that has to be done to control it.”  

Humans are still good at prioritizing which bugs to tackle based on what the business goals are, because only humans can really look at something and say, well, we’ll just leave it, it’s okay, we’re not going to deal with it, or say this is really critical and push it to the developers’ side to fix it before release, Surace continued. 

“A number of people are all excited about using AI and machine learning to prioritize which tests you should run, and that entire concept is wrong. The entire concept should be, I don’t care what you change in the application, and I don’t understand your source code enough to know the impacts on every particular outcome. Instead, I should be able to create 10,000 scripts and run them in the next hour, and give you the results across the entire application,” Surace said. “Job one, two, and three of QA is to make sure that you found the bugs before your users do. That’s it, then you can decide what to do with them. Every time a user finds a bug, I can guarantee you it’s in something you didn’t test or you chose to let the bug out. So when you think about it that way, users find bugs in the things we didn’t test. So what do we need to do? We need to test a lot more, not less.”

A challenge with AI is that it is a foreign concept to QA people, so teaching them how to train AI is a whole different field, according to Surace. 

First off, many people on the QA team are scared of AI, Surace continued, because they see themselves as QA people but really have the skillset of a Selenium tester who writes Selenium scripts and tests them. Now, that has been taken away, similar to how RPA disrupted many industries such as customer support and insurance claims processing. 

The second challenge is that they’re not trained in it.

“So one problem that we see is: how do you explain how the algorithms work?” Surace said. “In AI, one of the challenges we have in QA and across the AI industry is how do we make people comfortable that here’s a machine that they may not ever be able to understand. It’s beyond their skillset to actually understand the algorithms at work here and why they work and how neural networks work, so they now have to trust that the machine will get them from point A to point B, just like we trust the car gets from point A to point B.”

However, there are some areas of testing in which AI is not as applicable, for example, a form-based application where there is nothing for the application to do other than guide you through the form, such as in a financial services application. 

“There’s nothing else to do with an AI that can add much value because one script that’s data-driven already handles the one use case that you care about. There are no more use cases. So AI is used to augment your use cases, but if you only have one, you should write it. But, that’s few and far between and most applications have hundreds of 1,000s of use cases perhaps or 1,000s of possible combinatorial use cases,” Surace said. 

According to Eli Lopian, CEO at Typemock, a provider of unit testing tools to developers worldwide, QA teams are still very effective at handling UI testing because the UI can often change without the behavior changing behind the scenes. 

“The QA teams are really good at doing that because they have a feel for the UI, how easy it is for the end user to use that code, and they can see things more from a product point of view and less from a does-it-work-or-not point of view, which is now really essential if you want an application to really succeed,” Lopian said. 

Dan Belcher, co-founder of mabl, said that there is still plenty of room for a human in the loop when it comes to AI-driven testing. 

“So far, what we’re doing is supercharging quality engineers, so the human is certainly in the loop. It’s eliminating repetitive tasks where their intellect isn’t adding as much value and doing things that require high speed, because when you’re deploying every few minutes, you can’t really rely on a human to be involved in that loop of executing tests. And so what we’re empowering them to do is to focus on higher-level concerns, like do I have the right test coverage? Are the things that we’re seeing good or bad for the users?” Belcher said.

AI/ML excels at writing tests from unit to end-to-end scale

One area where AI/ML excels in testing is unit testing on legacy code, according to Typemock’s Lopian.

“Software groups often have this legacy code which could be a piece of code that maybe they didn’t do a unit test beforehand, or there was some kind of crisis, and they had to do it quickly, and they didn’t do the test. So you had this little piece of code that doesn’t have any unit tests. And that grows,” Lopian said. “Even though it’s a difficult piece of code that wasn’t built with testability in mind, we have the technology to both write those tests for those kinds of code and to generate them in an automatic manner using ML.”

The AI/ML can then make sure that the code runs in a clean and modernized way, and with those tests in place, the code can be refactored to work in a secure manner, Lopian added. 
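
Typemock’s tooling generates such tests automatically; as a hand-written stand-in for the idea, the sketch below (illustrative only, with a made-up legacy function) pins down what untested legacy code currently does, so later refactoring can be checked against the recorded behavior.

```python
# Illustrative characterization-test sketch (not Typemock's product): record
# what an untested legacy routine returns today, so refactoring it later can
# be verified against that recorded behavior.
import unittest


def legacy_discount(price, customer_type):
    # Made-up stand-in for an old, untested routine nobody wants to touch.
    if customer_type == "vip":
        return round(price * 0.8, 2)
    return price if price < 100 else round(price * 0.95, 2)


class CharacterizationTests(unittest.TestCase):
    def test_current_behavior_is_preserved(self):
        # Inputs and outputs recorded from the code as it behaves right now.
        recorded = [
            ({"price": 50, "customer_type": "regular"}, 50),
            ({"price": 200, "customer_type": "regular"}, 190.0),
            ({"price": 200, "customer_type": "vip"}, 160.0),
        ]
        for kwargs, expected in recorded:
            with self.subTest(**kwargs):
                self.assertEqual(legacy_discount(**kwargs), expected)


if __name__ == "__main__":
    unittest.main()
```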

AI-driven testing is also beneficial for UI testing because testers don’t have to explicitly design how elements in the UI are referenced; they can let the AI figure that out, according to mabl’s Belcher. And when the UI changes, typical test automation produces a lot of failures, whereas the AI can learn and improve the tests automatically, resulting in an 85-90% reduction in the time engineers spend creating and maintaining tests. 

In the UI testing space, AI can be used for auto-healing, intelligent timing, automatically detecting visual changes in the UI, and detecting performance anomalies. 

According to Belcher, AI can be the vital component in creating a more holistic approach to end-to-end testing. 

“We’ve all known that the answer to improving quality was to bring together the insights that you get when you think about all facets of quality, whether that’s functional or performance, or accessibility, or UX, and to think about that holistically, whether it’s API or web or mobile. And so the area that will see the most innovation is when you can start to answer questions like, based on my UI tests, what API tests should I have? And how do they relate? So when the UI test fails, was it an API issue? And then, when a functional test fails, did anything change from the user experience that could be related to that?” Belcher said. “And so the key to doing this is we have to bring all of the end-to-end testing together and all the data that’s produced, and then you can really layer in some incredibly innovative intelligence, once you have all of that data, and you can correlate it and make predictions based on that.”

6 types of automated testing frameworks 
  1. Linear Automation Framework – Also known as a record-and-playback framework, in which testers don’t need to write code to create functions and the steps are written in sequential order. Testers record steps such as navigation, user input, or checkpoints, and the script is then played back automatically to conduct the test.
  2. Modular Based Testing Framework – one in which testers divide the application being tested into separate units, functions, or sections, each of which can then be tested in isolation. Test scripts are created for each part and then combined to build larger tests. 
  3. Library Architecture Testing Framework – in this testing framework, similar tasks within the scripts are identified and later grouped by function, so the application is ultimately broken down by common objectives. 
  4. Data-Driven Framework – test data is separated from script logic and testers can store data externally. The test scripts are connected to the external data source and told to read and populate the necessary data when needed. (A minimal sketch of this approach appears below.)
  5. Keyword-Driven Framework – each function of the application is laid out in a table with instructions in consecutive order for each test that needs to be run. 
  6. Hybrid Testing Framework – a combination of any of the previously mentioned frameworks, set up to leverage the advantages of some and mitigate the weaknesses of others.

Source: https://smartbear.com/learn/automated-testing/test-automation-frameworks/
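
As a companion to item 4 above, here is a minimal data-driven sketch, assuming pytest; the CSV file name, its columns, and the attempt_login stand-in are all hypothetical.

```python
# Illustrative data-driven sketch: the test data lives outside the test logic
# (in a hypothetical login_cases.csv) and the same test body runs once per row.
import csv
from pathlib import Path

import pytest

DATA_FILE = Path(__file__).parent / "login_cases.csv"  # username,password,expected


def attempt_login(username, password):
    # Stand-in for the system under test; a real suite would drive the app here.
    return "success" if (username, password) == ("admin", "s3cret") else "failure"


def load_cases():
    if not DATA_FILE.exists():
        # Fallback rows so the sketch still runs without the external file.
        return [("admin", "s3cret", "success"), ("admin", "wrong", "failure")]
    with DATA_FILE.open(newline="") as handle:
        return [(row["username"], row["password"], row["expected"])
                for row in csv.DictReader(handle)]


@pytest.mark.parametrize("username,password,expected", load_cases())
def test_login(username, password, expected):
    # Only the externally supplied data changes; the assertion stays the same.
    assert attempt_login(username, password) == expected
```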