As the software development industry has seen unprecedented levels of digital transformation, the demand for automated testing in the CI/CD pipeline has taken on greater urgency, especially at the early stages.
Meanwhile, new advancements in AI are helping developers with some of the biggest challenges in testing: test creation, test maintenance and many of the manual tasks. Many companies have taken notice and are spending more on their automated testing initiatives.
Strong testing practices have become so important that they are now “the main differentiator between companies that are successful and those that aren’t,” according to Guy Arieli, the QA CTO at Digital.ai.
When companies look for an automated testing solution, they’re primarily looking for one that will increase the quality of their releases, speed up delivery and do so cost-efficiently. A common approach among enterprise customers is to seek out a vendor that satisfies the majority of their needs while integrating into their CI/CD pipelines.
“We see customers want a unified solution. You don’t want to be using disparate tools for end-to-end testing of different types of clients,” said Dan Belcher, the co-founder of mabl. “Increasingly they’re pushing us to add value to those end-to-end tests with insight around things like performance and visual correctness and other kinds of attributes of quality, because they’re trying to move from pure quality assurance like ‘did I break this core feature?’ to quality engineering: ‘Is the feature better than it was before? Is it faster? Is it more accessible? Is it visually appealing?’ ”
Chris Haggan, the product management lead at HCL OneTest, said it’s about more than just getting the solution with the most features. It’s also about supporting users with the tools they already have and seeing whether it’s the right fit for the overall approach the development organization is taking. Another issue is whether the organization has enough resources to deal with adding testing solutions to the mix, since they can add complexity.
Where to start?
To start with their automation initiatives, organizations need to build quality into the application earlier; quality has become a core functional necessity, and testing early on is a key part of that.
“We see people all the time that want to fully automate everything in weeks. Yes, of course that is technically possible, but it takes time to evaluate what’s important to test, how solutions fit into your CI/CD chain, who generates test data and so on,” according to Kevin Surace, co-founder and CTO of Appvance. “While no one wants to hear it, the best automation strategy is one that is laid out over a year,” Surace added.
Building in quality comes down to both the culture of the organization and executing deep code analysis, as well as deep reliability and security checks, at the earliest stages. “If you put it at the end, you really can’t kind of accelerate your delivery. You’re always kind of running into a bottleneck at the end of the process,” said Mark Lambert, the vice president of Strategic Initiatives at Parasoft.
This has led to continuous quality and continuous compliance becoming aspects that need to be tested in the CI/CD pipeline.
Mabl’s Belcher said that as more expansive automated testing becomes available, his one concern is test sprawl: when it’s so easy to create end-to-end tests and get the coverage you want in place, teams may become complacent about testing.
“Just because it’s easy doesn’t mean it’s right,” Belcher said. “They have to put more thought into, you know, are these tests accomplishing the objectives that I set out? Are we doing only what is necessary? Are we thinking about the data? Do we have the right environments? And there’s a lot, a lot more than just the capability to add lots of requests. We keep score by the quality of what’s in production.”
Organizations also need to prioritize those tests that need to be automated first to avoid getting overwhelmed.
“What I want to achieve is not more and more tests. What I actually want is as few tests as I possibly can because that will minimize the maintenance effort, and still get the kind of risk coverage that I’m looking for,” said Gartner senior director Joachim Herschmann, who is on the App Design and Development team.
In the past, an organization would say “we’ll recruit the developers and we’ll do the R&D, and then when we need to test it, we’ll send it to India and it will be tested there.” Now organizations realize that testing has to be at the core of the R&D organization, Digital.ai’s Arieli explained.
Now that developers are getting more and more involved in quality, the notion of building quality into the application has started to take hold.
“So developers have to think about how do I engender unit testing and more and more of it and when you reach the total extreme of it and you’re totally mature, they start thinking of automation,” said Anand Sundaram, the SVP of Products, UI, Device Cloud and Performance Testing at SmartBear Software.
Security and performance testing
Beyond quality, there are other aspects of the application for which automated testing can be leveraged: security and performance testing of your APIs and microservices at the developer level, before everything comes in for integration testing or the entire application comes together.
“We’ve accepted that test automation is valuable, deep code analysis is valuable and now we’re actually starting to say the same thing around security; how can we embed security in each stage so that we can build security into the pipeline,” Parasoft’s Lambert said.
Now, testers are trying to apply the same methods that they used for testing quality to security.
So that means deep code analysis to identify potential runtime exceptions that could go uncaught. As they’re moving up the stack, they’re looking to leverage unit testing for fuzzing of the underlying code and seeing how they can utilize API tests for API security testing. Developers can start to build quality and security by taking advantage of those earliest-stage validation techniques, Lambert explained.
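As a rough sketch of the fuzz-style unit testing Lambert describes, the idea is to hammer a small unit of code with random input and assert that only the expected, controlled failures occur. The `parse_age` function below is a hypothetical example, not from any vendor’s tooling:

```python
import random
import string

def parse_age(raw: str) -> int:
    """Hypothetical input parser we want to harden against bad input."""
    value = int(raw.strip())
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value

def fuzz_parse_age(iterations: int = 1000) -> None:
    """Feed random printable strings to the parser.

    A ValueError (a clean rejection) is acceptable; any other exception
    escapes this loop and fails the test, surfacing an uncaught
    runtime exception at the unit level.
    """
    random.seed(0)  # reproducible fuzzing run
    alphabet = string.printable
    for _ in range(iterations):
        raw = "".join(random.choices(alphabet, k=random.randint(0, 12)))
        try:
            parse_age(raw)
        except ValueError:
            pass  # expected rejection of malformed input

fuzz_parse_age()
print("fuzzing passed")
```

In practice a property-based testing library would generate smarter inputs and shrink failing cases, but even this naive loop shows how security-minded checks can ride on ordinary unit tests.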
While automated testing has received widespread recognition as a must for today’s software development environments, there are many challenges that organizations face when trying to set up effective testing strategies in their CI/CD pipelines.
At the bottom of the testing pyramid, the struggle with unit testing is that there isn’t much visibility into it and it’s difficult to understand how much it actually covers. Above that is service and component testing, usually driven by an API. At the top of the pyramid is system and UI testing, which can be the most challenging.
Implementing all of these levels of testing can be a challenge, especially for legacy systems, since these aspects of testing were not initially accounted for when the applications were created, Digital.ai’s Arieli added.
Another challenge in implementing automated testing is finding staff with the appropriate skill set. Testing complex enterprise applications requires business domain expertise. Also, maintaining test scripts makes it difficult to achieve continuous test automation, since automation requires teams to ensure that testing doesn’t become a bottleneck. Therefore, tests must be designed in a way that minimizes disruption to the continuous testing process. The goal is for test automation teams to build robust and reusable test scripts that don’t require constant attention and maintenance, according to Clinton Sprauve, the director of Product Marketing at Tricentis.
Organizations also need to find a way to manage and track test automation efforts across multiple tools through observability and analytics.
“There is a challenge to testing in the sense that we need to do it more frequently, we need to do it for more complex applications, and we need to do it at a higher scale. This is not feasible without automation, so test automation is a must,” Gartner’s Herschmann said.
AI and observability in automated testing
With value being a core tenet of DevOps, managers have to be able to see how each decision impacts the user experience, revenue and business performance as a whole. This is why testing providers are looking to create more intelligent means of testing that can provide analytics.
Intelligent testing can be a combination of data analytics, smart heuristics and algorithms, machine learning and anything else that analyzes data in real time and makes decisions or recommendations that help solve the problem. Developers then get instant feedback on where exactly the problem occurred and can move much more quickly.
Observability is needed in the pipeline because it gives testers a clue as to where exactly the problem is and when it occurred, and then alerts them. In addition to observability, automated testing solutions have also created ways to help developers with many of the pain points around testing and to speed up the process.
“At the beginning of Agile, when you start talking about quarterly releases, you could still kind of fake it, right? You could still handle quality. You would have minimal amount of time to do all of your regression testing and so forth, but you could build that into a schedule and make it work. When you move to CI/CD where change is continuous and disruptive you need to find new solutions,” mabl’s Belcher said. “And so for a few years, as an industry, we turned to, well, let’s just make us another thing that the developers have to worry about and have them write tests that do end-to-end validation. But the problem with that is that those tests relied on stability of the very thing that was changing constantly.”
“Now we realize well, maybe actually you don’t need these scripts and you can use the power of cloud computing and data analysis and machine learning and AI to make it so that it’s really simple to create the tests and then rely on the system to adapt to the change automatically rather than people needing to go in and update scripts every time you make a small change,” Belcher added.
The infusion of AI into these automated testing solutions has helped around aspects such as checking on quality, test maintenance and figuring out how to create the tests.
When you go from version one to version two, AI can help by having the system update itself and carry on, without developers having to go in and fix a load of things.
Machine learning also becomes particularly important in performance testing and performance test result analysis: extracting information from huge amounts of data to help users understand where there’s a performance problem and how to correlate it with metrics from observability tools, for example, HCL’s Haggan explained.
And the infusion of AI won’t mean that QA and dev teams get replaced; rather, their work will be augmented to run in tandem with more advanced tooling. AI can also relieve them of the majority of script writing and maintenance, since a machine can create thousands of tests in minutes.
“But the impact is profound. I’d say in virtually every case over years now, AI tests found critical bugs that the standard manual or automated tests would have never found,” Surace said.
Another big trend in the automated testing space is around low code and codeless capabilities so that domain experts can build their desktop automation and know what goals they are trying to achieve with them.
Automation solutions used to be very developer-centric, but vendors are now seeking to democratize those capabilities to others in an organization, and to companies that don’t have the personnel or resources for the large-scale shift-left methodologies invented in organizations like Google, Facebook and Amazon, where resources are nearly unlimited, according to Digital.ai’s Arieli.
Next: API and mobile testing
Parasoft’s Lambert said there is increased interest in testing in the API layer for a few reasons.
One is that API tests are quicker to run, and setting them up as continuous tests at the API level rather than the UI level is easier, with less maintenance associated with it. API tests can also be run more efficiently because you don’t need a full set of browsers and you can execute in parallel.
Another reason is that they’re easier to debug and diagnose because they’re closer to the code.
Also, it’s easier for developers to re-execute those tests within their environment and it becomes a great communication mechanism between the test role and the developer role.
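To make the contrast concrete, here is a minimal sketch of what an API-level test looks like, assuming a toy `/health` endpoint; the handler below stands in for a real service so the example is self-contained, with no browser or UI driver involved:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Toy endpoint standing in for the real service under test."""
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep test output clean

def test_health_endpoint() -> bool:
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0 picks a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{server.server_port}/health"
        with urllib.request.urlopen(url, timeout=5) as resp:
            status = resp.status
            payload = json.load(resp)
        # Assertions run directly on the wire-level response
        return status == 200 and payload["status"] == "ok"
    finally:
        server.shutdown()

print(test_health_endpoint())
```

Because the test talks HTTP directly, dozens of such checks can run in parallel against the same service, and a failure points straight at a status code or payload field rather than at a flaky UI selector.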
This new adoption of end-to-end testing in the API space applies both to companies that offer APIs as products and to companies that have integrated API-based services into their applications and need to test the functionality of those APIs.
Now, teams are getting quality engineering involved in work around API testing and validation for the first time, whereas historically, that’s been strictly left to the developers, mabl’s Belcher said.
There is also a lot of opportunity in API testing because, for example, server changes can be rapidly tested there, as can microservices, Appvance’s Surace added. Highly data-driven API tests will give teams tremendous information about a new server build in a few minutes.
However, there are challenges that come up with API testing including the biggest challenge of them all: creating a test scenario that’s realistic.
“So developers will deliver you a bunch of APIs and an OpenAPI doc. That’s great. I know what each of the APIs are, but I don’t know how they are used and I have to now figure out how to chain them together. I need to figure out the payloads. I need to figure out what the data value is,” Lambert said. “With AI, we analyze how the tests are being operated, how the UI has changed, and then we can dynamically heal the tests at runtime, as well as optimize execution, and provide feedback to the development team quicker.”
As organizations move more towards an API-centric development model and microservices balloon the complexity of the ecosystem, service virtualization can help to map out the test environment and help with plugging in internal or external dependencies, which are otherwise constraints within a test environment.
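In its simplest form, virtualizing a dependency means replacing it with a stub that returns canned responses. The sketch below uses Python’s standard `unittest.mock` library; `quote_total` and the pricing service are hypothetical examples, not a full service-virtualization product:

```python
from unittest.mock import Mock

def quote_total(pricing_service, sku: str, qty: int) -> float:
    """Code under test: depends on an external pricing service."""
    unit_price = pricing_service.get_price(sku)
    return round(unit_price * qty, 2)

# Virtualize the dependency: a Mock stands in for the real, possibly
# unavailable, pricing backend, answering with a canned price.
virtual_pricing = Mock()
virtual_pricing.get_price.return_value = 9.99

assert quote_total(virtual_pricing, "ABC-123", 3) == 29.97
# Verify the interaction with the dependency, not just the result.
virtual_pricing.get_price.assert_called_once_with("ABC-123")
print("virtualized test passed")
```

Full service-virtualization tools extend this idea across the network boundary, emulating whole APIs with recorded or modeled behavior, but the testing benefit is the same: the dependency is always available and always predictable.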
Vendors have also recognized the increased demand for mobile, which spans not just phones but also smart TVs, tablets and the growing embedded devices industry.
“People get very focused on user interfaces and performance testing and API testing, but actually there’s a whole other piece of this, which is IoT and how does that fit into the whole story as well and actually be able to test that code running on the device itself, which is what a lot of these customers have to have,” said Viktor Krantz, a senior product manager at HCL Software.
Highly regulated industries that are increasingly using embedded devices such as the medical, avionics, rail and automotive industries have special requirements that emphasize the importance of testing compliance.
The avionics industry for example requires that companies develop and test a device that will then last for 40 years. “If there’s any problem with that device 39 years later, it has to be done in the exact same version of the tool that you created 39 years ago, and test it with a tool from 39 years ago. And that’s literally a work lifetime,” Krantz said. “So it’s a crazy industry.”