Testing in DevOps is as much about the people behind the tools as it is about the tools themselves. When the two work in synchrony, organizations can see major benefits in the quality of their applications and their software development life cycle processes. 

A recent report called The Role of Testing in a DevOps Environment found that more than 50% of survey respondents said the greatest value of testing is that it enables teams to release updates and applications faster with confidence. In addition, nearly 60% of respondents experienced a reduction in issues once applications were in production.  

The survey was conducted in July and August of 2021 by Techstrong Research, and more than 550 individuals familiar with application testing and DevOps completed it. 

However, some organizations still struggle with how to advance their DevOps testing initiatives because they are also implementing containerization, microservices, and other cloud-native methods that can sometimes complicate the environment. 

According to the survey, a combined 82% of respondents experienced either frequent or some slowdown in testing new software releases. Most organizations defer releases until testing is done because quality trumps speed, according to the report. 

In some organizations, those responsible for testing need to keep up with changes forced onto them by other teams, third-party applications, and platforms, while also keeping up with a growing list of regulatory compliance requirements.

Since most applications now run in the cloud, businesses must also react quickly when cloud-based platforms receive updates. “For example, if APIs and other pre-built connectors are no longer working with a cloud-based office productivity suite, employees don’t want to hear ‘It’s not our fault, our cloud vendor had a major update.’ It’s up to internal IT teams to make sure applications work as expected,” the report stated. 

The demand for both speed and quality has prompted organizations to look for ways to automate many facets of testing and to change how they define value. 

“DevOps requires that testing is fast, accurate, meaning low false positive and low false negative rates, and runs without human intervention. Fast can be achieved with more compute power but for the tests to be accurate they need to handle the dynamic and evolving nature of modern applications,” said Gil Sever, co-founder and CEO of Applitools.

Traditional test automation requires frequent human intervention to update tests through assertions and navigation, but AI can learn how the application behaves and respond appropriately, reducing the need for that intervention. “This makes AI essential for modern software development teams to keep pace with increased release frequency,” Sever added.
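
The fallback idea behind that kind of self-healing can be sketched in a few lines. The example below is a minimal, hypothetical sketch using Selenium's Python bindings; the page URL and candidate locators are made-up placeholders, and a hand-written list stands in for what commercial AI tools would learn from the application's behavior across runs.

```python
# Minimal sketch of the fallback idea behind "self-healing" locators.
# Assumes Selenium's Python bindings; URL and locators are hypothetical.
# Real AI-based tools learn these alternatives from past runs rather than
# from a hand-maintained list.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException


def find_with_fallback(driver, candidates):
    """Try each (by, value) locator in order and return the first match."""
    for by, value in candidates:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue  # this locator broke, e.g. an id changed in the last release
    raise NoSuchElementException(f"No candidate locator matched: {candidates}")


driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL

# A traditional script fails as soon as its single locator changes;
# here the step "heals" by falling back to alternative locators.
submit = find_with_fallback(driver, [
    (By.ID, "submit-btn"),
    (By.CSS_SELECTOR, "button[type='submit']"),
    (By.XPATH, "//button[normalize-space()='Log in']"),
])
submit.click()
driver.quit()
```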

But shifting everything to the DevOps mentality of automation is not an overnight process, and in some cases the ideal delivery story won’t even apply to every company or every project, according to Marcus Merrell, senior director of Technology Strategy at Sauce Labs.

“Not all systems can do true DevOps,” said Gareth Smith, general manager of Keysight Technologies. “If I am building a retail website, and it just requires a simple thing, then that’s fine. But if I’m rolling out something that needs to work with various IoT connectors, then not all platforms are able to automate all that.”

QA brings all hands on deck for testing

Testing in DevOps has also elevated the importance of QA teams in handling the responsibilities of testing.

Quality engineering is being elevated because the C-level sees it as a key enabler. While developers used to throw things over the wall to QA, they’re now bringing QA into the conversation, and the industry is seeing much more collaborative DevOps teams, where quality is a shared responsibility among developers, QA, and even product owners, according to Dan Belcher, co-founder of mabl.

The interweaving of the maintenance and automation aspects of testing with the speed of DevOps has led to the new term QAOps. 

“Much in the same way that we would think of shifting left as looking at those defects early on because they are then cheaper to fix, now it’s a much greater level of having the whole structure of QA early on and throughout the DevOps cycle,” Belcher said. 

Belcher added that now the CTOs are driving the transformations. “Now it’s a mandate coming from the C-level, to make investments in quality engineering to enable these transformations, whether it’s digital, or DevOps, or UX.”

While many large organizations keep a central QA department, we’re seeing more and more of a shift to automation developers and manual testers being assigned to individual Squads, with a Center of Excellence to support the tools. This allows testers to remain focused on business needs and not worry so much about test infrastructure or tooling, according to Sauce Labs’ Merrell.

While there are still people in organizations whose job titles make them responsible for testing, it has also become much more of an all-hands-on-deck approach in DevOps. 

In leading organizations, software quality has become everyone’s responsibility and has expanded beyond “does it work?” to “is it the best customer experience?” Developers are increasingly involved, as well as others such as UI/UX designers and domain experts, to ensure the digital experience is not only working but is delivering on the goals of the business, according to Applitools’ Sever. 

“This approach of having all hands on deck is beneficial because with the fast feedback cycles of DevOps, it’s much easier for a developer to understand the impact of a change that they’ve made, possibly before it’s gone through a dedicated QA cycle,” said Chris Haggan, product management lead at HCL OneTest. “If it breaks a regression test in a pack, they can see that instantly, and be in there fixing it really quickly, whereas if you still have those handoff processes, it slows things down and you don’t get that feedback.”
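
To make the feedback loop Haggan describes concrete, here is a minimal, hypothetical regression check written with pytest. The function under test and its expected values are placeholders; the point is that a failing pack returns a non-zero exit code, which is what lets a pipeline stage stop a release before any handoff to a dedicated QA cycle.

```python
# Minimal sketch of a regression check run on every change, assuming pytest.
# The function and values are hypothetical stand-ins for real application code.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Toy function standing in for the application code under test."""
    return round(price * (1 - percent / 100), 2)


def test_discount_regression():
    # A change that alters this behaviour fails the pack immediately,
    # giving the developer feedback before the code moves downstream.
    assert apply_discount(100.0, 15) == 85.0
    assert apply_discount(19.99, 0) == 19.99


if __name__ == "__main__":
    # A non-zero exit code signals the pipeline to halt the release.
    raise SystemExit(pytest.main([__file__, "-q"]))
```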

AI and automation are key components of testing in DevOps

AI-based automation tools are needed to provide insight by ingesting data from a plethora of sources.

“Once you move to automated testing and a more integrated process, it enables you to check on things every step of the way and see whether you’re still on the right track,” said Joachim Herschmann, senior director and analyst on the Application Design and Development team at Gartner. “I can see the direct impact that my development, bug fixing and enhancements have whether they improve or make it worse.”  

The more data that can be fed to the AI, the better the result, because it then includes all the subtle variants and different data from the different sites it is connected to. 

“You can also use it right now to auto-generate the test asset universe, what we refer to as the digital twin,” Keysight’s Smith said. Users of the ‘digital twin’ can define what type of test they want, and the AI will work out the best test scenario for that situation. 

Execution speed can be increased by assigning more resources to the problem, but the key benefit of AI is its ability to learn and improve the tests over time with minimal human intervention, Applitools’ Sever said. 

There are several areas where AI has the potential to help with testing: smart crawling, which is still in its infancy; self-healing, which is already well established and understood; and visual validation.

“For visual validation to be effective, it must be accurate to ensure the team is not overwhelmed with false positives — a problem with the traditional pixel-based approach. It needs to handle dynamic content, shifting elements, responsive designs across different screen sizes and device/browser combinations – as well as provide developers and testers ways to optimize the review and maintenance of regressions,” Sever said. 
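
The pixel-based approach Sever contrasts with AI can be sketched in a few lines. The snippet below assumes the Pillow imaging library and uses hypothetical screenshot paths; because it fails on any pixel change at all, dynamic content, anti-aliasing differences, or a shifted element will flag a difference, which is exactly the false-positive problem described above.

```python
# Minimal sketch of a traditional pixel-based visual check, assuming Pillow.
# Screenshot file names are hypothetical placeholders.
from PIL import Image, ImageChops


def pixels_match(baseline_path: str, current_path: str) -> bool:
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False  # a responsive layout at a new viewport fails immediately
    diff = ImageChops.diff(baseline, current)
    return diff.getbbox() is None  # None means no differing pixels at all


if __name__ == "__main__":
    if not pixels_match("baseline/login.png", "latest/login.png"):
        # Every such hit needs a human review, hence the false-positive overload.
        print("Visual difference detected - screenshots need manual review")
```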

Automation can also help with typically manual-centric types of tests such as UX testing. UX testing still requires manual input because the outcomes of a test are subjective. However, testers don’t need to run the tests manually on every device, because they can watch tests being run on a desktop app and then decide whether the quality is acceptable in an assisted manual testing fashion, mabl’s Belcher explained. 

“A real simple example is if I’m halfway through entering my credit card details, and I talk to somebody, I roll forward my device, my device goes flat, it rotates and then I come back. Now with that accidental rotation of the device and back, does that still work,” Keysight’s Smith said. “And in many cases, that particular use case or between those, between filling in field six and field seven on a form, then you rotate the device; no one will test that particular combination, but those happen in the real world. That’s where AI can help look at those different combinations as you’re going through the usual continuous tests.”
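
Smith’s point is combinatorial: an interruption such as a rotation can occur after any field, and scripting every pairing by hand is impractical. The sketch below simply enumerates those combinations using hypothetical field and interruption names; an AI-assisted tool would go further and prioritize which of them are actually worth executing during continuous testing.

```python
# Minimal sketch of the combinatorial explosion of interruption scenarios.
# Field names and interruption labels are hypothetical placeholders.
from itertools import product

form_fields = ["name", "email", "card_number", "expiry", "cvv", "billing_zip"]
interruptions = [
    "rotate_to_landscape_and_back",
    "background_and_resume",
    "incoming_call",
]

# One scenario = perform the interruption after a given field, then finish the form.
scenarios = [
    {"interrupt_after": field, "interruption": event}
    for field, event in product(form_fields, interruptions)
]

print(f"{len(scenarios)} interruption scenarios for a single checkout form")
for scenario in scenarios[:3]:
    print(scenario)  # e.g. rotate the device right after entering the email
```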

DevSecOps now a top priority 

One of the biggest trends of 2021 is that security became a top priority for testing in the wake of massive breaches that resulted in tremendous costs. 

The Executive Order on cybersecurity that the Biden administration signed in May helped to put security awareness in the spotlight, according to Jeff Williams, the co-founder and CTO of Contrast Security.

“I think it’s a real harbinger of better security for apps in the future that they require a minimum standard for AppSec testing, much-improved visibility into what you’ve done to secure your code, including things like security labels,” Williams said. “I look forward to a day when you can go to your online bank, insurance company, social media, or your election system and if you want to know a little bit about how that software was built, and how it was tested for security, it should be available to you; that should actually be a fundamental right. If you’re trusting your life, or your healthcare, or your finances, or your government to a piece of software, I think you have the right to know a little bit about how it was tested for security.”

However, security isn’t always handled with the utmost care at organizations. A lot of this comes down to a lack of security expertise, according to Williams. 

There’s never enough attention being paid to security, whether in testing or in development. As hard as test and security vendors work to keep up, the bad actors always seem to be one step ahead, aided by the fact that they’ve become every bit as institutionalized as the products they’re subverting, according to Sauce Labs’ Merrell. 

Security testing has traditionally required a lot of expertise to run tools such as SaaS-based or desktop scanners, or even software composition analysis (SCA) scanning tools. 

Because there are not enough security experts, people don’t try to shift that security testing left and distribute it across their projects. Instead, they keep it centralized and do it once toward the end of the development process, in some gate before code reaches production, which is super inefficient, Contrast Security’s Williams added. 

“You can’t just take tools designed for security experts and hand them to developers early in the process and just say ‘Go,’” Williams said. “They’ll end up with tons of false alarms and tons of wasted time, they won’t be able to tailor the tools properly, and they’ll end up really frustrated with security.” 

This has created a need for security tools that are packaged in the right way, and placed in the right part of the process, for developers to use. 
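
As a sketch of what “packaged in the right place” can look like, the snippet below runs an open-source dependency audit as an early pipeline step rather than a late gate. It assumes the pip-audit CLI is installed; its exact flags and exit-code behavior should be verified against the version in use.

```python
# Minimal sketch of a shift-left dependency check, assuming the pip-audit CLI.
# Flag names and exit-code semantics should be confirmed for the installed version.
import subprocess
import sys


def run_dependency_audit(requirements_file: str = "requirements.txt") -> int:
    """Scan declared dependencies for known vulnerabilities and fail the build early."""
    result = subprocess.run(
        ["pip-audit", "-r", requirements_file],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("Known-vulnerable dependencies found - fix them before the release gate.")
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_dependency_audit())
```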

“There still is a role for expert-based pentesting and expert threat modeling and things like that. But they should work at the margin. Instead of trying to do everything with a pen test, including the stuff that your tools already did a great job at, have your pen testers focus on the things that are hard and difficult for tools,” Williams said. 

For example, a pen tester can come in and examine the access control scheme to find ways to bypass access controls, such as gaining admin-level access. That finding can then be used as an opportunity to strengthen the pipeline and build automated tests, Williams added. 
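
A hypothetical version of such an automated test, assuming the requests library and pytest, might look like the following; the endpoint, tokens, and expected status codes are placeholders for a real application’s admin API.

```python
# Minimal sketch of turning a pen-test finding into an automated pipeline test.
# Assumes the requests library and pytest; URL, tokens, and status codes are
# hypothetical placeholders.
import os

import requests

BASE_URL = os.environ.get("APP_BASE_URL", "https://staging.example.com")
USER_TOKEN = os.environ.get("NON_ADMIN_TOKEN", "placeholder-user-token")


def test_non_admin_cannot_reach_admin_endpoint():
    # If a change ever reopens the bypass the pen tester found, this fails the build.
    response = requests.get(
        f"{BASE_URL}/api/admin/users",
        headers={"Authorization": f"Bearer {USER_TOKEN}"},
        timeout=10,
    )
    assert response.status_code in (401, 403), (
        "Non-admin token reached an admin-only endpoint"
    )
```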

Evolving testing in DevOps is primarily a people process

Although tooling is necessary, testing in DevOps is also about a mindset shift on the part of the people in an organization and about making the process easier. After all, people will still have a major part to play in testing for the foreseeable future. 

Organizations are showing a strong preference for low-code test automation solutions over script-based ones. They are also looking for unified quality engineering platforms rather than best-of-breed point solutions for various aspects of testing, according to mabl’s Belcher. 

Although AI is being applied to a growing number of use cases as part of testing in DevOps, some experts agree that there will always be humans in the loop and that the purpose of those underlying frameworks is to supercharge those people. 

The next leap in the field is going to be autonomous testing, where the team will steer the AI at a high level, review whether the AI did the right thing, and then spend most of their time focused on more strategic work, such as the usability of the application, according to Sever. 

“AI is still an emerging technology, and its role in testing is evolving constantly. The most visible type of AI tooling we see is around AI-assisted automated test creation,” Merrell said. “These tools, while extremely useful, are still no substitute for the human mind of a tester, nor do they take the place of a skilled test automation developer.”