DevOps and CI/CD practices are maturing as organizations continue to shrink application delivery cycles. A common obstacle to meeting time-to-market goals is testing, either because it has not yet been integrated throughout the SDLC or because certain types of testing, such as performance testing and security testing, are still being done late in the cycle.
Forrester Research VP and principal analyst Diego Lo Giudice estimates that only 20% to 25% of organizations are doing continuous testing (CT) at this time, and even their teams may not have attained the level of automation they want.
“I have very large U.S. organizations saying, ‘We’re doing continuous delivery, we’ve automated unit testing, we’ve automated functional testing, we shifted those parts of the testing to the left, but we can’t leave performance testing to the end because it breaks the cycle,’” said Lo Giudice.
The entire point of shifting left is to minimize the number of bugs that flow through to QA and production. However, achieving that is not just a matter of developers doing more types of tests. It’s also about benefiting from testers’ expertise throughout the life cycle.
“The old way of doing QA is broken and ineffective. They simply focus on quality control, which is just detecting bugs after they’ve already been written. That’s not good enough and it’s too late. You must focus on preventing defects,” said Tim Harrison, VP of QA Services at software quality assurance consultancy SQA². “QA 2.0 extends beyond quality control and into seven other areas: requirements quality, design quality, code quality, process quality, infrastructure quality, domain knowledge and resource management.”
What’s holding companies back
Achieving CT is a matter of people, processes and technology. While some teams developing new applications have the benefit of baking CT in from the beginning, teams in a state of transition may struggle with change management issues.
“Unfortunately, a lot of organizations that hire their QA directly don’t invest in them. Whatever experience and skills they’re gaining is whatever they happen to come across in the regular course of business,” said SQA2’s Harrison.
Companies tend to invest more heavily in development talent and training than testing. Yet, application quality is also a competitive issue.
“Testing has to become more of the stewardship that involves broader accountability and broader responsibility, so it’s not just the testers or the quality center, or the test center, but also a goal in the teams,” said Forrester’s Lo Giudice.
Also holding companies back are legacy systems and their associated technical debt.
“If you’ve got a legacy application and let’s say there are 100 or more test cases that you run on that application, just in terms of doing regression testing, you’ve got to take all those test cases, automate them and then as you do future releases, you need to build the test cases for the new functionality or enhancements,” said Alan Zucker, founding principal of project management consultancy Project Management Essentials. “If the test cases that you wrote for the prior version of the application now are changed because we’ve modified something, you need to keep that stuff current.”
Perhaps the biggest obstacle to achieving CT is the unwillingness of some team members to adapt to change because they’re comfortable with the status quo. However, as Forrester’s Lo Giudice and some of his colleagues warn in a recent report, “Traditional software testing has no place in modern app delivery.”
Deliver value faster to customers
CT accelerates software delivery because code is no longer bouncing back and forth between developers and testers. Instead, team members are working together to facilitate faster processes by eliminating traditional cross-functional friction and automating more of the pipeline.
Manish Mathuria, founder and COO of digital engineering services company Infostretch, said that engineering teams benefit from instant feedback on code and functional quality, greater productivity and higher velocity, metrics that measure team and deployment effectiveness, and increased confidence about application quality at any point in time.
The faster internal cycles coupled with a relentless software quality focus translate to faster and greater value delivery to customers.
“We think QA should be embedded with a team, being part of the ceremony for Agile and Scrum, being part of planning, asking questions and getting clarification,” said SQA2’s Harrison. “It’s critical for QA to be involved from the beginning and providing that valuable feedback because it prevents bugs down the line.”
Automation plays a bigger role
Testing teams have been automating tests for decades, but the digital era requires even more automation to ensure faster release cycles without sacrificing application quality.
“It takes time to invest in it, but [automation] reduces costs because as you go through the various cycles, being promoted from dev to QA to staging to prod, rather than having to run those regression cycles manually, which can be very expensive, you can invest in some man-hours in automation and then just run the automation scripts,” said SQA2’s Harrison. “It’s definitely super valuable not just for the immediate cycle but for down the road. You have to know that a feature doesn’t just work well now but also in the future as you change other areas of functionality.”
However, one cannot just “set and forget” test automation, especially given the dynamic nature of modern applications. Quite often, organizations find that pass rates degrade over time, and if corrective action isn’t taken, the pass rate eventually becomes unacceptable.
To avoid that, SQA2 has a process it calls “behavior-based testing,” or BBT, which is similar to behavior-driven development (BDD) but focused on quality assurance. It’s a way of developing test cases that ensures comprehensive quantitative coverage of requirements. If a requirement is included in a Gherkin-type test base, the different permutations of test cases can be extrapolated out. For example, to test a log-in form, one must test for combinations of valid and invalid username, valid and invalid password, and user submissions of valid and/or invalid data.
“Once you have this set up, you’re able to have a living document of test cases and this enables you to be very quick and Agile as things change in the application,” said SQA2’s Harrison. “This also then leads to automation because you can draw up automation directly from these contexts, events, and outcomes.”
If something needed to be added to the fictional log-in form mentioned above, one could simply add another context within the given statement and then write a small code snippet that automates that portion. All the test cases in automation get updated with the new addition, which simplifies automation maintenance.
“QA is not falling behind because they’re actually able to keep up with the pace of development and provide that automation on a continuous basis while keeping the pass rates high,” said Harrison.
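The permutation idea behind BBT can be sketched in a few lines of Python. This is a minimal illustration, not SQA2’s actual tooling: the field names and scenario wording are hypothetical, and the point is simply that enumerating valid/invalid states per field extrapolates every test-case combination, so adding a new context automatically expands the suite.

```python
from itertools import product

# Hypothetical contexts for the log-in form: each field has valid/invalid states.
CONTEXTS = {
    "username": ["valid", "invalid"],
    "password": ["valid", "invalid"],
}

def expand_test_cases(contexts):
    """Extrapolate every permutation of field states into a Gherkin-style scenario."""
    fields = sorted(contexts)
    cases = []
    for combo in product(*(contexts[f] for f in fields)):
        given = " and ".join(f"a {state} {field}" for field, state in zip(fields, combo))
        # Only the all-valid permutation should succeed; every other one is a negative test.
        outcome = "log-in succeeds" if all(s == "valid" for s in combo) else "an error is shown"
        cases.append(f"Given {given}, When the user submits, Then {outcome}")
    return cases

for case in expand_test_cases(CONTEXTS):
    print(case)
```

Adding another context, say `CONTEXTS["captcha"] = ["valid", "invalid"]`, doubles the generated suite without touching the existing cases, which mirrors the low-maintenance automation described above.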
Service virtualization saves time
Service virtualization is another speed enhancer because one no longer waits for resources to be provisioned or competes with other teams for access to resources. One can simply mock up what’s needed in a service virtualization tool.
“I remember working on a critical application one time where everything had gone great in test and then when we moved the application changes to prod, things ground to a halt because the configurations in the upper and lower environment differed,” said Project Management Essentials’ Zucker. “With service virtualization that goes away.”
Within the context of CT, service virtualization can kick off automatically, triggered by a developer pushing a feature out to a branch.
“If you’re doing some integration testing on a feature and you change something in the API, you’re able to know that a new bug is affected by the feature change that was submitted. It makes testing both faster and more reliable,” said SQA2’s Harrison. “You’re able to pinpoint where the problems are, understand they are affected by the new feature, and be able to give that feedback to developers much quicker.”
Infostretch’s Mathuria considers service virtualization a “key requirement.”
“Service virtualization plays a key role in eliminating the direct dependency and helps the team members move forward with their tasks,” said Mathuria. “Software automation engineers start the process of automation of the application by mocking the back-end systems whether UI, API, end points or database interaction. Service virtualization also automates some of the edge scenarios.”
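A stripped-down version of the mocking Mathuria describes can be shown with Python’s standard-library `unittest.mock`. The service and function names here are hypothetical; real service-virtualization tools simulate whole protocols, but the principle is the same: the team tests against a stand-in back end, including edge scenarios like an out-of-stock response, without waiting for the real system.

```python
from unittest.mock import Mock

# Hypothetical code under test: it depends on a back-end inventory service.
def items_in_stock(inventory_service, sku):
    response = inventory_service.get_stock(sku)  # a real call would hit the network
    return response["quantity"] > 0

# Virtualize the back end: no environment to provision, no contention with other teams.
virtual_service = Mock()
# Simulate an edge scenario the real system rarely produces on demand.
virtual_service.get_stock.return_value = {"sku": "A-100", "quantity": 0}

print(items_in_stock(virtual_service, "A-100"))  # False: out-of-stock path exercised
```

Because the mock is created in code, it can be wired into a pipeline and kick off automatically when a developer pushes a feature branch, as described above.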
AI and machine learning are the future
Vendors have already started embedding AI and machine learning into their products to facilitate more effective continuous testing and to further accelerate application delivery cycles. The greatest value comes from pattern recognition that pinpoints problem areas and provides recommendations for improving testing effectiveness and efficiency.
For example, Infostretch’s Mathuria has observed that AI and machine learning help with test optimization, recommendations on reusability of the code base and test execution analysis.
“As the test suites are increasing day by day, it is important to achieve the right level of coverage with a minimum regression suite, so it’s very critical to ensure that there are no redundant test scenarios,” said Mathuria of test optimization.
Since test execution produces a large set of log files, AI and machine learning can be used to analyze them and make sense out of the different logs. Mathuria said this helps with error categorization, setup and configuration issues, recommendations and deducing any specific patterns.
SQA2’s Harrison has been impressed with webpage structure analysis capabilities that learn a website and can distinguish a breaking change from an intended one. However, he warned that if XPaths have been used, such as to refer to a button that has just moved, the tool may automatically update the automation based on the change, creating more brittle XPaths than were intended.
The use cases for AI and machine learning are virtually limitless, but they are not a wholesale replacement for quality control personnel. They’re “assistive” capabilities that help minimize speed-quality tradeoffs.