Continuous integration (CI), continuous testing (CT) and continuous delivery (CD) should go hand-in-hand, but CT is still missing from the CI/CD workflow in most organizations. As a result, software teams eventually hit an impasse when they try to accelerate release cycles further with CI/CD. What they need, from both a mindset and a process standpoint, is a continuous, end-to-end workflow that includes CT.
While CT requires test automation to meet time-to-market mandates, the two are not synonymous. A common misconception is that CT means automating every test, which isn’t necessarily practical or prudent. Instead, the decision to automate tests should be weighed from several perspectives, including time and cost savings.
How to set up continuous testing
Like CI, CD, DevOps and Agile, CT exists to accelerate the release of quality software. To enable a continuous end-to-end workflow, teams should understand how CT fits into the CI/CD pipeline and how it can drive higher levels of efficiency and effectiveness.
“The key thing is prioritizing,” said Mush Honda, VP of testing at software development, testing services and consulting company KMS Technology. “If you are in a state where you don’t have a live system, it’s easier to go into a mindset of automation first. I still believe not everything can be automated in most cases. For those things that you are trying to migrate off of manual testing and add a component of automated testing with a system that’s already live or near going live, I would attack it with business priorities in mind.”
Automated testing should occur often enough to avoid system disruption and ensure that business-critical functionality is not adversely impacted. To prioritize test automation, consider the business severity of defects, manual tests that take a lot of time to set up, and whether the tests that have already been automated still make sense. Also, make a point of understanding what the definition of CT is in your organization so you can set goals accordingly.
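To make that prioritization concrete, here is a minimal Python sketch of one way to rank automation candidates along the lines Honda describes, weighing the business severity of the defects a test guards against and the manual effort it replaces. The candidate tests, scoring formula and weights are all invented for illustration.

```python
# Hypothetical ranking of which manual tests to automate first.
# Severity reflects the business impact of defects the test catches;
# manual minutes is the effort the automation would save per run.
candidates = [
    # (test name, defect severity 1-5, manual minutes per run)
    ("checkout payment flow", 5, 45),
    ("policy premium calculation", 5, 30),
    ("profile photo upload", 2, 10),
    ("marketing banner layout", 1, 5),
]

def automation_score(severity: int, manual_minutes: int) -> float:
    """Higher score = stronger automation candidate (illustrative weights)."""
    return severity * 2 + manual_minutes / 10

for name, severity, minutes in sorted(
    candidates, key=lambda c: -automation_score(c[1], c[2])
):
    print(f"{automation_score(severity, minutes):5.1f}  {name}")
```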
“You need to understand what you’re going to achieve [by] doing CT in measurable terms and how that translates to your application or software project,” said Manish Mathuria, CTO and co-founder of digital strategy and services company Infostretch. “Beyond that, then it depends on having the right strategy for automation. Automation is key, so [you need] good buy-in on what layers you’re going to automate, the quality gates you’re going to put on each of these types of tests for static analysis, what you are going to stop at for unit tests, what kind of pass rates you’re going to achieve. It goes upstream from there.”
Each type of automated test should be well-planned. The rest is engineering, and the hard part may be getting everyone to buy into the continuous testing process.
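As a concrete illustration of the quality gates Mathuria describes, below is a minimal sketch of a gate script a CI pipeline might run after the test stages complete. The report format, gate names and threshold values are assumptions for illustration, not a standard.

```python
# Minimal CI quality-gate sketch: fail the pipeline when pass rates or
# static-analysis results miss agreed thresholds (values are invented).
import json
import sys

GATES = {
    "unit_pass_rate": 1.00,         # unit tests must fully pass
    "integration_pass_rate": 0.98,  # allow a small quarantine margin
    "max_static_analysis_errors": 0,
}

def check_gates(report_path: str) -> bool:
    """Return True only if the build report satisfies every gate."""
    with open(report_path) as f:
        report = json.load(f)  # e.g. {"unit_pass_rate": 0.99, ...}

    ok = True
    for gate, threshold in GATES.items():
        value = report.get(gate)
        if value is None:
            print(f"MISSING: no value reported for {gate}")
            ok = False
        elif gate.startswith("max_") and value > threshold:
            print(f"FAIL: {gate} = {value} (limit {threshold})")
            ok = False
        elif not gate.startswith("max_") and value < threshold:
            print(f"FAIL: {gate} = {value:.2%} (minimum {threshold:.0%})")
            ok = False
    return ok

if __name__ == "__main__":
    # A CI step might run: python quality_gate.py build_report.json
    sys.exit(0 if check_gates(sys.argv[1]) else 1)
```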
“Continuous testing is designed to mature within your CI/CD process,” said Nancy Kastl, executive director of testing at digital transformation agency SPR. “Without having testing as part of the build, integrate, deploy [process], all you’re doing is deploying potentially bad code quicker.”
The CT process spans from development to deployment, including the following (a short pytest sketch of two of these layers follows the list):
- Unit tests that ensure a piece of functionality works the way it is intended to work
- Integration tests that verify the pieces of code enabling a piece of functionality work together as intended
- Regression testing to ensure the new code doesn’t break what exists
- API testing to ensure that APIs meet expectations
- End-to-end tests that verify complete user workflows
- Performance tests that ensure the code meets performance criteria
- Security testing to identify vulnerabilities
- Logging and monitoring to pinpoint errors occurring in production
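To ground two of those layers, here is a minimal pytest sketch of a unit test and an API test; the `cart` module, `apply_discount` function and staging URL are hypothetical stand-ins.

```python
# Sketch of two CT layers in pytest. The unit under test and the
# service URL are hypothetical.
import pytest
import requests

from cart import apply_discount  # hypothetical module under test

def test_apply_discount_unit():
    # Unit test: one piece of functionality works as intended.
    assert apply_discount(total=100.0, percent=10) == 90.0

@pytest.mark.integration  # custom marker, registered in pytest.ini
def test_cart_api_contract():
    # API test: the endpoint meets its contract (status and shape).
    resp = requests.get("https://staging.example.com/api/cart/42")
    assert resp.status_code == 200
    assert {"id", "items", "total"} <= resp.json().keys()
```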
Implementing CT may require adjusting internal testing processes to achieve the stated goals. For example, Lincoln Financial developers used to follow a waterfall methodology in which a developer met with a business user or analyst to understand requirements, wrote the code, and then sent it off for testing. The company now practices Test-Driven Development (TDD), which reverses the traditional order of development and testing: test scripts are written and automated from a user story before any code is written. In addition, acceptance testers have been embedded in development.
“When the code passes the test, you know you’ve achieved the outcome of your user story,” said Michelle DeCarlo, senior VP of technology engineering and enterprise delivery practices at Lincoln Financial.
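In code, that flow looks roughly like the sketch below: the test is derived from the user story and written first, and the implementation exists only to make it pass. The user story and function names are illustrative, not Lincoln Financial’s actual code.

```python
# TDD sketch: the test comes from the user story and is written before
# the code it exercises. All names here are invented for illustration.

# Step 1: a test written from the user story
# "As a policyholder, I can see my monthly premium to the nearest cent."
def test_monthly_premium_rounds_to_cents():
    assert monthly_premium(annual=1234.567) == 102.88

# Step 2: the implementation, written only after the test exists.
def monthly_premium(annual: float) -> float:
    return round(annual / 12, 2)
```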
Managing change
When code changes, the associated tests may fail. According to SPR’s Kastl, that outcome should not happen in a CT process since developers and testers should be working together from day one.
“Communication and collaboration are really key when it comes to managing changes,” said Kastl. “As part of Agile methods, your team includes software engineers and test engineers, and the test engineers need to know equally what is being changed by the software engineers and then make the changes at the same time your software engineers are changing the application.”
To improve testing efficiency, Lincoln Financial uses tools to isolate software changes and has quality checks built into its process. The quality checks are performed by different types of resources to lessen the likelihood that a change goes unnoticed.
“We try to isolate when an asset changes [so we can] make sure that we’re testing for those changes. Quite frankly, nothing is foolproof,” said Lincoln Financial’s DeCarlo. “After we’ve released to production, we also do sampling and examine the code as it works in production.”
While it’s probably safe to say no organization has achieved a zero-defect rate in production, Lincoln Financial tries to minimize issues by performing different types of scans and by listening to customer feedback across channels such as social media, so that feedback can be integrated into the delivery stream.
Generally speaking, it’s important to understand what these software changes impact so the relevant tests can be adjusted accordingly. If a traditional automation script fails, the defect may be traceable back to the build process. If that’s the case, one can determine what has changed and what specific code caused the failure. Nevertheless, it’s also important to have confidence in the test scripts themselves.
“If you don’t have high confidence in the scripts that are traditionally run, that sort of spirals into the question of what you should do next,” said KMS Technology’s Honda. “You don’t know whether it was a problem with the way the script was written or the data it was using, or if it was genuinely a point of failure. Being able to have high confidence in the script I created is what becomes a key component of how I know something did go wrong with the system.”
Issue-tracking tools like Jira help because they provide traceability from the user story onward. Without that, it’s harder to pinpoint exactly what went wrong.
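One lightweight way to build that traceability into the tests themselves is to tag each automated test with the user story it covers, as in this hypothetical pytest sketch; the `jira` marker and issue keys are invented, and a team’s own tooling would register and report them.

```python
# Sketch: link each test to its Jira story so a failure traces straight
# back to a requirement. Marker name and issue keys are hypothetical.
# pytest.ini would register it: markers = jira(key): linked user story
import pytest

@pytest.mark.jira("SHOP-142")  # user story: guest checkout
def test_guest_checkout_creates_order():
    ...  # placeholder for the real workflow assertions

@pytest.mark.jira("SHOP-187")  # user story: saved payment methods
def test_saved_card_is_preselected():
    ...  # placeholder for the real workflow assertions
```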
Some tools now use AI to enable model-driven testing. Specifically, the AI instance analyzes application code and then automatically generates automated tests. Such tools also draw on data from other systems to understand what happens in the software development process, where defects have arisen and why tests have failed. Based on that information, the AI can predict the risks and impacts of defects module by module.
“Model-based testing is essentially about not writing tests by a human being. What you do is create the model and let the model create tests for you, so when a change happens you are changing something a lot more upstream versus changing the underlying test cases,” said KMS Technology’s Honda. “Likewise, when a test is written and automated [by the AI instance], if certain GUI widgets change or my user interaction changes, since I did not automate the test in the first place, my AI-driven program would automatically try to define automated tests based on the manual test case. Predictive QA is more resilient to change, [which is important because] brittleness is the biggest challenge for continuous testing and automation.”
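Here is a toy sketch of that inversion: the workflow lives in a model, and the test cases are generated from it, so a flow change means updating the model rather than every hand-written test. The login model below is invented for illustration; real model-based tools are far more sophisticated, but the resilience Honda describes comes from the same idea.

```python
# Toy model-based testing sketch: generate test paths from a workflow
# model instead of hand-writing each case. The model is invented.
TRANSITIONS = {
    "start": ["login_page"],
    "login_page": ["logged_in", "login_failed"],
    "login_failed": ["login_page"],
    "logged_in": ["logged_out"],
}

def generate_paths(state="start", path=None, depth=5):
    """Yield every workflow path through the model, up to `depth` steps."""
    path = (path or []) + [state]
    if len(path) > depth or state not in TRANSITIONS:
        yield path
        return
    for nxt in TRANSITIONS[state]:
        yield from generate_paths(nxt, path, depth)

# Each generated path becomes one test case to drive against the UI/API.
for case in generate_paths():
    print(" -> ".join(case))
```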
How to tell if your CT effort is succeeding
The general mandate is to get to market faster with higher quality software. CT is a means of doing that. In terms of speed, tests should be running within the timeframe necessary to keep pace with the CI/CD process, which tends to mean minutes or hours versus days. In terms of quality, CT identifies defects earlier in the life cycle, which minimizes the number of issues that make it into production.
Another measure of CT success is a cultural one in which developers change their definition of “done” from the delivery of code to the delivery of tested code.
“You need the cultural belief that developers can’t say something is done until it’s been tested. Another key success indicator is when all your testing is completed in the same Agile sprint,” said SPR’s Kastl. “It’s not saying ‘I’m going to do some testing in the sprint based on the amount of time I have so I’m going to automate regression in the next sprint.’ You should not be a sprint behind. The way to make sure in-sprint testing is being done as part of a CT process is developers are merging their code and it’s ready to test on an hourly or daily basis, so testers can do their work.”
For Infostretch’s Mathuria, the high-level indicator of CT success is data proving that a build or release has been certified in an automated way. A lower-level indicator is that promotion decisions are no longer made against arbitrary pre-defined thresholds, such as declaring that this much functional testing or that much security testing is enough. Instead, what qualifies as “enough” is determined by the CT process the organization has established.
“Only exceptions are managed by people and not the base level workflow,” said Mathuria. “Once you achieve that then you see the right value from continuous testing.”
And don’t forget metrics, because success needs to be measured. If speed is the goal, what kind of speed improvement are you trying to achieve? Define that and work backward to figure out what’s necessary not only to meet the delivery target but also to be confident that the release is of acceptable quality.
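Working backward can be as simple as budgeting the pipeline. This rough sketch, with invented stage timings, compares measured stage durations against a delivery target to show where trimming or parallelizing tests buys the most speed.

```python
# Hypothetical pipeline time budget: where does testing time go, and
# does the total fit the delivery target? All numbers are invented.
TARGET_PIPELINE_MINUTES = 30  # goal: commit-to-deployable in 30 minutes

stage_minutes = {  # measured averages from recent builds (illustrative)
    "build": 4,
    "unit": 3,
    "integration": 9,
    "end_to_end": 16,
    "security_scan": 6,
}

total = sum(stage_minutes.values())
print(f"current: {total} min vs. target: {TARGET_PIPELINE_MINUTES} min")
for stage, minutes in sorted(stage_minutes.items(), key=lambda s: -s[1]):
    print(f"{stage:>14}: {minutes:2d} min ({minutes / total:.0%})")
# The biggest shares are the best places to parallelize or prune.
```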
“You also need to think about skillsets. Are they able to adopt the tools necessary or not? Do they understand automation or not? Do they understand the continuous testing strategy or not?” said Honda. “If you want to get to continuous anything, there has to be a timeline and a goal you have to measure up against, which ultimately defines whether you’re successful, not successful or facing roadblocks.”