Continuous testing (CT) requires automated testing to help speed the CI/CD process. The trick is to keep expediting the delivery of code in an era when software change is not only constant, but occurring at an ever-accelerating rate.
In today’s competitive business environment, customers are won and lost based on code quality and the value the software provides. Rather than packing applications with a dizzying number of features (a mere fraction of which users actually use), the model has shifted to continuous improvement. That requires a much better real-time understanding of customer expectations, an unprecedented level of agility, and a means of ensuring software quality practices remain both time-efficient and cost-effective despite constant software change.
“You used to automate everything thinking you’re going to get an overall lift,” said Michelle DeCarlo, senior VP, Technology Engineering, Enterprise Delivery Practice at Lincoln Financial. “While that was true, there’s also a maintenance cost that can break you because there’s the cost of keeping things current. Now [we have] a lot more precision upfront in the cycle to identify where we should automate and where we’re going to get that return.”
Rather than simply automating more tests because that seems to facilitate a shift to CT, it’s wise to have a testing strategy that prioritizes tests and distinguishes between tests that should and should not be automated, based on time savings and cost efficiency.
“Before people were implementing DevOps, [they] used to say if you needed a stable application you should have one round of manual testing before you could venture into automation. Once people started implementing DevOps, testing had to happen with development,” said Vishnu Nallani Chekravarthula, VP and head of innovation at software development and quality assurance consultancy Qentelli. “One of the approaches that we have found to be successful is writing the tests before you write the code, and then write the code to ensure that the tests pass.”
While test-driven development (TDD) isn’t a new concept, it has become a common practice among organizations that have adopted CT. Whether TDD enables CT or the other way around depends on the unique starting point of an organization. With TDD and CT, automation isn’t an afterthought; it’s top of mind from the earliest stages of a project.
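As a minimal sketch of the test-first approach Nallani Chekravarthula describes (the function name and business rule here are hypothetical, purely for illustration): the test is written first and specifies the expected behavior, then just enough code is written to make it pass.

```python
# Step 1: write the test before any implementation exists.
# At this point, running it fails with a NameError; that failing
# test is the specification the code must satisfy.
def test_apply_discount():
    assert apply_discount(price=100.0, percent=10) == 90.0
    assert apply_discount(price=50.0, percent=0) == 50.0

# Step 2: write just enough code to make the test pass.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return price * (1 - percent / 100)

# Step 3: run the test; it now passes, and it stays in the suite
# to guard against regressions as the code continues to change.
test_apply_discount()
```

In practice the test would live in a suite run by a framework such as pytest on every commit, which is what ties the test-first habit into a continuous testing pipeline.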
Adapting to constant change
While applications have always been subject to change, change is no longer an event; it’s a constant. That’s why more organizations are going to market with a minimum viable product (MVP) and improving or enhancing it over time based on customer behavior and feedback. Development and delivery practices have had to evolve with the times, and so must testing. Specifically, testing cycles must keep pace with CI/CD without increasing the risk of software failures. In addition, testers have to be involved in, and have visibility into, everything from the development of a user story to production.
“You’re always able to analyze and do a sort of an impact analysis from user stories [so if] we change these areas or these features are changing, [you can come up with a] list of tests that we typically no longer need, that would have to be abated to reflect the new feature set,” said Mush Honda, VP of testing at software development, testing services and consulting firm KMS Technology. “So, it follows that the involvement and the engagement of the tester as part of the bigger team definitely needs to be a core component.”
While it’s always been a tester’s responsibility to understand the features and functionality of the software they’re testing, they now have to understand what’s being built earlier in the life cycle, so tests can be written before the code, or in parallel with it. That earlier-stage involvement saves time because testers have insight into what’s being built, the code can better align with the user story, and it’s more apparent what should be automated to drive a better return on investment.
Be careful what you automate
A common mistake is to focus automation efforts on the UI. The problem is that the UI tends to change more often than the back end, which makes those automated tests brittle. The frequency of UI change tends to be driven by the business, because when stakeholders see what’s been built, they often realize they’d prefer a change to a UI element, such as the location of a button.
“If you have a car and if the car breaks down, you don’t just look at the steering wheel and dashboard, so unless you have tests and sensors at individual parts of the car, you can’t really tell why it has broken down. The same idea applies to software testing,” said Manish Mathuria, CTO and co-founder of digital strategy and services company Infostretch. “In order to write tests that are less brittle, you have to test it from the bottom up and then as things change, you have to change tests at the individual layers.”
Using a layered approach to testing, errors can be identified and addressed where they actually reside, which may not be at the UI level. A layered approach to testing also helps shift mindsets away from overreliance on automated UI tests.
“If you’re in a situation where you have the type of application that is driven by a lot of business needs and a lot is changing, from a technical perspective, you don’t want to automate at the UI level,” said Nancy Kastl, executive director of testing at digital transformation agency SPR. “Instead you want to automate at the unit level or the API services level.”
“Think of applying for a bank loan. In the loan origination process you go through screen after screen. [As a tester,] you don’t want to involve the whole workflow because if something changes in one part, your tests are going to have to change throughout,” said Kastl.
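One way to picture testing below the UI, as Kastl suggests, is to exercise the business rules directly rather than driving the screens. The loan-eligibility function and its rules below are hypothetical, standing in for whatever service layer the loan-origination screens would call:

```python
# A hypothetical service-layer function that the loan UI would call.
# Testing it directly means UI changes (moved buttons, reordered
# screens) don't break these tests.
def is_eligible_for_loan(annual_income: float, credit_score: int,
                         requested_amount: float) -> bool:
    """Apply simple, illustrative eligibility rules."""
    if credit_score < 600:
        return False
    # Illustrative rule: the loan may not exceed half of annual income.
    return requested_amount <= annual_income * 0.5

# Unit/API-level tests exercise the business rules, not the workflow
# of screens, so a redesigned loan wizard leaves them untouched.
assert is_eligible_for_loan(80_000, 720, 30_000) is True
assert is_eligible_for_loan(80_000, 550, 10_000) is False
assert is_eligible_for_loan(40_000, 700, 30_000) is False
```

In a real system the same idea applies to an HTTP API: tests post to the loan endpoint and assert on the response, bypassing the screen-after-screen workflow entirely.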
The concept is akin to building microservices applications that use small, self-contained pieces of code versus a long string of code. Like microservices, the small, automated tests can be assembled into a string of tests, yet a change to one small test does not necessarily require changes to all other tests in the string.
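A sketch of that assembly idea (all the step names and the session structure here are illustrative): each small test covers one step, and the end-to-end test simply chains them, so when one step changes, only one function needs editing.

```python
# Each small, self-contained check covers exactly one workflow step.
def check_login(session):
    session["logged_in"] = True
    assert session["logged_in"]

def check_add_to_cart(session):
    session.setdefault("cart", []).append("item-42")
    assert "item-42" in session["cart"]

def check_checkout(session):
    assert session["logged_in"] and session["cart"]
    session["order_placed"] = True

# The end-to-end test is just an assembly of the small steps.
# If the login flow changes, only check_login needs updating;
# the rest of the string is unaffected.
def test_purchase_workflow():
    session = {}
    for step in (check_login, check_add_to_cart, check_checkout):
        step(session)
    assert session["order_placed"]

test_purchase_workflow()
```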
“We need to think like programmers because if something changes, I’ve got one script to change and everything else fits together,” said Neil Price-Jones, president of software testing and quality assurance consultancy NVP Testing.
However, test automation can only do so much. If change is the norm because of ad hoc development practices that weren’t aligned with the business’s expectations in the first place, then test automation will never work, according to SPR’s Kastl. Fix the way you develop software first; then you’ll be able to make test automation work.