Almost exactly one year ago, Forrester confidently predicted that 2018 would be “the year of Enterprise DevOps.” The blog, authored by the late Robert Stroud, began:

DevOps has reached “Escape Velocity.” The questions and discussions with clients have shifted from “What is DevOps?” to “How do I implement at scale?”

Continuous testing is not far behind. In early 2014, SD Times proclaimed “Forget ‘Continuous Integration’—the buzzword is now ‘Continuous Testing’” (in the very first article in the publication’s Continuous Testing category).  At the time, the concept of continuous testing seemed about as far-fetched as a Silicon Valley snowstorm to most testers in enterprise organizations—where pockets of DevOps were just surfacing among teams working on “systems of engagement.”

But since 2014, the world has changed. As Forrester predicted, the vast majority of enterprise organizations are now actively practicing and scaling DevOps. And the larger focus on Digital Disruption means that it’s now impacting all IT-related operations, including systems of record as well as systems of engagement.

When ExxonMobil QA manager Ann Lewis so memorably asked, “Is it all just a bunch of hype? Really?” at the Accelerate 2018 Continuous Testing conference, the clear consensus was a resounding “no.” Digital transformation, DevOps and continuous testing have gotten real for the conference attendees, largely composed of QA leaders across Global 2000 organizations. So real, in fact, that their employers cleared their schedules for a week and sent them to Vienna to learn what’s really needed to achieve Continuous Testing for DevOps…in an enterprise environment.    

Here are some of the key lessons learned—shared by leading testing professionals who have already made continuous testing for DevOps a reality in their own organizations:

“Test data is a pain in the ass”
Renee Tillet, Manager of DevOps Verify at Duke Energy, offered her perspective on one of the most underestimated pains of Continuous Testing: Test Data Management. Renee asserted:

“If you’re doing test automation, what’s the biggest pain in your ass? It’s test data. We would be in the middle of our sprint—the developers are done, the testers are getting ready to test, and guess what? The tester has no test data. Not only does he not have test data, but he doesn’t have time to go create that test data now. It’s too late.

By the time you get to that user story, your definition of ready should include not just what the developer needs, but also the test data you need to verify it. The test plan needs to be ready, and the data needs to be in the environment—or we don’t accept that story into the sprint.

Initially, we would create parameterized test cases, we’d put data in them, and they would run in the Dev environment. But then we’d try to run them over in the test environment, which was the next higher environment, and they would fail because the data was different. So, we came up with a data strategy that allowed us to use the same test data in all the environments.”

Number of test cases: Less is more
Numerous experts shared that a high number of test cases is no longer something to be worn as a badge of honor. It doesn’t help provide the fast feedback that the team expects.

Andreas Aigner, head of service and security management at the Linde Group, explained:

“We have a lot of examples in the past where we were proud of having 3,000+ test cases that ran continuously without uncovering any defects. I said, ‘Is that successful? Does that make sense? Don’t you think you have burned resources?’ At the end of the day, you have to search for high-value test automation, and you have to focus on the business risks.”

Martin Zurl of SPAR ICS added:

“We rely on risk-based testing to prioritize our test cases. We need to understand the way our customers are thinking and test the most important features—not every feature—because we need to speed up our automation. We need to give developers feedback extremely fast, so we focus on the main paths that our customers follow.”
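One common way to operationalize risk-based prioritization is to score each test case on business impact and likelihood of failure, then run the highest-risk cases first. The scoring scheme and test names below are illustrative assumptions, not SPAR ICS’s actual model.

```python
# Illustrative risk-based test prioritization: each case gets a
# business-impact score and a failure-likelihood score, and the suite
# runs the riskiest cases first to give developers fast feedback on
# the main paths customers follow. All data here is hypothetical.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    business_impact: int      # 1 (niche) .. 5 (critical customer path)
    failure_likelihood: int   # 1 (stable) .. 5 (frequently changed)

    @property
    def risk(self) -> int:
        return self.business_impact * self.failure_likelihood

suite = [
    TestCase("checkout_happy_path", business_impact=5, failure_likelihood=4),
    TestCase("rarely_used_report",  business_impact=1, failure_likelihood=2),
    TestCase("login",               business_impact=5, failure_likelihood=2),
    TestCase("legacy_export",       business_impact=2, failure_likelihood=1),
]

# Run the riskiest tests first; defer the long tail to a nightly run.
prioritized = sorted(suite, key=lambda t: t.risk, reverse=True)
fast_feedback_set = [t.name for t in prioritized[:2]]
print(fast_feedback_set)  # ['checkout_happy_path', 'login']
```

The point is not the arithmetic but the discipline: every test earns its place in the fast-feedback loop by its risk score, rather than by having been written.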

Democratize test automation
Test automation is just one of the many elements required for continuous testing, but you simply can’t do continuous testing without high levels of test automation. QA leaders across organizations agreed that making test automation accessible and enabling business experts to control their own automation is key for jumpstarting and scaling test automation.

Amber Woods, VP of IT enterprise applications and platforms at Tyson Foods, introduced the concept of democratizing test automation:

“Other scripting tools for test automation were not well adopted because they didn’t really get traction within each of the teams. We’ve had success democratizing citizen data scientists and citizen integrators with applications like SnapLogic. Now we’re taking that same approach to test automation, using model-based test automation. This allows our business analysts to start test automation in an easy, fast way that will get us away from what we had before, which was a lot of scripting. Our goal is to get heavy, heavy adoption in the test automation space.

Say you’ve got Team A over here, and Team B over there. Team B’s leaving at a decent hour of the night, and Team A is working all night. Team A asks, ‘Why are you leaving so early? Don’t you have more testing to do?’ Team B responds, ‘Well, we’ve got all our testing automated. I’m going to push a button and I’m going to go home for the night.’ That gets teams to adopt test automation.”

Likewise, Ann Lewis, quality manager at ExxonMobil, spoke to the power of enabling more team members to “control their own automation”:

“What warmed my heart is that about six months after we really started getting into test automation, one of the business COE managers called me up and said ‘Wow, where did this come from? I want to put it in the hands of all of my business process experts. For the first time, we can control our own test automation. Test automation helps us ensure that, over and over again, business critical functionality works after each application change.’ That actually started a competition amongst different business units—everybody wanted to get on that bandwagon.”

API testing is a faster, more stable way to test ~80 percent of your functionality
Sreeja Nair, product line manager at EdgeVerve, explained why their journey to continuous testing included API testing as well as test automation:

“UI testing is slow—for example, it can take 3 minutes to automate an end-to-end banking flow at the UI level. And if the UI is not ready or it is down, you can’t test at all. Is that a good way to test? Obviously not. We found that the best way to address our problem is to attack the layer below the UI presentation layer: the business layer. We realized we could cover 80% of our functionality if we test at the business layer through APIs. We decided to change our tests from UI-oriented design to API-based design.

After we first define our test model, we find out which APIs need to be called and then chain the APIs together according to the component model we have designed. Testing a single API is not API testing. If you have a business scenario to test, you need to integrate your APIs to create realistic service-level integration tests.”

In-sprint testing can’t focus (exclusively) on new tests
Aaron Carmack, automation architect and product owner at Worldpay, explained that one of their keys to advancing from “test automation zero to continuous testing hero” was recognizing that updating test cases as your application evolves is just as important as adding new ones:

“Our QA teams sit down with the dev team and the product owners as user stories are created to learn what these stories involve and what test cases will need to be updated. Once the sprint begins, we start updating those test cases, creating the new critical scenarios that we need, and updating the existing tests that we believe will be impacted by the new user stories. We’re updating tests, creating new tests, and then executing tests based on the new user stories—all within the sprint. Also, when we execute the full regression suite, we identify the failures and commit to addressing them within the sprint. That way, false positives don’t undermine our CI/CD process.”