Keith Klain kicked off STAREAST 2016 last week, and there was one line in his keynote that stuck with me throughout the entire conference: “If you can’t draw a straight line between your business objectives and your test approach, you’re doing it wrong.”
As I started to think of all the little activities in my workday that do nothing to increase the happiness of Skytap's customers or advance our business objectives, Klain's statement sent me into a bit of a panic.
I quickly tried to justify some of these non-contributive parts of my day as simply the way things are, and therefore unavoidable. I even caught myself saying, "Well, nobody has complained, or asked you to do that faster." But then I felt worse. Is that how we make decisions about the work we do? Wait until someone is unhappy with our work before we try to deliver a product faster and/or of higher quality? Of course not.
It’s so easy to get stuck in the “But that’s the way we’ve always done it” mindset. But in software delivery, this way of thinking is the epitome of “You’re doing it wrong” and is a recipe for disaster.
I would argue that one of the most glaring examples of “doing it wrong” is investing in modernization initiatives, methodologies and tools, and then trying to validate and prove the value of those investments with metrics that are anything but modernized, and are downright archaic.
What are outdated metrics? As Klain put it, any numbers you can’t connect with a straight line to meeting (and ideally exceeding) your business objectives.
Annette Ash also spoke at STAREAST, echoing Klain’s point. Her quality metrics session showed how bug total-related metrics and bug bounty programs, while perhaps providing some value, have an undeniable history of causing rifts between testers and developers that do more harm than good. And as organizations continue to increase their investments in agile and DevOps—and the cultures they require for successful scaling—a rift between any department quickly becomes a bottleneck that grinds both speed and quality to a halt.
Quality metrics based solely on finding more bugs may appear to benefit testers and the managers they report to, but if you can't connect those findings to an increase in customers and/or the satisfaction of your existing customers, how valuable can they really be? "We found more bugs" says nothing about reducing the time it will take to eliminate them, achieving shorter release cycles, or meeting business demands, all of which testing is more than capable of delivering to the organization.
“As an industry, we haven’t done a lot to increase our value prop,” said Klain. “Instead of learning the language of the business, we’ve created our own language full of jargon that nobody understands—not even ourselves.”
This is not a death sentence for software testers; in fact, it’s far from it. It’s simply time for testers to stop measuring—and for management to stop requesting—what doesn’t matter to the rest of the business and the customers you serve.
And this is being felt in other departments that have been just as siloed and time-boxed as testing has been throughout the years. I recently wrote about how the training industry is going through this same transformation. Training is another department that has been marginalized, not because the work it provides isn't valuable, but because the numbers it holds itself accountable to have little to do with the success of the business it supports.
So, why are “find more bugs,” and in training’s case “teach more classes,” still being used as key benchmarks for success? The easy answer (though not a good one) is that people are reluctant to change, and they always will be. I believe that the testing and training communities have a tremendous opportunity to change how they’re viewed and valued by embracing modernization initiatives designed to provide much more meaningful metrics and faster feedback.
While there is no silver bullet for which modernization effort(s) are best, those that focus on automating the steps in the SDLC that offer no benefit when performed manually are where organizations should look first. These are efforts around implementing Infrastructure-as-Code solutions, like automating the provisioning of dev/test environments, so that teams aren’t losing valuable testing time while waiting on IT. Other smart investments include increasing efforts around Continuous Integration, version control, and parallel testing so that testing can be performed far earlier in the SDLC and consistently throughout it.
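To make the "parallel testing shortens the cycle" point concrete, here is a minimal Python sketch. The suite names and timings are illustrative stand-ins, not anything presented at the conference: each simulated suite sleeps to mimic I/O-bound test time, and running them concurrently finishes in roughly the time of the slowest suite rather than the sum of all of them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for independent test suites; each sleeps to
# simulate I/O-bound runtime (e.g., waiting on a provisioned environment).
def run_suite(name: str) -> str:
    time.sleep(0.2)  # simulated suite runtime
    return f"{name}: passed"

suites = ["api", "ui", "integration", "smoke"]

start = time.perf_counter()
# Run all four suites concurrently instead of one after another.
with ThreadPoolExecutor(max_workers=len(suites)) as pool:
    results = list(pool.map(run_suite, suites))
elapsed = time.perf_counter() - start

print(results)
# Four 0.2-second suites finish in roughly 0.2s wall-clock, not 0.8s.
print(f"wall-clock: {elapsed:.2f}s")
```

The same idea scales up with real runners (for example, distributing a test suite across workers or machines): the prerequisite is that the suites are independent, which is exactly what version control discipline and on-demand environments make possible.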
There has been so much debate around automation, and conferences like STAREAST prove that many are still struggling to know exactly where automation makes sense in their organization (and to understand that automation does not replace good testers).
Keith Klain has a long history as the global head of QA at a number of the world’s largest financial institutions, as well as years as a consultant to organizations of similar size. And in a somewhat frightening revelation during his keynote, he shared an observation that he’s honestly had to make too many times. He remarked that many companies “are really horrible at defining their objectives for why they’re testing,” and that “People are still not aligning their test approach with their business.”
My last session of the week was given by Lee Barnes, and was brilliantly titled, “Don’t Be Another Statistic! Develop a Long-Term Test Automation Strategy.” Barnes gave his three requirements for what a successful long-term test automation strategy must deliver, and noted that too many people incorrectly assume—you guessed it—that “finding more defects” is one of them. Barnes stated that your end goal for automation should be:
- Increase test coverage
- Decrease test cycle time
- Increase resource value
These are the types of meaningful metrics that testing has the opportunity to be proud of, and to deliver to the rest of the organization. Increasing test coverage will often come from having more time for manual, exploratory testing. Decreasing test cycle time proves that testing is anything but a bottleneck. And increasing resource value can mean everything from an increase in efficiency and job satisfaction to a decrease in burnout and underutilized talent.
My hope, largely because of the respect I have for the testing community, is that testers themselves lead the charge of becoming better aligned with business objectives, and are then able to deliver modernized, meaningful metrics to the world around them.