I was recently hired to do an in-depth analysis of the software testing tool marketplace. As an aside, there are more tools in the software testing space than in a do-it-yourself home improvement warehouse. Given this opportunity to survey a broad set of software testing tool vendors, it was interesting to look at the promises they make to the market. These promises fall into four general categories:
- We provide better quality
- We have AI and we are smarter than you
- We allow you to do things faster
- We are open-source – give it a go
What struck me most was the very large swath of software testing tool vendors selling the idea of delivering or providing “quality.” To put this into a pointed analogy: claiming that a testing tool provides quality is like claiming that a COVID test prevents you from being infected. The fact is, when a testing tool finds a defect, “quality” has already been compromised, just as when you receive a positive COVID test, you are already infected.
Let’s get the next argument out of the way. Yes, testing is critical in the quality process; however, the tool that detects the defect DOES NOT deliver quality. Back to the COVID analogy: the act of wearing masks and limiting your exposure to the public is what prevents the spread of infection. A COVID test can help you make a downstream decision to quarantine in order to stop the spread, or an upstream decision to be more vigilant about wearing a mask or limiting your exposure to high-risk environments. I’m going to drop the COVID example at this point out of sheer exhaustion on the topic.
But let’s continue the analogy with weight loss, a very popular topic as we approach the holidays. Software testing is like a scale: it can give you an assessment of your weight. Software delivery is like the pair of pants you want to wear over the holidays. Weighing yourself is a pretty good indicator of your chances of fitting into that pair of pants at a particular point in time.
Using the body weight analogy is interesting because a single scale might not give you all the information you need, and you might have the option to wear a different pair of pants. Let me unpack this a bit.
We cannot rely on a single measurement, or a single instance of that measurement, to assess the quality of an application. In fact, it requires the confluence of many measurements, both quantitative and qualitative, to assess the quality of software at any particular point in time. At a very high level, there are really only three types of software defects:
- Bad Code
  - The code is poorly written
  - The code does not implement the user story as defined
- Bad User Story
  - The user story is wrong or poorly defined
- Missing User Story
  - There is missing functionality that is critical for the release
Using this high-level framework, radically different testing approaches are required. If we want to assess bad code, we rely on development testing techniques like static code analysis to measure the quality of the code itself. We use unit testing, or perhaps test-driven development (TDD), as a preliminary measurement to understand whether the code is aligned to a critical function or component of the user story. If we want to assess a bad user story, this is where BDD, manual testing, functional testing (UI and API), and non-functional testing take over to assess whether the user story is adequately delivered in the code. And finally, if we want to understand whether there is a missing user story, this is usually an outcome of exploratory testing, when you get that ‘a-ha’ moment that something critical is missing.
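To make the “bad code” category concrete, here is a minimal unit-test sketch in the TDD spirit. The user story (“orders over $100 receive a 10% discount”) and all function names are hypothetical, invented purely for illustration; the point is that a test like this can only tell you whether the code matches the story as written — it cannot tell you if the story itself is wrong or missing.

```python
# Hypothetical user story: "Orders over $100 receive a 10% discount."
# These tests check that the code implements the story as defined ("bad code").
# They say nothing about whether the story is correct ("bad user story")
# or whether a critical story is absent altogether ("missing user story").

def apply_discount(order_total: float) -> float:
    """Return the amount due after any applicable discount."""
    if order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total

def test_discount_applied_above_threshold():
    assert apply_discount(200.0) == 180.0

def test_no_discount_at_or_below_threshold():
    assert apply_discount(100.0) == 100.0
    assert apply_discount(50.0) == 50.0

if __name__ == "__main__":
    test_discount_applied_above_threshold()
    test_no_discount_at_or_below_threshold()
    print("all checks passed")
```

A passing test here is one measurement among many — it weighs one feature at one point in time, exactly like stepping on the scale.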
Let’s refresh the analogy quickly. The scale is like a software testing tool, and we want to weigh ourselves to make sure we can fit into our pants, which is our release objective. The critical concept here is that not all pants are designed to fit the same, and the same is true for software releases. Let’s face it: our software does not have to be perfect, and, to be blunt, “perfection” comes at a cost far beyond an organization’s resources. Therefore, we have to understand that some pants are tight, with more restrictions, and some pants are loose, which gives you more comfort. So, you might have a skinny jeans release or a sweatpants release.
Our challenge in the software development and delivery industry is that we don’t differentiate between skinny jeans and sweatpants. This leads us to a test-everything approach, which is a distinct burden on both speed and cost. The alternative, the “test what we can” approach, is also suboptimal.
So, what’s the conclusion? I think we need to worry about fitting into our pants at a particular point in time. Enough information already exists throughout the software development life cycle and in production to guide us in creating and executing the optimal set of tests. The next evolution of software testing will not solely be AI. The next evolution will be using the data that already exists to optimize both what to test and how to test it. In other terms, we will understand the constraints associated with each pair of pants, and we will use our scale effectively to make sure we fit into them in time for the holiday get-together of fewer than 10 close family members.