Every piece of software ships with some defects, but major gaps in your testing coverage can be disastrous, and they can occur for a variety of reasons. Part of the problem with a typical root-cause analysis is that it comes after the fact: perhaps a customer has complained, there’s been a big outage, or the software quality simply isn’t where it should be. Reactive firefighting like this isn’t the most effective use of your resources.
What if you could identify those gaps earlier in the testing process and dynamically shift strategy to close them before they cause major issues?
The best possible start
A lack of domain knowledge and an incomplete understanding of a piece of software’s purpose and intent are enemy No. 1. There are two ways to mitigate this risk and get off to a good start:
Automated testing: Identify critical core functions and automate mundane repetitive tests in order to free up your testers to leverage their experience and domain knowledge to break the system.
Exploratory testing: Context-driven testing encourages critical thinking and collaboration. Testers have to understand what the application is trying to do, learn about the intent, and direct their efforts at the most important areas.
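To make the first of these concrete, here is a minimal sketch of what automating a mundane, repetitive check might look like. The validate_username function and its rules are invented for illustration; the point is that a table-driven regression like this runs on every build, freeing testers to spend their domain knowledge on exploratory work instead.

```python
# Hypothetical example: automating a repetitive input-validation check.
# The function and its rules are illustrative, not from any real system.
import re

def validate_username(name: str) -> bool:
    """Accept 3-16 chars: letters, digits, underscores; must start with a letter."""
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z0-9_]{2,15}", name))

# A table of mundane cases that would be tedious to re-check by hand each release.
CASES = [
    ("alice",     True),
    ("ab",        False),  # too short
    ("9lives",    False),  # starts with a digit
    ("a" * 16,    True),   # boundary: exactly 16 chars
    ("a" * 17,    False),  # boundary: one over the limit
    ("bob_smith", True),
    ("bob smith", False),  # spaces not allowed
]

def run_regression():
    """Return the cases whose actual result disagrees with the expected one."""
    return [(name, expected) for name, expected in CASES
            if validate_username(name) != expected]

if __name__ == "__main__":
    print(run_regression())  # an empty list means the regression suite passed
```

The same table-driven shape scales to hundreds of cases, and adding a newly discovered edge case is a one-line change.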
Making sure you build a solid foundation, understand the software, and define your scope at the outset can take time, but it will be time well spent.
Communication and assessment
The traditional approach to gap analysis is to start with a problem and trace it back. In theory, you learn from your mistakes and avoid repeating them, but it doesn’t always work that way. It’s also far more time-consuming and expensive to identify and fix problems once the software has been released. There’s no reason you can’t apply the same root-cause logic earlier in the process.
(Related: Testing needs to catch up to agile)
The earlier testers are involved, the better. There should be wide-ranging discussions during the planning phase that include the development team, the business team, and the test team. This is an excellent opportunity to ensure that everyone understands the workflow of the new software, what it’s supposed to deliver for the end user, and where the priorities lie for validation and testing.
By including testers from day one, you reduce the chance of misinterpretation and give yourself an opportunity to spot potential gaps or misunderstandings. But communication and assessment have to be continuous throughout the project; the second they slip, you run the risk of a gap opening up. You need to make sure that communication, collaboration, and mitigation are an integral part of your process for each new sprint or cycle.
Developing process oversight
To keep your coverage tight and flag possible gaps immediately, you need a solid test data management strategy in place from the beginning, supported by analytical tools such as JIRA, QMap, or VersionOne. Regardless of the tool, what you want is a complete record of all your manual testing activities. The ability to generate reports or visualizations of your coverage, and to analyze it from different business perspectives, is absolutely vital.
Where are most of the defects coming from? How did you find them? How critical are they? Where are your efforts being concentrated? What topics are trending in testers’ comments? Are you seeing the same questions or recommendations popping up over and over again?
There’s a good chance that you’re collecting this data anyway, but keeping an eye on all of it in real time should enable you to strategize effectively about where to focus your resources. The best approach is never carved in stone; it should continually evolve throughout the development life cycle. Rigorous and regular analysis presents an opportunity to identify gaps as soon as they begin to form, instead of waiting until they cause a problem and then tracing it back. It’s about taking a proactive approach instead of a reactive one.
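One concrete gap signal hiding in that data is the "escape rate": how often defects in a given component are found by customers rather than by the test team. The sketch below is a hypothetical illustration; the record format is invented, and in practice you would export this data from whatever tracking tool you already use.

```python
# Hypothetical sketch: mining in-flight defect data for early gap signals.
# The records and threshold are invented for illustration.
from collections import Counter

# Each record: (component, severity, how_found)
defects = [
    ("checkout", "critical", "exploratory"),
    ("checkout", "major",    "exploratory"),
    ("checkout", "major",    "customer"),
    ("search",   "minor",    "automated"),
    ("profile",  "major",    "customer"),
]

def gap_signals(records, escape_threshold=0.3):
    """Flag components where too large a share of defects escaped to
    customers instead of being caught by testing -- a coverage-gap signal."""
    total = Counter(component for component, _, _ in records)
    escaped = Counter(component for component, _, found in records
                      if found == "customer")
    return sorted(c for c in total if escaped[c] / total[c] > escape_threshold)

print(gap_signals(defects))  # -> ['checkout', 'profile']
```

Run regularly against live data, a report like this points resources at forming gaps while the sprint is still in progress, rather than after a customer complaint.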