Test managers are under pressure to test faster and deliver software with fewer defects. Improving these two factors, velocity and defect detection effectiveness (DDE), requires a balanced mix of people, processes, and tools. With the right orchestration, any team can increase its testing delivery speed and trap more defects before the final software release.

A multitude of supporting key performance indicators (KPIs) will help you achieve velocity and DDE. By meeting and exceeding the KPIs listed below, you'll inch your QA organization toward greater efficiency. Hiring more QA testers won't solve your problems. Oftentimes, even automation is not the silver bullet, because it can introduce unnecessary overhead and maintenance, along with long-term costs. The answer you're looking for is in the data.

Nailing down your philosophy on QA scorecards and KPI monitoring is the key to unlocking the full potential of your QA organization. Here are 12 KPIs to track:

  1. Active defects: Tracking active defects is a simple KPI that you should be monitoring regardless of your testing maturity, and lower values are better. Every IT project comes with its fair share of defects; depending on the magnitude and complexity of the project, there may be 250 or more defects active at any given time. "Active" for this KPI means the status is new, open, or fixed (and awaiting re-test): if the defect is being worked, it's active. Set the threshold based on historical data from your IT projects. Whether that's 100 defects, 50, or 25, the threshold defines what counts as acceptable; anything above it is "not OK" and should be flagged for immediate action (see the threshold sketch after this list).
  2. Authored tests: This KPI is important for test managers because it helps them monitor the test design activity of business analysts and testing engineers. As new requirements are written, develop the associated system tests and decide whether those test cases should be flagged for the regression test suite. Will the test your engineer is writing cover a critical piece of functionality in your application under test (AUT)? If yes, flag it for your regression suite and slot it for automation. If not, add it to the bucket of manual tests that can be executed ad hoc when necessary. Track authored tests in relation to the number of requirements for a given IT project. In other words, if you subscribe to the philosophy that every requirement should have test coverage (i.e., an associated test), set the threshold for this KPI to equal the number of requirements or user stories outlined for a sprint: one test case for every requirement in "Ready" status.
  3. Automated tests: This KPI is challenging to track. Generally speaking, the more automated tests you have in place, the more likely you are to trap critical defects introduced into your software delivery stream. Start small, with a threshold of around 20 percent of test cases automated, and adjust upward as your QA team evolves and matures.
  4. Covered requirements: Track the percentage of requirements covered by at least one test; 100 percent coverage should be the goal. The validity of a requirement hinges on whether a test exists to prove it works, and the same holds for a test in your test plan: its validity hinges on whether it was designed to exercise a requirement. If a test doesn't trace back to a requirement, why do you need it? Monitor this KPI every day and question the value of orphaned requirements and orphaned tests. Then find the orphans a home: trace orphaned tests to a specific requirement, and write tests for requirements that lack coverage (the traceability sketch after this list shows one way to surface both).
  5. Defects fixed per day: Don't lose sight of how efficiently your development counterparts are rectifying the defects brought to their attention. The defects fixed per day KPI ensures that the development team is hitting the standard for turning around fixes and keeping the build moving forward (see the velocity sketch after this list).
  6. Passed requirements: Measuring passed requirements is an effective way of taking the pulse of a given testing cycle. It is also a good measure to consider during a Go/No-Go meeting for a large release.
  7. Passed tests: Sometimes you need to look beyond the requirements level and peer into the execution of every test configuration within a test. A test configuration is essentially a permutation of a test case that inputs different data values. The passed tests KPI complements your passed requirements KPI and helps you understand how effective your test configurations are at trapping defects. This KPI can quickly fool you into thinking you have a quality build on your hands if you don't have a good handle on the test design process: low-quality test cases often yield passing results when in fact there are still issues with the build. Ensure that your team is diligent in exercising different branches of logic when designing test cases, and this KPI will be of more value.
  8. Rejected defects: The rejected defects KPI is known for its ability to identify training opportunities for software testing engineers. If your development team is rejecting a high number of defects with comments like "works as designed," take your team through the design documentation of the application under test. No more than five percent of submitted defects should be rejected.
  9. Reviewed requirements: The reviewed requirements KPI is more of a "prevention KPI" than a "detection KPI." It focuses on identifying which requirements (or user stories) have been reviewed for ambiguity. Ambiguous requirements lead to bad design decisions and, ultimately, wasted resources. Monitor whether each requirement has been reviewed by a subject matter expert who truly understands the business process the technology supports.
  10. Severe defects: This is a great KPI to monitor, but make certain that your team employs checks and balances when setting the severity of a defect. Once those checks and balances are in place, set a threshold for this KPI: if a defect's severity is Urgent or Very High, count it against the KPI, and if the total count exceeds 10, throw a red flag.
  11. Test instances executed: This KPI relates only to the velocity of your test execution plan. It doesn't provide insight into the quality of your build; instead, it sheds light on what percentage of the total instances in a test set have been executed. Monitor this KPI alongside a test execution burndown chart to gauge whether additional testers may be required for projects with a large manual testing focus.
  12. Tests executed: Like test instances executed, this is primarily a velocity KPI, so it shouldn't be your only tool for monitoring a given sprint or test execution cycle. Pay close attention to the KPIs described above, some of which monitor "preventative measures" that you can compare against these "detection measures."
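
To make thresholds like the ones above actionable, many teams pull daily counts from their defect tracker and flag breaches automatically. Here is a minimal sketch in Python, assuming a flat export of defect records; the field names (`status`, `severity`) and the threshold values are illustrative placeholders, to be replaced with your own tracker's fields and your own historical baselines.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    status: str    # e.g., "new", "open", "fixed", "closed", "rejected"
    severity: str  # e.g., "urgent", "very high", "high", "medium", "low"

ACTIVE_STATUSES = {"new", "open", "fixed"}  # "fixed" = awaiting re-test
ACTIVE_THRESHOLD = 50     # KPI 1: set this from your historical project data
SEVERE_THRESHOLD = 10     # KPI 10: red flag when exceeded
REJECTED_RATE_MAX = 0.05  # KPI 8: no more than 5% of defects rejected

def kpi_flags(defects: list[Defect]) -> dict[str, bool]:
    """Return True per KPI when the metric is within its threshold."""
    active = [d for d in defects if d.status in ACTIVE_STATUSES]
    severe = [d for d in active if d.severity in {"urgent", "very high"}]
    rejected = [d for d in defects if d.status == "rejected"]
    return {
        "active_defects_ok": len(active) <= ACTIVE_THRESHOLD,
        "severe_defects_ok": len(severe) <= SEVERE_THRESHOLD,
        "rejected_rate_ok": len(rejected) / max(len(defects), 1) <= REJECTED_RATE_MAX,
    }
```

Any False in the result is the "not OK" signal described under active defects: flag it for immediate action.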
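
Covered requirements and authored tests are both traceability questions, so they can be answered from the same data. A minimal sketch, assuming each test record carries the ID of the requirement it traces to, with `None` marking an orphaned test (the trace field is a stand-in for whatever your test management tool exports):

```python
def coverage_report(requirement_ids: set[str],
                    test_traces: list[str | None]) -> None:
    """Covered requirements (KPI 4) plus both kinds of orphans."""
    covered = {r for r in test_traces if r in requirement_ids}
    uncovered = requirement_ids - covered
    orphaned_tests = sum(1 for r in test_traces if r not in requirement_ids)
    pct = 100 * len(covered) / max(len(requirement_ids), 1)
    print(f"Covered requirements: {pct:.0f}% (goal: 100%)")
    print(f"Orphaned requirements (no test): {sorted(uncovered)}")
    print(f"Orphaned tests (no requirement): {orphaned_tests}")

# Example: three requirements, two tests tracing to REQ-1, one orphaned test
coverage_report({"REQ-1", "REQ-2", "REQ-3"}, ["REQ-1", "REQ-1", None])
# -> 33% covered; REQ-2 and REQ-3 lack tests; 1 test has no requirement
```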
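
Finally, the velocity KPIs (defects fixed per day, test instances executed, and tests executed) are trend lines rather than snapshots. A sketch of the daily rollups, assuming you log a date with each defect fix and each executed test instance:

```python
from collections import Counter
from datetime import date

def fixes_per_day(fix_dates: list[date]) -> dict[date, int]:
    """Defects fixed per day (KPI 5): compare each day against your standard."""
    return dict(sorted(Counter(fix_dates).items()))

def burndown(total_instances: int,
             executed_per_day: dict[date, int]) -> dict[date, int]:
    """Test instances remaining after each day (KPIs 11 and 12).

    If the remaining count flattens while the deadline approaches, that is
    the signal that additional testers may be required.
    """
    remaining, trend = total_instances, {}
    for day in sorted(executed_per_day):
        remaining -= executed_per_day[day]
        trend[day] = remaining
    return trend
```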