Techniques such as pair programming and peer code reviews, among others, can improve the quality of software, but defects will always slip through the cracks. The aim of QA is to help your development team move faster while delivering accurate, actionable test results, ultimately shipping a higher-quality product more quickly.
This is a guide to what matters in a QA process and what doesn’t.
- Build testing in chunks
Make your tests as composable as possible by working out the commonalities in your suite. Most of the time, the common parts will be built upon to get your application into the right state. Avoid copy-pasting test steps; aim for a modular system. This applies to both automated and manual test suites. If using automation, keep a common library of reusable steps for getting into the correct state. If using manual testing, you will need to find a test-case management system that supports composable tests.
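As a sketch of what composable states can look like in an automated suite (the `Session` class and the step names here are hypothetical stand-ins for a real browser or API driver):

```python
# Hypothetical sketch: composable "get into state" helpers for a test suite.
# Each helper builds on the previous one, so test steps are never copy-pasted.

class Session:
    """Stand-in for a real browser or API session."""
    def __init__(self):
        self.steps = []

    def do(self, step):
        self.steps.append(step)

def fresh_session():
    return Session()

def logged_in_session():
    # One shared definition of "how to log in", reused by every test that needs it.
    s = fresh_session()
    s.do("login")
    return s

def checkout_session():
    # States compose: checkout builds on logged-in, not on copy-pasted steps.
    s = logged_in_session()
    s.do("open_checkout")
    return s

def test_checkout_reached():
    s = checkout_session()
    assert s.steps == ["login", "open_checkout"]
```

In a pytest-based suite, the same idea is usually expressed as fixtures depending on other fixtures; the principle of one shared definition per state is the same.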
If starting from scratch with a new process, start with small manageable chunks. Build your test suite in the following order for greatest effect:
- Smoke tests, covering the top three to five flows through your application. For an e-commerce site these might be sign-up, login, and checkout. Choose whatever would cause a newspaper headline or an email to the CEO if it broke.
- Happy path tests. These are similar, but less important. Generally they are the most common paths through your application.
- Bugs people have reported before should never happen twice. Add a test to your suite — whether a unit test or not — to make sure it doesn’t.
- Edge cases don’t fit the above categories and generally aren’t in the common user flow. They have less value, so leave them until last. Be wary: there is always a large, near-infinite number of edge cases and depth you can go to. Note: things that get reported are bugs, and fall into the category above if accepted. Edge-case testing looks for and protects against possible bugs.
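The tiered ordering above can be made explicit in code. Here is a minimal sketch, assuming a hand-rolled registry; the tier names and the example tests are hypothetical:

```python
# Hypothetical sketch: building a suite tier by tier, so smoke tests run first
# and edge cases last, matching the priority order described above.
SUITE = {"smoke": [], "happy_path": [], "regression": [], "edge": []}

def tier(name):
    """Register a test under one of the tiers."""
    def register(fn):
        SUITE[name].append(fn)
        return fn
    return register

@tier("smoke")
def test_checkout():
    assert 2 + 2 == 4  # placeholder for a real checkout flow

@tier("regression")
def test_price_rounding_bug():
    # A previously reported bug, pinned so it can never happen twice.
    assert round(19.999, 2) == 20.0

def run(tiers=("smoke", "happy_path", "regression", "edge")):
    # Run in priority order: smoke first, edge cases last.
    for t in tiers:
        for test in SUITE[t]:
            test()

run()
```

Most test runners support the same idea natively via tags or markers (e.g. pytest markers), which is usually preferable to a home-rolled registry.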
- Balance your approach to what to cover and test
Good processes don’t test everything, they take a balanced approach. This is often counterintuitive.
It is very common to want to test every feature, every combination of browsers, and sometimes even every variant of a page. Once you have a large application, this is neither practical nor necessary.
If you do not have a fixed list of supported devices, common tooling such as Google Analytics can help. From it, you should end up with a list of the devices and browsers your customers use most. Focus on the top 95% of these, or 99% if you have a larger budget. An official policy on which browsers you support can also help you here.
Make sure you revisit this at least once per quarter, as the needed coverage may change.
Tip: Weigh the priority of your bugs by browser popularity. Bugs for more popular devices should be fixed first.
Which areas of your product should you test? The two major ways of focusing on this are by looking at usage data or bugs.
Using tooling such as Amplitude, Mixpanel or similar, work out which areas of your product are most active. These are often managed by a product team, who tracks feature usage. By looking at the most common paths, you may focus your testing efforts there first. If your tooling supports it, look at common flows through your application.
Tip: It’s uncommon, but good practice, to run these tools in QA as well. You can use this to check that your tests actually cover what’s expected.
Developers often use error reporting software. This is a great source to use to focus testing efforts inside QA. Tracking defects by area of the product, developer, team and source of spec is a good start. This will often lead to patterns emerging. Any patterns found can guide process improvements and retrospectives with developers.
- Measure to improve
If you don’t measure results, you won’t be able to show improvement. Measure both the fruits of your work and the current state of things. QA’s main aim is to help the organization ship a higher-quality product.
Primary measurements for QA:
Firstly, the number of bugs reported by, or affecting, customers. This is the most direct and easiest to manage. Log every reported issue by date and, if you can, by product area, developer, and team. Each week, summarize this, look for patterns, and report back to the team at the root of them.
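A weekly roll-up like this is straightforward to automate. A minimal sketch, with made-up records standing in for a real bug-tracker export:

```python
# Hypothetical sketch: a weekly roll-up of reported bugs by product area.
# The records are made up; real data would come from your bug tracker.
from collections import Counter
from datetime import date

bugs = [
    {"reported": date(2024, 3, 4), "area": "checkout", "team": "payments"},
    {"reported": date(2024, 3, 5), "area": "checkout", "team": "payments"},
    {"reported": date(2024, 3, 6), "area": "search", "team": "discovery"},
]

def weekly_summary(bugs):
    """Count bugs per (ISO week, area) so clusters stand out."""
    counts = Counter()
    for bug in bugs:
        week = bug["reported"].isocalendar()[1]
        counts[(week, bug["area"])] += 1
    return counts

summary = weekly_summary(bugs)
```

Grouping by (week, area) makes spikes visible at a glance; swapping `"area"` for `"team"` gives the per-team view mentioned above.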
You should track time-to-fix: how long it takes from something breaking to it being fixed. Measuring this shows how well a development team can use the output from QA to triage and fix a bug. The simplest way to measure it is the time between a failed build and the next passing build. Be careful to exclude flaky tests from this metric.
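Measured this way, time-to-fix falls out of the build history directly. A minimal sketch, with made-up build records standing in for real CI data:

```python
# Hypothetical sketch: time-to-fix as the gap between a failed build and the
# next passing build. The records are made up; a real version would read CI data.
from datetime import datetime

builds = [
    ("2024-03-04T10:00", "pass"),
    ("2024-03-04T12:00", "fail"),   # breakage starts here
    ("2024-03-04T15:30", "pass"),   # fixed 3.5 hours later
]

def time_to_fix(builds):
    """Yield hours between each first failed build and the next passing build."""
    broken_since = None
    for stamp, status in builds:
        t = datetime.fromisoformat(stamp)
        if status == "fail" and broken_since is None:
            broken_since = t
        elif status == "pass" and broken_since is not None:
            yield (t - broken_since).total_seconds() / 3600
            broken_since = None

print(list(time_to_fix(builds)))  # -> [3.5]
```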
If you wish to go further, split the tracking out by source of the issue. Examples are: external (i.e. customer), internal (i.e. missed by QA), automatic (e.g. error reporting), or test-case failures.
Secondary measurements for QA:
The number of tests added to your suite, versus the number of regressions that failed in your suite.
Flakiness. Especially when using automation, you should track tests that pass and fail intermittently. This is an indicator of poor test quality, a poor choice of system, or poor execution. Execution problems can be the result of human QA testers or test-environment failures. Monitoring when this happens will expose patterns, allowing you to fix the root cause. Avoid being only reactive here.
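The simplest working definition of flaky is "both passed and failed over recent runs", which distinguishes it from a consistently broken test. A minimal sketch, with a made-up run history:

```python
# Hypothetical sketch: flagging tests that both pass and fail across recent runs.
# The history is made up; in practice it would come from your CI system.
history = {
    "test_login":    ["pass", "pass", "pass", "pass"],
    "test_checkout": ["pass", "fail", "pass", "fail"],  # intermittent => flaky
    "test_search":   ["fail", "fail", "fail", "fail"],  # consistently broken, not flaky
}

def flaky_tests(history):
    """A test is flaky if its recent runs contain both passes and failures."""
    return sorted(
        name for name, runs in history.items()
        if "pass" in runs and "fail" in runs
    )

print(flaky_tests(history))  # -> ['test_checkout']
```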
NPS is a great end-measurement for your entire product, but it’s a trailing indicator. It also conflates many things, such as product improvements and CSM effectiveness. Still, improvements here can be used as indicators of product quality.
Test coverage, whilst prevalent, is dangerous if misused or misunderstood. It is not a measure of the quality or thoroughness of your tests. It should only be used to find areas that are completely untested.
Note: There are no good leading indicators for production quality – only trailing indicators.
- Shift left/tighten the feedback loop
Traditionally, QA gets involved late in the software development life-cycle. The earlier you can move it, the easier most issues are to fix. Why? Slower feedback cycles create distance from the problem: developers forget details and shift to other tasks, losing context, and the code can change from under them.
Testing earlier surfaces errors quicker, tightening the feedback loop between QA and development.
Shortening the entire development cycle brings other benefits too. A major one is reducing the risk of shipping the wrong thing, as you get feedback from users quicker.
Some reasons to avoid shorter release cycles include:
- Regulatory issues around changes
- Shipping embedded or on-premises software
- Having customers that are highly averse to change. A common reason is large numbers of users requiring retraining.
Some common ways of moving QA earlier in the release cycle are:
Pull-request-based development combined with automatic environment creation. Every pull request automatically gets its own environment. This can be home-rolled or provided by an external service. It gives human-powered QA, or automated integration tests, access to changes earlier in the SDLC. Setting this up can be a pain, as following good DevOps practices is a prerequisite.
Example SaaS services for this:
Generally, non-developer human-powered QA isn’t practical before pushing to a pull request, due to the speed and cost of the resources needed: developers are fast and expensive, while manual QA can be slow. Automation helps solve this because, used correctly, it can be executed on a developer’s machine. This lets developers get feedback as early as possible in the SDLC, whilst coding. They still have context around the errors as they occur, which makes fixing them faster.
- Leverage unit testing
While this isn’t usually done by QA, unit testing is a great way to improve your product’s quality. It takes effort, but as soon as you have more than one developer or a non-trivial product, it will start paying dividends.
Unit testing enables fast feedback, along with super-specific error reporting. Good tests enable your developers to be more brutal with code changes yet still be confident in the results.
To get greatest leverage from your unit tests, run them in the following order:
- The tests for files changed
- The entire feature changed. Structure or tag tests by feature to help here.
- All tests. Usually this should be for every push and run via CI only, due to speed.
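The staged ordering above can be sketched as a small planner. The file-to-test mapping and feature tagging here are assumptions; a real version would shell out to your test runner (e.g. pytest) for each stage:

```python
# Hypothetical sketch: plan test runs in fast-feedback order --
# changed files first, then the whole feature, then everything.

def plan_runs(changed_files, feature_of, all_tests):
    """Return test selections in the order they should be executed."""
    # Stage 1: tests for the files that changed (naming convention assumed).
    changed_tests = [f"tests/test_{f}" for f in changed_files]
    # Stage 2: every test tagged with a feature the change touched.
    features = {feature_of[f] for f in changed_files if f in feature_of}
    feature_tests = [t for t in all_tests if any(feat in t for feat in features)]
    # Stage 3: the full suite, usually reserved for CI.
    return [changed_tests, feature_tests, all_tests]

stages = plan_runs(
    changed_files=["cart.py"],
    feature_of={"cart.py": "checkout"},
    all_tests=["tests/checkout/test_cart.py", "tests/search/test_query.py"],
)
```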
- Version control
Great QA processes are always tracked in a version control system, such as Git, Mercurial, or Subversion.
While version control is the de facto standard within dev teams, and more recently operations, it’s much less common within QA teams. This shouldn’t be the case. Version control brings advantages to manual and automated tests alike. Keep your tests close to, or in, your developer workflow, along with the product’s code. This forces shipping the code and tests together, which results in:
- Always knowing what the expected behavior is for that code
- Being able to review and accept tests using standard code-review processes
- Keeping a history of tests
- Continuous integration
Good QA process is always part of a wider CI process. This is not primarily for speed, but for consistency. If things aren’t automated, they’ll be missed. For each release, whatever method you use – manual or automated – testing should be automatically triggered and reported on.
For human-based systems, this can prove slow unless the process can scale to your needs. Internal teams usually cannot scale this way, so your org must either accept the slower pace or find alternatives.
- Pluggable
Modern QA processes must integrate with your development tooling. They must support:
- Bug tracker integration. This enables tracking of defects and prioritization. Also, it gets the right information to developers fast. Make sure your process includes things like state, screenshots and logs (http, server, console).
- Integrate into your CI, even for manual processes you should block and wait for results. For automation, it must be runnable inside your CI system.
- Have machine-readable results, for example via an API. Ensure feature and/or test status is available, as well as detailed results.
These three allow easy as well as deep integration into your process.
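As a sketch of what machine-readable results enable, here is a consumer of a hypothetical JSON results payload; the shape is an assumption, but most systems expose something similar via an API:

```python
# Hypothetical sketch: consuming machine-readable QA results so CI can block
# on failures and a bug tracker can be updated. The JSON shape is assumed.
import json

payload = json.loads("""
{
  "run": 1042,
  "results": [
    {"test": "checkout/test_cart", "status": "pass"},
    {"test": "search/test_query", "status": "fail", "log": "timeout after 30s"}
  ]
}
""")

def failures(payload):
    """Pull out failing tests, with their logs, for triage."""
    return [r for r in payload["results"] if r["status"] == "fail"]

print([f["test"] for f in failures(payload)])  # -> ['search/test_query']
```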