It may sound counter-intuitive to say that developers shouldn’t perform testing on the products they produce. After all, who knows a site or app better than those who created it? Aren’t developers exactly the people who should be testing software, given that they put it together and know how it’s supposed to work?

It’s this common-sense assumption that lies behind the idea that developers should assist test teams with strategies like unit testing, if not take complete responsibility for testing themselves. The latter approach—having all testing, both GUI and functional, carried out by developers rather than a dedicated QA team—has been adopted by many development and digital agencies, and on the face of it the choice seems logical.
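
To make the distinction concrete, a unit test is a small, automated check a developer writes against an isolated piece of logic. The sketch below is a minimal illustration in Python using the pytest convention; the apply_discount function and its expected values are hypothetical, invented purely for the example.

```python
# Minimal sketch of developer-side unit testing (pytest style).
# apply_discount is a hypothetical function used only for illustration.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_reduces_price():
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(59.99, 0) == 59.99
```

Checks like these catch regressions in individual functions, but they say little about how the assembled site behaves across browsers and devices, which is where the rest of the argument comes in.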

It also seems prudent in terms of costs. Full-time QA teams are expensive to maintain and, because of the intermittent nature of software testing, are often underutilized. From this perspective, having developers perform testing as and when it’s needed makes more sense than employing testers who will always have too much downtime.

But as rational as all of this may sound, it’s not a good idea to do without professional software testers. The reason? Software tested by its own developers is more likely, not less, to be released with bugs.

There are several contributing factors here, but one of the most important is that testing overburdens developers with workloads that are too heavy. A developer who has spent all day coding and is then required to put in a couple of hours testing is likely to tire quickly, lose focus and make mistakes, resulting in crucial bugs being missed.

This creates a need for post-go-live fixes, which further increase workloads and make it even more likely that exhausted developers will miss bugs on other testing projects, too. A self-defeating cycle can result, where all efforts to ensure that software works better end up making it worse.

Another contributing factor is that developers often simply don’t have enough time to perform testing effectively. When testing has to be fitted in around development work and completed within the standard window of two or three days before a site or app goes live, it is difficult for developers to verify functionality across a broad range of Web and mobile platforms.

And broad device coverage is becoming increasingly indispensable to success online. As the mobile device market continues to expand, and as apps and websites are used on a wider array of smartphones and tablets, any piece of software that has only been tested on a handful of devices runs the risk of functioning poorly for many of its users.

Compounding these issues of time and energy is the repetitive nature of software testing itself. As a website is tested on more and more devices and browsers, the individual doing the testing can quickly develop expectations as to how that site should look on the next device or browser, and these expectations can come to replace the perception of what’s actually there, leading to bugs being missed.

For this reason, as a general rule, a single individual should test the same website on no more than two different browsers or devices if they are to remain effective. This principle, however, is hard to put into practice when a handful of developers have to test software on multiple platforms as quickly as possible.

And, in fact, developers are at a further disadvantage here precisely because they know the software they produce so well. An intimate knowledge of the functionality and design of sites and apps can lead to developers having stronger preconceived notions of what to expect when testing than professional testers, making it even more likely that obscure bugs will be overlooked.

In addition to all of these pressures and constraints, there’s the crucial fact that software testing, as a discipline, requires a very specific set of skills, many of which are not necessarily shared with the practice of software development.

From effectively dividing up a piece of software for unit testing, to writing test scripts, deciding which aspects of a site would benefit from exploratory testing, and collating and acting on the results of successive test cycles, professional software testers structure their work in many different ways to ensure that software is exposed to the greatest amount of scrutiny possible.
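
As a rough illustration of what a scripted, repeatable check looks like in practice, here is a minimal cross-browser smoke test, assuming Python with pytest and Selenium, Chrome and Firefox installed locally, and the public example.com page standing in for a real site; none of these specifics come from the article.

```python
# Minimal sketch of a scripted cross-browser smoke test (pytest + Selenium).
# Assumes Chrome and Firefox are installed and their drivers are available.
import pytest
from selenium import webdriver

BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
}

@pytest.fixture(params=sorted(BROWSERS))
def driver(request):
    # One browser instance per parametrized run, closed afterwards.
    drv = BROWSERS[request.param]()
    yield drv
    drv.quit()

def test_homepage_loads(driver):
    # example.com is a placeholder for the site under test.
    driver.get("https://example.com")
    assert "Example Domain" in driver.title
```

The point of a script like this is not sophistication but repeatability: the same checks run identically on every browser, every cycle, without depending on a tired human’s expectations of what the page should look like.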

Although developers may achieve a certain amount of success with a largely exploratory approach to testing their software, there can be no substitute for highly structured testing that utilizes years of professional knowledge and expertise.

In light of these drawbacks and complications, the strategy of having developers perform testing looks shortsighted and counter-productive. Not only does it often result in bugs on apps and websites escaping into the live environment and damaging user experience, it also degrades developers’ wellbeing and the quality of their work.

As well as putting exhausted developers into a self-defeating cycle of ineffective testing and post-go-live fixes, having them test software can harm their creativity and attention to detail while developing. This can pose a significant threat to the long-term vitality of development and digital agencies, which need to invest in working with professional software testers to safeguard the quality of their output.

Recognizing that testing is not an unimportant extra task to be offloaded onto developers, but an essential component in the production of outstanding software, will thus enable agencies to keep their service levels high, along with their developers’ energy and enthusiasm.