Code coverage is a popular technique software teams use to measure the quality of their code, but how effective is it? Codacy, an automated code review and code analytics provider, released a study on how different companies perform code coverage, and what happens when they don’t.

“Code coverage is arguably one of the most useful software metrics: At a glance, developers can evaluate the proportion of the code that has been tested,” said Jaime Jorge, CEO and cofounder of Codacy. “This was confirmed by conversations with thousands of developers, but we wanted to quantify the benefit of code coverage and share the best practices.”

According to the study, teams that tracked their code coverage reported higher code quality and less time spent maintaining that code. Those who didn’t monitor code coverage believed more time was needed to maintain their code, and those who didn’t enforce code coverage at all cited technical debt as their biggest problem.


“In itself, code coverage is just a number, but having code coverage policies in place helps foster a culture of continuous testing and code quality, which is very important,” said Jorge. “It helps improve development speed by identifying code that needs further testing, and weed out potential bugs early in the development life cycle. This also translates into less technical debt to grapple with in the future.”

Code coverage also allows users to see what was tested, what wasn’t, and why, as well as give teams confidence in the quality of their code, Jorge explained. “Say 60% of the code is covered (i.e. tested) and works wonderfully. What about the remaining 40% that was not tested? Is it reasonable to ship an application in this condition? For that reason, we found that on average projects that do track their coverage numbers and require a minimum threshold usually set a target of 80%,” he said.

However, there are also problems when it comes to code coverage. According to Jorge, while code coverage can tell you what was tested, it can’t tell you how good the underlying tests were. “This is in fact a weakness of code coverage: As a measure of code quality, it is only as good as the quality of the tests that drive it,” he said. “Also, we have seen cases where setting a 100% code coverage target induced developers to game the system by writing unit tests that just increased coverage, at the cost of actually testing the application meaningfully. This is particularly problematic when they face extreme time pressures.”
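The gaming problem Jorge describes is easy to see in a small sketch. The function and tests below are hypothetical (not from the study): the first test executes every line of the function, so a line-coverage tool would report 100%, yet it asserts nothing and would never catch a bug. The second test covers the same lines but actually pins down the expected behavior.

```python
# Hypothetical function under test, used only to illustrate the point.
def apply_discount(price, percent):
    """Return the price reduced by the given percentage, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# A coverage-gaming "test": it runs every line of apply_discount, so the
# coverage report shows 100%, but it makes no assertions. A bug in the
# arithmetic would slip through while the metric still looks perfect.
def test_apply_discount_gamed():
    apply_discount(100.0, 20)

# A meaningful test: identical coverage, but it verifies the actual
# behavior, including a boundary case.
def test_apply_discount_meaningful():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(100.0, 0) == 100.0

test_apply_discount_gamed()
test_apply_discount_meaningful()
```

Both tests contribute the same coverage number, which is exactly why coverage alone says nothing about test quality.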

As a result, Jorge believes a good code coverage policy should go hand in hand with good testing practices and systematic code reviews. He added that using Continuous Integration can also help enforce the desired level of coverage and detect problems early on.
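In practice, a CI pipeline enforces this by failing the build when coverage drops below a threshold; for example, the pytest-cov plugin exposes a `--cov-fail-under` flag for Python projects. The minimal sketch below (an illustration, not the study’s tooling) shows the underlying idea of such a gate, using the 80% target the study found most teams set.

```python
# Minimal sketch of a CI coverage gate (illustrative only; real pipelines
# delegate this to tools such as coverage.py or pytest-cov). The 80%
# default mirrors the common target reported in the study.
def coverage_gate(covered_lines, total_lines, threshold=80.0):
    """Return a CI-style exit code: 0 if coverage meets the threshold, 1 if not."""
    percent = 100.0 * covered_lines / total_lines
    print(f"coverage: {percent:.1f}% (threshold: {threshold}%)")
    return 0 if percent >= threshold else 1

# Example: a suite that executes 412 of 500 lines reaches 82.4%, so the
# gate passes; one covering only 300 lines (60.0%) would fail the build.
exit_code = coverage_gate(412, 500)
```

Running the gate on every commit is what turns the coverage number into the kind of continuous-testing policy Jorge describes.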

Other findings of the report included that only half of the respondents track code coverage; about 35% require a minimum code coverage threshold; and fewer than 20% have rules in place for code coverage.