The most successful development teams have four key benchmarks in common, according to CircleCI’s 2023 State of Software Delivery Report.

Successful teams keep workflow durations under 10 minutes, recover from failed runs in under an hour, maintain success rates above 90% on the default branch of their application, and deploy at least once per day, though the actual cadence depends on the business.
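
To make the four benchmarks concrete, here is a minimal sketch of how a team might check its own numbers against them. The team metrics and dictionary keys below are hypothetical; only the thresholds come from the report.

```python
# Hypothetical team metrics compared against the report's four benchmarks.
team = {
    "workflow_duration_min": 8.5,   # minutes per workflow run
    "recovery_time_min": 45.0,      # mean time to recovery, in minutes
    "success_rate": 0.93,           # default-branch success rate
    "deploys_per_day": 1.2,         # deployment frequency
}

# Thresholds cited in the report: <=10 min duration, <=60 min recovery,
# >=90% success on the default branch, >=1 deploy per day.
benchmarks = {
    "workflow_duration_min": ("<=", 10.0),
    "recovery_time_min": ("<=", 60.0),
    "success_rate": (">=", 0.90),
    "deploys_per_day": (">=", 1.0),
}

for metric, (op, target) in benchmarks.items():
    value = team[metric]
    ok = value <= target if op == "<=" else value >= target
    print(f"{metric}: {value} (target {op} {target}) -> {'pass' if ok else 'fail'}")
```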

Workflow duration measures how efficiently a software delivery pipeline provides feedback on the quality of code. The report states: “An exclusive focus on speed often comes at the expense of stability. A pipeline optimized to deliver unverified changes is nothing more than a highly efficient way of shipping bugs to users and exposing your organization to unnecessary risk. To be able to move quickly with confidence, you need your pipeline to guard against all potential points of failure and to deliver actionable information that allows you to remediate flaws immediately, before they reach production.”

Achieving productive feedback throughout the pipeline requires extensive testing at all stages, so the optimal workflow duration is the shortest time it takes to run through all of those tests. The 10-minute benchmark appears to be the practical floor for generating those test results.

For companies surveyed in the report, the median workflow duration was 3.3 minutes.
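
As an illustration, computing this figure is straightforward once run durations are collected. The sample durations below are invented and deliberately constructed so the median lands on the report's 3.3-minute figure.

```python
from statistics import median

# Hypothetical workflow durations in minutes, one entry per pipeline run.
durations = [2.8, 3.1, 3.5, 4.0, 2.9, 3.3, 9.7]

# Median of this invented sample is 3.3 minutes, matching the report's
# figure only by construction.
print(f"median workflow duration: {median(durations):.1f} minutes")
```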

Mean time to recovery measures the average time it takes to go from a failed build signal to a successful pipeline run. For teams whose pipelines give them a complete picture of code health and potential failure points, it is easier to bring systems back to a deploy-ready state following a failure.

“Diagnosing the failure and implementing a fix becomes a matter of evaluating test output and correcting or reverting defects rather than embarking on an endless bug hunt,” the report states. 

The report also revealed that while the benchmark for this metric is 60 minutes, the median performance across companies was slightly slower, at 64 minutes.
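
For illustration, here is a minimal sketch of one way to compute mean time to recovery from a chronological run log, pairing each failure signal with the next passing run. The event format and sample timestamps are assumptions for this example, not CircleCI’s API.

```python
from datetime import datetime

# Hypothetical chronological run log: (timestamp, passed?).
runs = [
    (datetime(2023, 5, 1, 9, 0), False),   # pipeline goes red
    (datetime(2023, 5, 1, 9, 50), True),   # recovered after 50 minutes
    (datetime(2023, 5, 1, 14, 0), False),
    (datetime(2023, 5, 1, 15, 10), True),  # recovered after 70 minutes
]

recovery_minutes = []
failed_at = None
for timestamp, passed in runs:
    if not passed and failed_at is None:
        failed_at = timestamp  # start of the outage window
    elif passed and failed_at is not None:
        delta = (timestamp - failed_at).total_seconds() / 60
        recovery_minutes.append(delta)
        failed_at = None       # back to a deploy-ready state

mttr = sum(recovery_minutes) / len(recovery_minutes)
print(f"mean time to recovery: {mttr:.0f} minutes")  # 60 minutes here
```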

Success rate is defined as the “number of passing runs divided by the total number of runs over a period of time.”
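
That definition translates directly into code. A minimal sketch, assuming run outcomes are available as a simple pass/fail list (the sample data is invented):

```python
# Hypothetical run outcomes on the default branch over some time window:
# True = passing run, False = failing run.
runs = [True, True, False, True, True, True, False, True, True, True]

# Passing runs divided by total runs, per the report's definition.
success_rate = sum(runs) / len(runs)
print(f"success rate: {success_rate:.0%}")  # 80% for this sample
```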

According to the report, a failed signal isn’t necessarily a bad thing; what matters more is the team’s ability to ingest the signal quickly and fix the error.

Survey respondents fell below the industry benchmark of 90% on the default branch; the average success rate was 77%.

“While neither number reaches our benchmark of 90%, the pattern of non-default branches having higher numbers of failures indicates that teams are utilizing effective branching patterns to isolate experimental or risky changes from critical mainline code. And while success rates haven’t moved much over the history of this report, recovery times have fallen sharply. This is a welcome sign that organizations are prioritizing iteration and resilience over momentum-killing perfectionism,” according to the report.

And finally, throughput, the average number of workflow runs on a given day, is used to measure team flow because it tracks the units of work moving through the CI system. The industry median was 1.52 runs per day.
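
Measured this way, throughput reduces to counting runs per day and averaging. A quick sketch with made-up dates:

```python
from collections import Counter
from datetime import date

# Hypothetical run dates: each entry is one workflow run.
run_dates = [
    date(2023, 5, 1), date(2023, 5, 1),
    date(2023, 5, 2),
    date(2023, 5, 3), date(2023, 5, 3), date(2023, 5, 3),
]

# Total runs divided by the number of active days.
runs_per_day = Counter(run_dates)
throughput = sum(runs_per_day.values()) / len(runs_per_day)
print(f"average workflow runs per day: {throughput:.2f}")  # 2.00 here
```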

CircleCI did note that throughput isn’t necessarily a measure of the quality of work, so it’s important to consider it alongside the other performance metrics to get the full picture.

“Far more important than the volume of work you’re doing is the quality and impact of that work. Thoroughly testing your code and keeping your default branch in a deploy-ready state ensures that, regardless of when or how often changes are pushed, you can be confident they will add value to your product and keep your development teams focused on tomorrow’s challenges rather than yesterday’s mistakes,” the report concludes.