The most challenging principle of Agile is “simplicity — the art of maximizing the amount of work not done.”

Developers waste immense cycles trying to avoid software failure. Rather than defining “good enough” reliability and stopping there, teams go way beyond the point of diminishing returns, building what is jokingly referred to as “gold plated” reliability infrastructure around their software. Fear of failure has instilled an all-or-nothing mindset toward software reliability that is the opposite of Agile simplicity.

The reason developers overbuild for reliability — and there are many ways you can over-engineer for performance, uptime, high availability, even security — is that they never really had a way to define “good enough” reliability in the first place. Without a clear picture of success or a clear finish line, developers can’t readily communicate with the organization. The big boss wants 100% reliability, and that leads to eye rolls from engineers.

Eureka! SLOs Forever Changed What I Thought I Knew About Reliability

After Google acquired a company I co-founded, we had to re-platform the entire solution to run on Google Cloud Platform. We got a crash course in the site reliability engineering (SRE) practices that run some of the world’s most popular web services at Google. Once you experience Service Level Objectives (SLOs) as I did, you will never be the same.

SLOs are the heart of Google’s reliability engineering practice: a math-based discipline of setting goals that model the reliability of cloud services. They are an evolution of SLAs (“service level agreements”), but more fine-grained and designed for consistent, slight overachievement over time that yields happy customers (as opposed to SLAs, whose violation represents a total disaster). Google uses SLOs to make better decisions in managing software and cloud services.

SLOs give developers a way to take the expected behavior of applications out of their brains and codify these outcomes so that product teams can track them.

To be more precise, SLOs take software events and set a target for the proportion of “good events” as a percentage of total valid events. SLOs are mapped to crucial application and system intervals and are representations of user success that you can modify over time. For example, a developer might set an SLO that an HTTP GET request for an API takes less than two milliseconds, 99.99% of the time. Or a developer might set an SLO for a specific user outcome to define success in a multi-part transaction (what’s a “good” success rate for a shopping cart checkout in an e-commerce transaction?).
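
To make that arithmetic concrete, here is a minimal sketch in Python; the event counts and the 99.99% target are hypothetical, chosen only to illustrate how an indicator is computed and checked against an objective:

```python
# A minimal sketch of the SLI/SLO arithmetic described above.
# All numbers here are hypothetical, for illustration only.

def sli(good_events: int, valid_events: int) -> float:
    """Service Level Indicator: good events as a proportion of valid events."""
    return good_events / valid_events

good = 9_999_412    # e.g., HTTP GETs that completed under the latency threshold
valid = 10_000_000  # all valid requests in the measurement window
slo_target = 0.9999 # "four nines" objective

print(f"SLI: {sli(good, valid):.6f}")                    # 0.999941
print(f"Meeting the SLO? {sli(good, valid) >= slo_target}")  # True
```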

Error Budgets are a conceptual model for understanding acceptable risk in your services. The idea is that your team plans for (or “budgets”) some small amount of error that customers won’t notice much and “spends” that budget to make the service better. Error rates rise and fall throughout the day or month, affecting your tolerance for taking on additional risk. The error budget vividly expresses how to balance many competing priorities across a large-scale software organization. And you can use error budgets to proactively trigger automation and alert on-call staff as the foundation of your observability stack.
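
As a rough sketch of how a budget is computed and “spent” (all numbers are hypothetical):

```python
# Hypothetical error-budget bookkeeping for one measurement window.

slo_target = 0.999            # 99.9% availability objective
total_requests = 50_000_000   # valid requests this window
failed_requests = 32_000      # requests that counted against the SLO

# The budget is simply the complement of the objective,
# expressed as an allowance of failed requests.
budget = (1 - slo_target) * total_requests   # 50,000 allowed failures
spent = failed_requests / budget             # fraction of budget consumed

print(f"Budget: {budget:,.0f} failed requests allowed this window")
print(f"Spent:  {spent:.0%} of the error budget")   # 64%
```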

SLOs Fill the White Spaces Between Dev and Ops for Agile Teams

Sometimes it seems like developers and operators speak two different languages. Devs are interested in pushing code, making apps work for users, and quickly bringing new ideas to life. The limit of their interest in reliability is whether they get woken up when their app breaks. In contrast, operators need to look at the core infrastructure and think about resource utilization, noisy neighbors, broadly applicable services for various apps, and how the overall system orchestrates to deliver constant, consistent, reliable value to customers.

The work not done by the developer creates a need for gold-plated infrastructure from the operators. The conversation not had between dev and ops leads to a disconnect in designing the services (for customers to be happy) and the engineering system (for engineers to be productive). 

The main benefit of SLOs is that you can focus resources on your most critical services because you understand what degree of unreliability is OK for the lower-impact areas. Counterintuitively, by lowering reliability goals where they matter less (basically any time the user is likely to blame their internet connection), you can focus on what matters most. The work not done in less critical areas frees up engineering effort and cloud capacity for the services most essential to your customers, as defined by SLOs.

SLOs are the missing language for this conversation, and they provide the decision points that allow developers and operations to navigate software reliability together:

New Features Versus Technical Debt

Agile teams face a constant decision point: build new features or pay down technical debt that may be causing customers pain. When is the right time to spend months refactoring unoptimized code? How do you know you aren’t spending cycles on a problem that isn’t currently causing users any pain? If you have modeled an SLO, you have a clear compass for making that decision.
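
As an illustration of what that compass might look like, here is a hypothetical decision helper; the thresholds are invented and would vary by team:

```python
# Hypothetical sprint-planning helper: use error-budget consumption
# to decide between shipping features and paying down technical debt.

def plan_next_sprint(budget_spent: float) -> str:
    """budget_spent: fraction of the window's error budget consumed so far."""
    if budget_spent < 0.5:
        return "Plenty of budget left: ship new features."
    if budget_spent < 1.0:
        return "Budget running low: mix in reliability work."
    return "Budget exhausted: stop and pay down the technical debt."

print(plan_next_sprint(0.35))  # Plenty of budget left: ship new features.
```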

Complex Workflows Without a Definition of ‘Success’

Many user journeys today traverse multiple APIs and multiple steps within your applications. Password reset is the canonical example the SLO community calls out to show why modeling success for these journeys in any “five nines” way is problematic. A password reset is contingent on the user entering their email correctly, the reset email not getting caught in a spam filter, the transactional email systems working correctly, and the user completing the process. You may find that for your “forgot password” system, 70 percent is the general expected success rate. There are countless examples of this type of complicated workflow where you can’t just look at the metrics of the individual systems; you have to look at the totality of the user experience and define success rates that describe user happiness. SLOs allow you to codify the expected outcome and to adjust that success rate upward or downward based on the data you are getting.
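
One way to codify such a journey is to treat the end-to-end flow as a single event and estimate its success rate as the product of the per-step rates. A sketch, with hypothetical step names and rates chosen to land near that 70 percent figure:

```python
# Hypothetical model of a multi-step "forgot password" journey.
# The journey counts as good only if every step succeeds.
from math import prod

step_success_rates = {
    "email entered correctly": 0.92,
    "reset email not caught by spam": 0.90,
    "transactional email system healthy": 0.999,
    "user completes the process": 0.85,
}

# End-to-end success is the product of the per-step success rates.
journey_success = prod(step_success_rates.values())
print(f"Expected end-to-end success: {journey_success:.0%}")  # 70%

slo_target = 0.70
print(f"Meeting the SLO? {journey_success >= slo_target}")    # True
```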

Guardrails for What You Can’t Test 

Testing is great, and testing isn’t going away. Unit testing, end-to-end testing, and integration testing all have their place in the software development lifecycle. But the reality today is that you simply cannot test every scenario in dev or staging in a way that will cover all the unforeseen ways that things break in production. SLOs give you another guardrail around your systems in production that lets you see through all the beeps and pages from APM and logging tools to get to the essence of whether your software is behaving in expected ways.
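
One common pattern for this kind of guardrail is burn-rate alerting: page only when the service is consuming error budget fast enough to threaten the SLO. A minimal sketch, with an illustrative threshold:

```python
# Sketch of a burn-rate guardrail. A burn rate of 1.0 means the service
# is spending its error budget exactly as fast as the SLO allows.

slo_target = 0.999
budget_fraction = 1 - slo_target      # 0.1% of requests may fail

def burn_rate(error_rate: float) -> float:
    """How many times faster than sustainable the budget is burning."""
    return error_rate / budget_fraction

recent_error_rate = 0.004             # hypothetical: 0.4% of requests failing
rate = burn_rate(recent_error_rate)   # 4.0

if rate >= 2.0:                       # illustrative paging threshold
    print(f"Burn rate {rate:.1f}x: page the on-call engineer.")
else:
    print(f"Burn rate {rate:.1f}x: within budget, no page needed.")
```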

Dependencies Between Teams

APIs are a really popular target for SLOs. SLOs allow you to define the latency or other performance characteristics of the APIs you create. If the objective you set is 10 ms of latency for your API, but something changes in a system that brings that up to 50 ms of latency, SLOs give you a method of seeing which team is violating the Error Budget for that SLO. This visibility makes SLOs an excellent fit for the “self-organizing teams” principle of Agile, where teams need ways to move independently while also understanding how their work impacts other groups.
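
A sketch of what that per-team visibility might look like, using the 10 ms objective from above; the team names and latency samples are hypothetical:

```python
# Hypothetical check of a latency SLO on an API shared between teams.

slo_latency_ms = 10.0   # objective: requests complete within 10 ms...
slo_target = 0.99       # ...99% of the time

# Observed request latencies (ms), tagged by the calling team:
latencies_by_team = {
    "team-checkout": [4, 6, 5, 7, 9, 8, 6],
    "team-search": [45, 52, 48, 50, 47, 49, 51],  # regression here
}

for team, samples in latencies_by_team.items():
    good = sum(1 for ms in samples if ms <= slo_latency_ms)
    sli = good / len(samples)
    status = "OK" if sli >= slo_target else "violating the error budget"
    print(f"{team}: SLI {sli:.0%} -> {status}")
```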

The Work Not Done Is Getting Started

I’ve talked with countless organizations frustrated with over-building reliability, but they can’t get out of that mindset because they feel so inundated with critical issues. They’ve got so much technical debt that they can’t stop digging. 

If there’s one thing I recommend to everyone, it’s to start thinking about how to benchmark your users’ expectations based on how your service works today and figure out what truly matters to them. Having wrong SLOs and iterating is infinitely better than having no goals and waiting to decide what is most critical. Better to make your service work for users before they vote with their feet.

Don’t put your customers and engineers in the gilded cage of the reliability work not done or even started.