To ensure websites and applications deliver consistently excellent speed and availability, some organizations are adopting Google’s Site Reliability Engineering (SRE) model. In this model, a Site Reliability Engineer (SRE) – usually someone with both development and IT Ops experience – institutes clear-cut metrics that determine when a website or application is production-ready from a user performance perspective. This helps reduce the friction that often exists between the “dev” and “ops” sides of organizations. More specifically, metrics can eliminate the conflict between developers’ desire to “Ship it!” and operations teams’ desire not to be paged while on call. If performance thresholds aren’t met, releases cannot move forward.
Sounds simple and straightforward enough, but you’d be surprised at how challenging the SRE role can be, given basic human psychology. Our desire to see ourselves and our teams in a positive light, and to avoid negative consequences, can lead us to subconsciously game, distort, and manipulate metrics.
Consider the following scenario: A release is behind schedule. Developers are in a rush to roll out the new features customers have been requesting, and being first to market with these features will be a huge win for the company. They look at the performance data: with the new features, render time (the time elapsed between a user’s request and the moment content appears in the browser) is 2.9 seconds; without them, it is 2.6 seconds. That’s only a 0.3-second difference, so they say, “Users won’t notice. Ship it!”
On the other hand, the operations team has been dealing with a number of high-priority incidents as a result of a recent release. They want to avoid being paged for issues that could have been prevented by slowing down the pace of releases. They look at the performance data and see that the load time (the time it takes for the application to be ready for user interaction, such as typing in text fields) is 3.3 seconds. That’s 0.3 seconds above the generally accepted 3-second threshold, so they say changes need to be made before the release rolls out.
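To make the disagreement concrete, here is a minimal sketch of the two checks each team is implicitly running. The render and load numbers come from the scenario above; the 20 percent and 3-second anchors mirror the anchors discussed below, and the function names are purely illustrative.

```python
# A sketch of the two checks each team is implicitly running. The render/load
# numbers come from the scenario above; the 20% and 3-second anchors mirror the
# anchors discussed later, and the function names are purely illustrative.

def dev_check(render_time_s: float, baseline_s: float, max_regression_pct: float = 20.0) -> bool:
    """Dev anchor: ship unless render time regresses more than ~20% vs. baseline."""
    regression_pct = (render_time_s - baseline_s) / baseline_s * 100
    return regression_pct <= max_regression_pct

def ops_check(load_time_s: float, threshold_s: float = 3.0) -> bool:
    """Ops anchor: block if load time exceeds the 3-second engagement threshold."""
    return load_time_s <= threshold_s

print(dev_check(render_time_s=2.9, baseline_s=2.6))  # True  -> "Ship it!"
print(ops_check(load_time_s=3.3))                    # False -> hold the release
```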
Even with performance metrics in place, the two teams view the same situation very differently because of their biases. Let’s examine why these biases exist and how an SRE can help reduce them.
Why bias seeps in
Before you stop reading because you aren’t biased, take pause. Bias isn’t bad. It helps our brains solve four problems:
- There’s too much information. The volume of performance metrics collected by organizations is staggering. Lyft gathers 20 million metrics per second. Slack stores 90 TB of metrics data. As a result, we tend to notice only changes, bizarreness, repetition, or confirmation.
- There’s not enough meaning. Context helps us understand and remember information. When context is missing from data we fill in gaps with known patterns, generalities, or our current mindset.
- There’s not enough time. When an incident occurs, the race is on to resolve it as quickly as possible. Time is of the essence so we assume we’re right, we can do this, and easier is better.
- Memory is finite. Our brains can only remember and recall so much. We save space by editing memories down, generalizing, and relying on examples.
Types of biases
Three common biases influencing metrics are anchoring bias, confirmation bias, and narrative bias. Anchoring is the tendency to rely on a single piece of data, usually the first piece of information we encounter. With thousands of metrics being collected and deadlines to be met, there is too much data to process. Instead, we find a single piece of data and create an anchor. In our scenario, the developers saw that the performance decline was only about 10 percent. They remembered studies saying users won’t notice a change of less than 20 percent. Operations used a different anchor: best practices stating that user engagement drops when load times exceed 3 seconds. These different anchors led to two contrasting attitudes about the significance of 0.3 seconds.
SREs can help create consistent anchors based on what makes sense for the business, rather than letting each team work off its own. Do the research, benchmark similar sites to see how they perform, and decide what an acceptable anchor or threshold should be.
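One lightweight way to make the agreed anchors explicit is to keep them in a single shared definition that every team’s checks read from. The sketch below is a hypothetical Python module; the metric names and budget values are placeholders an SRE would set after benchmarking.

```python
# A hypothetical single source of truth for performance anchors, set by the SRE
# after benchmarking comparable sites. Metric names and values are placeholders.
PERFORMANCE_BUDGET = {
    "render_time_s": 3.0,  # ship-blocking ceiling, in seconds
    "load_time_s": 3.0,
}
MAX_REGRESSION_PCT = 10.0  # flag any release that regresses a metric by more than this

def within_budget(metric: str, value: float) -> bool:
    """Every team evaluates a release against the same shared anchors."""
    return value <= PERFORMANCE_BUDGET[metric]
```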
Confirmation bias is the tendency to search for and interpret information that confirms a pre-existing hypothesis. Instead of looking at all the data, we focus on the data that supports our view. The dev team looked at application render time – which tends to be the faster metric – and confirmed their desire to roll the release out. The ops team looked at application load time – a metric that tends to be slower than render time – and confirmed their desire to delay. We want our teams and our company to be perceived as valuable and successful, so we look for metrics that prove our case, even when those metrics may not be actionable or paint the whole picture.
Using two different metrics, render vs. load, both teams found “proof” to support their viewpoint. Having an SRE define which metrics and measurements determine go/no-go status keeps confirmation bias from seeping in. In most cases, those should be the metrics that most accurately reflect the user experience.
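As a rough illustration of such a gate (the metric names and limits below are assumptions, not a prescription), a go/no-go check might evaluate only the user-facing metrics the SRE has designated, so neither team can cherry-pick whichever number supports its case.

```python
# An illustrative go/no-go gate: only the SRE-designated, user-facing metrics
# decide the outcome, so neither team can cherry-pick a friendlier number.
USER_FACING_LIMITS = {"render_time_s": 3.0, "load_time_s": 3.0}  # assumed limits

def release_gate(measured: dict[str, float]) -> bool:
    """Go only if every designated user-facing metric is within its limit."""
    return all(measured.get(name, float("inf")) <= limit
               for name, limit in USER_FACING_LIMITS.items())

# Server-side numbers may look great, but the gate ignores them by design.
print(release_gate({"render_time_s": 2.9, "load_time_s": 3.3, "ttfb_s": 0.2}))  # False: load time breaches the limit
```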
Narrative bias is the tendency to assume information must be part of a larger story or pattern. Humans prefer stories and anecdotes to raw empirical data: stories are easier to remember and provide context, and information that carries meaning is easier to recall. Let’s say a page loads in 3.2 seconds and has a higher bounce rate than pages that load in 3 seconds. Narrative bias dictates that this must be due to poor performance. However, it is just as likely the result of random variation. We like to believe there is a cause rather than accept that events can be random; it gives us a sense of control. If there’s a cause, it can be corrected; if not, we are not in control. As a result, DevOps teams often end up wasting time chasing down problems that do not exist. It’s up to SREs to remind others that random events can and do happen.
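Before chasing a cause, an SRE can ask how likely it is that the difference is noise. Below is a minimal sketch of a two-proportion z-test applied to hypothetical bounce-rate samples; the visit and bounce counts are made up for illustration.

```python
from math import erf, sqrt

def bounce_rate_p_value(bounces_a: int, visits_a: int, bounces_b: int, visits_b: int) -> float:
    """Two-sided p-value for the difference between two bounce rates (two-proportion z-test)."""
    rate_a, rate_b = bounces_a / visits_a, bounces_b / visits_b
    pooled = (bounces_a + bounces_b) / (visits_a + visits_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (rate_a - rate_b) / std_err
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation

# Hypothetical samples: the "slow" page's bounce rate looks worse (15.5% vs. 13.8%),
# but with these sample sizes the difference is well within random variation (p ~ 0.48).
print(bounce_rate_p_value(bounces_a=62, visits_a=400, bounces_b=55, visits_b=400))
```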
SREs must assume the role of adjudicator
Even with a multitude of performance metrics at their fingertips, the SRE’s role is not so simple. It is not enough simply to present metrics. Metrics need to be presented in a way that best points everyone to the truth, guiding decisions about what is best for the business.
To accomplish this, SREs must be aware of biases and be methodical about overcoming them. Position metrics within the context of neutral industry information. Emphasize metrics that matter – increasingly, those demonstrating how real users perceive performance. Metrics that do not convey this tend to be “vanity metrics” – numbers that may make people feel good but are largely devoid of meaning where user performance is concerned. Once software is in production, help others avoid drawing hasty conclusions when variances are detected. Don’t assume something is wrong unless clear patterns appear in the data.
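One hedged way to operationalize “wait for a clear pattern” is to treat a metric as regressed only after it breaches its threshold across several consecutive measurement windows, rather than on a single spike. The threshold and window count in this sketch are assumptions to tune per service.

```python
# A sketch of "wait for a clear pattern": flag a regression only when the metric
# exceeds its threshold in several consecutive windows, not on a single spike.
# The threshold and window count are assumptions to tune per service.
def sustained_regression(samples_s: list, threshold_s: float = 3.0,
                         consecutive_windows: int = 3) -> bool:
    """True only if the threshold is exceeded in N consecutive samples."""
    streak = 0
    for value in samples_s:
        streak = streak + 1 if value > threshold_s else 0
        if streak >= consecutive_windows:
            return True
    return False

print(sustained_regression([2.8, 3.2, 2.9, 3.1, 2.7]))  # False: isolated spikes
print(sustained_regression([2.9, 3.1, 3.2, 3.4, 3.3]))  # True: sustained breach
```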
Perhaps most importantly, avoiding bias means acknowledging that it exists and that everyone – yes, even you – is biased to some degree. Failure to see yourself as biased is itself a bias: the bias blind spot. DevOps teams must foster an environment where questions can be asked in a positive and constructive manner, whether they come from a junior engineer, a senior engineer, or a non-engineer. Encourage dissenting viewpoints and impress upon team members the importance of considering and accepting answers – even ones they weren’t looking for.