The release of the ChatGPT and Whisper APIs this month sparked a frenzy of creative activity among developers, allowing many companies to build generative AI capabilities into their apps for the first time. Businesses including Salesforce, HubSpot, ThoughtSpot, and Grammarly have rushed to add generative AI features to their products.

This gives businesses a great way to differentiate their software, but only if the user experience is a good one. The generative AI function may be at the core of the new offering, but the overall end-user experience with the app is just as critical. If you throw a quickly developed new app into the hands of your users, how can you be sure its performance will be optimal?

For a reminder of the negative impact a poor user experience can have, just recall the problems Ticketmaster had in November when Taylor Swift’s concert tickets went on sale. Fans of the pop idol (including me!) swamped the Ticketmaster website and brought the system to its knees, resulting in frustrated customers and a wave of bad publicity for the company.

Ticketmaster may operate on a whole different scale than your new generative AI feature, but the principle remains the same: if you're releasing something people are likely to get excited about (which hopefully applies to everything you build), the user experience is critical. You need to be ready for a spike in demand if it proves popular, and be able to track performance in real time to ensure the experience is a positive one.

Fortunately, developers can mitigate these problems with the right planning and practices in place. There are key performance metrics that development teams should monitor at all times, especially during peak traffic, to avoid application crashes, long wait times, and burnout for the teams who have to fix these problems when they arise.

Here are the five performance metrics developers should track for their applications. Whether it's a fancy new AI product, an updated website, or just a core application for your customers, these metrics are always important:

Web Vitals:

Largest Contentful Paint, or LCP, measures how quickly the main content of a webpage loads. A fast LCP is a clear sign that customers are getting a responsive experience as they move between pages. In tandem, Cumulative Layout Shift, or CLS, scores how much unexpected layout movement a user experiences. This plays out on busy shopping days: those pages typically carry ad and sales notifications that affect a brand's main page layout, potentially causing shoppers to experience unexpected shifts. This makes it harder for them to shop and becomes a hindrance if a developer cannot quickly pinpoint and address the issue.
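
Both metrics can be captured in the browser with Google's open-source web-vitals package. The sketch below assumes that package's v3+ API (onLCP/onCLS) and uses a placeholder /analytics endpoint to stand in for wherever you actually report measurements:

```typescript
// Minimal sketch: collect LCP and CLS in the browser and beacon them to a
// reporting endpoint. `/analytics` is a stand-in, not a real route.
import { onLCP, onCLS, type Metric } from 'web-vitals';

function report(metric: Metric): void {
  // sendBeacon survives page unloads, so late-arriving CLS updates still get through.
  const body = JSON.stringify({ name: metric.name, value: metric.value, rating: metric.rating });
  navigator.sendBeacon('/analytics', body);
}

onLCP(report); // fires once the largest contentful element has rendered
onCLS(report); // reports the accumulated layout-shift score as shifts occur
```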

Error Count vs. Error Rate:

Error counts typically increase alongside site traffic as more people flock to a website; that's a natural occurrence. The error rate is more telling because it reveals whether a greater proportion of users are experiencing issues with your application or website. Developers who see this metric climb should investigate and take action.
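
To make the distinction concrete, here is a simplified sketch (the TrafficWindow shape and the sample numbers are illustrative, not from any particular tool): the error count rises with traffic even when the app is healthy, while the error rate normalizes by request volume.

```typescript
// Error *count* scales with traffic; error *rate* is count divided by volume.
interface TrafficWindow {
  totalRequests: number;
  erroredRequests: number;
}

function errorRate({ totalRequests, erroredRequests }: TrafficWindow): number {
  return totalRequests === 0 ? 0 : erroredRequests / totalRequests;
}

// Twice the traffic and twice the errors: the count doubled, but the rate held steady.
console.log(errorRate({ totalRequests: 10_000, erroredRequests: 50 }));  // 0.005
console.log(errorRate({ totalRequests: 20_000, erroredRequests: 100 })); // 0.005

// Same traffic, far more errors: this is the signal worth investigating.
console.log(errorRate({ totalRequests: 20_000, erroredRequests: 400 })); // 0.02
```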

Mobile Monitoring:

More and more consumers are using their mobile devices to shop and manage all aspects of their lives. For example, mobile accounted for 45% of sales during the holiday shopping period last year from October to December. Mobile performance metrics are critical to understanding what is happening on your users’ devices during these busy times. Vitals like frozen and slow frames, or cold and warm starts when an app opens, provide visibility into how fast views are loading for your users. As became obvious with the Taylor Swift incident, a slow shopping experience can lead to an outcry and a lot of upset customers.
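
As a rough illustration of how those frame-based vitals are derived, the sketch below buckets frame render times into slow and frozen frames. The thresholds follow the common convention (a 60 Hz display budgets roughly 16.67 ms per frame, and frames beyond about 700 ms read as a frozen UI); check your monitoring SDK's documentation for the exact values it uses.

```typescript
// Classify frame render times into slow and frozen buckets.
const SLOW_FRAME_MS = 16.67; // missed the 60 fps render budget
const FROZEN_FRAME_MS = 700; // long enough that the UI appears to hang

function summarizeFrames(frameDurationsMs: number[]) {
  const slow = frameDurationsMs.filter((d) => d > SLOW_FRAME_MS && d <= FROZEN_FRAME_MS).length;
  const frozen = frameDurationsMs.filter((d) => d > FROZEN_FRAME_MS).length;
  return { total: frameDurationsMs.length, slow, frozen };
}

console.log(summarizeFrames([8, 12, 30, 18, 900, 15]));
// { total: 6, slow: 2, frozen: 1 }
```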

Beyond the vitals related to mobile performance, it's important to monitor mobile releases through application release health. Developers can watch the rate of crash-free sessions and crash-free users as traffic to an application increases, which helps identify abnormalities in the overall health of the app.
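
The math behind those release-health numbers is straightforward; the sketch below (with made-up sample figures) shows the idea: the share of sessions, or of distinct users, that never hit a crash.

```typescript
// Crash-free rates: sessions (or users) that never crashed, divided by the total.
interface ReleaseHealth {
  totalSessions: number;
  crashedSessions: number;
  totalUsers: number;
  usersWhoCrashed: number;
}

function crashFreeRates(h: ReleaseHealth) {
  return {
    crashFreeSessions: 1 - h.crashedSessions / h.totalSessions,
    crashFreeUsers: 1 - h.usersWhoCrashed / h.totalUsers,
  };
}

// If traffic doubles but these rates hold steady, the spike alone isn't hurting
// stability; a dip in either rate is what warrants a closer look.
console.log(crashFreeRates({
  totalSessions: 50_000, crashedSessions: 150,
  totalUsers: 12_000, usersWhoCrashed: 90,
}));
// { crashFreeSessions: 0.997, crashFreeUsers: 0.9925 }
```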

Slow Database and HTTP Ops:

During sustained traffic spikes, database queries and HTTP requests that take too long to execute harm the user experience. Hitting a slow checkout process, or seeing the dreaded spinning beach ball after adding a product to the cart, confuses shoppers. They don’t know if they should refresh the page, and often they will abandon a purchase completely.

What’s more, if a developer is working in a backend framework, slow database and HTTP ops can also be a sign of an N+1 query problem: the application fetches a list with one query and then issues an additional query for every item in a loop, dragging down performance for the end user.
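
Here is a hedged sketch of that pattern and one way to avoid it. The Db interface, table names, and helper functions are hypothetical stand-ins for whatever database client or ORM you actually use:

```typescript
// The `Db` interface stands in for your database client or ORM; it is not any
// specific library's API.
interface Db {
  query(sql: string, params?: unknown[]): Promise<any[]>;
}

// N+1: one query for the orders, then one more query per order inside a loop.
async function orderItemsNPlusOne(db: Db, customerId: string) {
  const orders = await db.query('SELECT id FROM orders WHERE customer_id = $1', [customerId]);
  const items: any[] = [];
  for (const order of orders) {
    // Each iteration is a separate round trip; 200 orders means 201 queries.
    items.push(...await db.query('SELECT * FROM order_items WHERE order_id = $1', [order.id]));
  }
  return items;
}

// Batched: a single join returns every item for the customer's orders.
async function orderItemsBatched(db: Db, customerId: string) {
  return db.query(
    `SELECT oi.* FROM order_items oi
     JOIN orders o ON o.id = oi.order_id
     WHERE o.customer_id = $1`,
    [customerId]
  );
}
```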

User Misery:

User Misery is a homegrown metric we use at Sentry to help developers understand a customer’s experience with an application. It is the ratio of unique users who experienced load times above 4x a configured response-time threshold in a Sentry project to the total number of unique users, and thus serves as a proxy for customer frustration. With this User Misery score, our developers can see which transactions have the biggest negative impact on users and prioritize fixing them.
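
The sketch below is a simplified approximation of that idea rather than Sentry's exact production formula (which layers statistical smoothing on top of the ratio); the event shape and sample numbers are illustrative only.

```typescript
// Simplified User Misery: unique users whose slowest transaction exceeded 4x the
// configured threshold, divided by all unique users seen.
interface TransactionEvent {
  userId: string;
  durationMs: number;
}

function userMisery(events: TransactionEvent[], thresholdMs: number): number {
  const miseryThresholdMs = thresholdMs * 4;
  const allUsers = new Set(events.map((e) => e.userId));
  const miserableUsers = new Set(
    events.filter((e) => e.durationMs > miseryThresholdMs).map((e) => e.userId)
  );
  return allUsers.size === 0 ? 0 : miserableUsers.size / allUsers.size;
}

// With a 300 ms threshold, anything over 1,200 ms marks that user as miserable.
console.log(userMisery(
  [
    { userId: 'a', durationMs: 250 },
    { userId: 'b', durationMs: 1500 },
    { userId: 'c', durationMs: 900 },
  ],
  300
)); // ≈ 0.33
```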

Providing a smooth end-user experience is table stakes for businesses throughout the year, but especially during high-pressure, culturally driven spending moments. In any of these scenarios, the developers behind the scenes often work long and odd hours to address performance issues and ensure the most seamless experience possible. You can’t prepare for everything that could possibly go wrong, but you can be equipped with the right tools and metrics to quickly identify and remediate any problems that occur.