For many companies, digital transformation has been happening gradually for years. But this year, the COVID-19 pandemic forced companies to transform faster than ever, or risk getting left behind. Brick-and-mortar stores needed to create digital storefronts, restaurants needed to invest more heavily in their online ordering operations to support delivery and takeout, and schools needed to adapt to remote learning.

And with everyone stuck at home, digital services were in greater demand than ever before. Online banking, streaming subscriptions, and e-commerce are just a few of the areas that have seen increased usage since the start of the coronavirus outbreak.

Even after the pandemic is over, the impacts will be permanent. Research from McKinsey indicates that 75% of people who are using digital channels for the first time will continue to use them even when things return to normal.

It’s clear that in order for these businesses to survive, they need to make user experience a top priority. With so many competitors and alternatives for consumers to choose from, a poor digital experience can quickly translate into lost business.

There are a number of ways that businesses can monitor those experiences and make changes based on what they’re seeing. 

It’s important to make the distinction between APM, observability, and digital experience monitoring. All three are important, but they serve different goals. The main difference between APM and observability is that APM is reactive while observability is proactive, Wes Cooper, product marketing manager at enterprise software provider Micro Focus, explained in an Observability Buyer’s Guide on ITOps Times. With observability, you are looking into the unknown and using automation to fix problems, whereas with monitoring, you are identifying known problems, he said.

Digital experience monitoring, on the other hand, is a form of monitoring that focuses on the user experience. With digital experience monitoring, development teams look to determine whether they are providing their users with a good experience. This allows them to identify the issues in an application that may be hurting that experience, which in turn results in happier users.

Both observability and APM are closely tied to the systems in an organization, while digital experience monitoring is more about the applications themselves.

It might seem that APM and digital experience monitoring would be done by separate groups, with system admins being responsible for APM, and product teams monitoring user experience. But Daniel Cooper, managing director at automation and digital transformation company Lolly Co, believes that in order to be successful, application monitoring needs to be done on top of system monitoring. “Essentially, you need to first establish your observability pillars before implementing your application monitoring strategy. The result of doing so is a monitoring ecosystem that will help you concentrate on customer experience issues and not the health of the application,” he said.

Tal Weiss, CTO and co-founder of code analysis tool OverOps, added that monitoring in general has been shifting from the responsibility of the operations teams to that of the development team. 

“Observability, infrastructure monitoring and APMs have traditionally been key tools in the hands of Ops teams when it comes to assuring the performance and uptime of key business applications,” said Weiss. “This was well-suited to a world in which an application’s quality of service was primarily determined by its infrastructure, and the performance of underlying system components such as DBs, web servers and more. But as code has begun eating the world of software, and infrastructure itself has become an API, the focus shifts towards the quality and reliability of the application’s code itself.”

The three levels of digital experience monitoring
According to Zack Hendlin, vice president of product at OneSignal, a provider of push notification services, there are three levels of user monitoring. 

The first level is measuring core system metrics like database health, CPU load, memory usage, disk space, queue lengths, API uptime, and error codes returned. These metrics cover the basics of knowing whether your application’s underlying infrastructure is working, and they can be tracked using a tool like Grafana or something similar, Hendlin explained.
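As a rough illustration of that first level, the sketch below exports a few host-level metrics in a format a Grafana/Prometheus setup could scrape. The choice of the prometheus_client and psutil libraries, the port, and the metric names are assumptions made for illustration; Hendlin does not prescribe a specific stack beyond Grafana.

```python
# Minimal sketch: expose basic infrastructure metrics that a Prometheus server
# could scrape and Grafana could chart. Library and metric-name choices here
# (prometheus_client, psutil) are assumptions, not something the article prescribes.
import time

import psutil
from prometheus_client import Gauge, start_http_server

cpu_load = Gauge("app_cpu_load_percent", "Host CPU utilization percentage")
memory_used = Gauge("app_memory_used_percent", "Host memory utilization percentage")
disk_used = Gauge("app_disk_used_percent", "Disk utilization percentage for /")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        cpu_load.set(psutil.cpu_percent(interval=None))
        memory_used.set(psutil.virtual_memory().percent)
        disk_used.set(psutil.disk_usage("/").percent)
        time.sleep(15)  # roughly match a typical Prometheus scrape interval
```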

The second level is monitoring what users experience. According to Hendlin, this includes things like page or app load times, drop-off at points in an app, and crash logs.

The third level — and most interesting, according to Hendlin — is measuring what your users are doing. According to Hendlin, this measurement can help answer these questions: “Are they skipping your onboarding flow? Inviting friends to your app? Posting content? How long do they spend in your app? When do they purchase?”

Hendlin continued: “At OneSignal we’ve focused on building the set of analytics that makes it easy to keep track of sessions, session duration, clicks and more — out of the box. And then we built the ability to track custom outcomes — which can be any action a user takes in your app. We see our users tracking when their customers redeem coupons, share positive / negative feedback with them, purchase (and the amount), follow a new musical artist, and more.”
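To make the idea of custom outcomes concrete, here is a minimal, tool-agnostic sketch: record any action a user takes, with an optional value and a timestamp, so it can be aggregated later. The Outcome record and record_outcome helper are hypothetical and are not OneSignal’s API.

```python
# Hypothetical, tool-agnostic sketch of "custom outcome" tracking: record any
# action a user takes (purchase, coupon redemption, new follow) with an optional
# value and a timestamp, so it can later be aggregated per user or per session.
import json
import time
from dataclasses import asdict, dataclass, field
from typing import Optional


@dataclass
class Outcome:
    user_id: str
    name: str                        # e.g. "purchase", "coupon_redeemed"
    value: Optional[float] = None    # e.g. purchase amount
    timestamp: float = field(default_factory=time.time)


def record_outcome(outcome: Outcome, sink) -> None:
    """Append one outcome event to a sink (local file, queue, analytics API)."""
    sink.write(json.dumps(asdict(outcome)) + "\n")


if __name__ == "__main__":
    with open("outcomes.jsonl", "a") as sink:
        record_outcome(Outcome("user-42", "purchase", value=19.99), sink)
        record_outcome(Outcome("user-42", "coupon_redeemed"), sink)
```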

How to gather feedback
There are a number of ways that a team can tackle monitoring those user experiences. One is gathering feedback from the users themselves. According to Hendlin, information can be gathered from user reviews, support questions, user research sessions, or conversations with customers. “With hundreds of questions a day, we keep a pulse on what people are asking for or where we could make parts of our product easier to understand. We aggregate these support conversations and share common themes to help the product team prioritize,” said Hendlin.

“Really understanding users comes from talking to them, observing how they interact with the product, analyzing where they were trying to do something but had a hard time, and seeing where they need to consult documentation or ask our support team,” said Hendlin. “There was a Supreme Court justice, Louis Brandeis, who said ‘There is no such thing as great writing, only great rewriting,’ and working on building a product and improving it is kind of the same way. As you get user feedback and learn more, you try to ‘re-write’ or update parts of the product to make them better.”

There are also tools that can be used to measure the technical components of digital experiences, such as latency or error rates. “Indicators like response time and error rates help to assess how easily (or not) customers are able to navigate an app,” said Eric Carrell, DevOps engineer at API company RapidAPI. “Application performance can also be measured by tracking app traffic and figuring out where demand spikes or hits a bottom low.”
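As a simple illustration of those indicators, the sketch below computes an error rate and a 95th-percentile response time from a list of request records. The data shape and the treatment of 5xx responses as errors are assumptions for illustration, not something Carrell specifies.

```python
# Minimal sketch: derive two digital-experience indicators, error rate and
# response time, from a list of (status_code, latency_ms) request records.
# The data source and the 5xx-as-error cutoff are illustrative assumptions.
import statistics


def error_rate(requests):
    """Fraction of requests that returned a 5xx status code."""
    errors = sum(1 for status, _ in requests if status >= 500)
    return errors / len(requests) if requests else 0.0


def p95_latency(requests):
    """95th-percentile response time in milliseconds."""
    latencies = [latency for _, latency in requests]
    return statistics.quantiles(latencies, n=20)[-1]  # last of 19 cut points ~ p95


if __name__ == "__main__":
    sample = [(200, 120), (200, 95), (500, 340), (200, 110), (200, 480)]
    print(f"error rate: {error_rate(sample):.1%}")
    print(f"p95 latency: {p95_latency(sample):.0f} ms")
```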

According to Carrell, tools used for monitoring take advantage of protocols like TCP and WMI to gather information. They also use SNMP polling data to monitor things like usage patterns, session details, and latency. “As a DevOps engineer I can easily diagnose issues with data on all transaction parts using tools that help me visualize end-to-end transactions,” said Carrell.

In addition to measuring things like latency or error rates, product teams can use tools that are designed specifically to monitor how users are actually interacting with software. According to OpsRamp product manager Michael Fisher, Pendo, Heap, and Mixpanel are examples of tools that do this. “These tools generally give insight around product adoption, usage and critical paths users take in the application,” said Fisher. 

A new breed of monitoring tools is emerging
According to Fisher, many APM tools are actually starting to incorporate features that allow for monitoring digital experiences. Usually this comes in the form of synthetics and helps provide a baseline of the paths that users take to complete a business transaction.
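A bare-bones synthetic check might look like the sketch below: replay a fixed user path on a schedule and time each step to establish that baseline. The requests library and the example URLs are assumptions and placeholders, not any particular vendor’s implementation.

```python
# Minimal synthetic-monitoring sketch: replay a simple user path and time each
# step to baseline a business transaction. The requests library and the example
# URLs are placeholders; a real check would run on a schedule from several regions.
import time

import requests

STEPS = [
    ("load_home", "https://example.com/"),
    ("view_product", "https://example.com/products/123"),
    ("load_cart", "https://example.com/cart"),
]


def run_synthetic_check():
    results = []
    with requests.Session() as session:
        for name, url in STEPS:
            start = time.perf_counter()
            response = session.get(url, timeout=10)
            elapsed_ms = (time.perf_counter() - start) * 1000
            results.append((name, response.status_code, elapsed_ms))
    return results


if __name__ == "__main__":
    for name, status, ms in run_synthetic_check():
        print(f"{name}: HTTP {status} in {ms:.0f} ms")
```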

“A new breed of tools that leverage machine learning and elements of AI is emerging in order to provide a deep and dynamic understanding of application code as it’s executing, not just the infrastructure on which it runs,” said Weiss. 

Examples of such tools include feature flags, AIOps platforms, and dynamic code analysis. These tools help developers understand when, where, and why business logic breaks, and then connect that context to the developer who originally wrote the code, Weiss explained.

“It’s by creating this 3D view of infrastructure, system components and application code that an innovative, continuously reliable customer experience can be fully delivered,” said Weiss. 

Challenges to digital experience monitoring
There are, of course, a number of challenges when it comes to monitoring digital experiences. One is that it can be difficult to define success. Fisher explained: “An application that is simply running, without users adopting, may be construed as successful. Conversely, an application with user adoption but poor performance may be seen as successful as well.” Fisher believes that an application can be considered successful when users are adopting it, it’s solving a real problem, and it is functioning as it should. 

Hendlin agreed that deciding on the right metrics can be a big challenge. For example, measuring the time a user spends in an application, and wanting it to be high, makes sense for a game or social media app. But for a bill-paying app, where users want to get work done quickly, you would want the average session duration to be as low as possible.

According to Hendlin, thinking through key touch points will help you define what metrics to track and instrument. For example, if you’re in e-commerce, you’d likely want to look at outcomes like coupon redemptions, add-to-cart actions, purchases, time spent browsing the site, or newsletter signups. A social media app would likely have key metrics like posts viewed, content posted, videos watched, comments made, ad revenue generated, and new friends made.
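As a toy example of turning those touch points into numbers, the sketch below computes an average session duration and a coupon redemption rate from hypothetical session records; whether a high or low average duration is “good” depends on the kind of app, as Hendlin notes.

```python
# Illustrative sketch: compute average session duration and a conversion-style
# outcome (share of sessions with a coupon redemption) from hypothetical data.
# Whether a high or low average duration is desirable depends on the app.
from statistics import mean

# (session_id, duration_seconds, coupons_redeemed) -- hypothetical sample data
sessions = [
    ("s1", 45, 0),
    ("s2", 310, 1),
    ("s3", 120, 0),
    ("s4", 30, 1),
]

avg_duration = mean(duration for _, duration, _ in sessions)
redemption_rate = sum(1 for _, _, coupons in sessions if coupons > 0) / len(sessions)

print(f"average session duration: {avg_duration:.0f} s")
print(f"coupon redemption rate: {redemption_rate:.0%}")
```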

Another challenge, according to Lolly Co’s Daniel Cooper, is getting stakeholders involved and on board. “Sure, you might have an idea of how to set up a comprehensive monitoring strategy, but it takes time, work and willingness to abandon the old way of doing things,” he said.