Executives understand that mobile application quality matters. They realize that employees and consumers have a wide range of choices, and that users are willing to abandon apps at the first sign of poor performance. They may even know that 22% of apps are used only once, and that 60% of users who don’t return to an app within seven days are gone forever.

Forward-thinking executives consider the effects of quality on brand image, employee productivity, and even governance and compliance. Still, while these are precisely the issues that concern upper management and the board, application quality is rarely a board-level key performance indicator.


The reason is simple: Most executives have no visibility into the actual quality of the apps their teams deliver, and when they do, it is only anecdotal. Features and expected delivery timelines (e.g., “three weeks to multi-user support”) are integral to business planning, yet quality metrics for those features are not reported. Instead there is a binary checklist in which features are either present or absent, when it is the quality of those features that will make or break mobile success.

American Airlines discovered the consequences of poor mobile quality firsthand when a glitch in its app for pilots delayed flights for several thousand customers. The airline was able to triage the problem, but not before the app crashed entirely, damaging the company’s reputation and bottom line. For eBay, just an hour of downtime for its mobile channels represents more than US$3 million in commerce volume.

The solution: Quantifying quality
The answer is straightforward: Executives and board members should insist on a stream of concrete, repeatable data regarding mobile quality to inform business decisions. A good deal of quality is based on user perception, so the “quality metric” may never reach 100% quantification, but businesses can make enormous strides in the right direction by following six steps.

Step 1: Monitor app performance over time. One board-level quality metric that is easy to measure and has a big impact on an app’s success is performance. Sixty-one percent of users expect apps to launch in four seconds or less, and nearly half expect in-app responses in less than two seconds. Failing to meet those expectations is costly. Fifty-three percent of users have uninstalled a mobile app with severe issues like crashes, freezes or errors, and 80% will attempt to use a problematic app only three times or fewer. This is true for consumer apps, where a competing app is just a swipe away, and for enterprise apps, where busy employees will abandon slow apps or find workarounds (often unsanctioned).

Trended over time, performance data is an excellent high-level measure of whether the overall user experience is getting better or worse.
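
As a minimal sketch of how such trending might be computed, the Python script below aggregates weekly performance samples and flags weeks that blow the budgets cited above. The sample data, week labels, and metric names are invented; real samples would come from an APM or monitoring tool.

```python
"""Trend app performance against UX budgets (illustrative sketch)."""
from collections import defaultdict
from statistics import quantiles

# Budgets from the figures above: launch in <= 4s, in-app response in < 2s.
BUDGETS = {"launch": 4.0, "response": 2.0}

# Hypothetical weekly samples: (ISO week, metric name, seconds observed).
samples = [
    ("2024-W01", "launch", 3.1), ("2024-W01", "launch", 4.6),
    ("2024-W01", "response", 1.4), ("2024-W02", "launch", 2.9),
    ("2024-W02", "response", 2.3), ("2024-W02", "response", 1.1),
]

by_week = defaultdict(list)
for week, metric, secs in samples:
    by_week[(week, metric)].append(secs)

for (week, metric), values in sorted(by_week.items()):
    # p95 is a common trending statistic; a median hides the worst sessions.
    p95 = quantiles(values, n=20)[-1] if len(values) > 1 else values[0]
    status = "OK" if p95 <= BUDGETS[metric] else "OVER BUDGET"
    print(f"{week} {metric:8s} p95={p95:.2f}s {status}")
```

Plotted release over release, the "OVER BUDGET" weeks are exactly the inflection points a board-level chart should surface.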

Step 2: Test the true user experience. One Android or iOS device is not the same as another. With greater network variance, a wider range of screen sizes, and a greater variety of input methods than the desktop, mobile app quality is even more critical, and it is something mobile teams can actually control.

The only way to understand the true user experience you’ll deliver is to test on real devices. Simulators can provide a quick “sanity check,” but they are only an approximation of a target device, and as software applications they have their own set of bugs and limitations. Design a test suite that exercises every user path, and test on every target platform that accesses your app (made practical by automation; see Step 3).

To plan future strategic moves and investments, provide board-level metrics that show pass/fail rates for tests on devices that represent the majority of market share for your audience.
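
One way to make those pass/fail rates board-ready is to weight them by each device’s share of your audience, so a failure on a dominant handset counts for more than one on a niche model. A hypothetical sketch, in which the device names, share figures, and results are all invented:

```python
"""Weight on-device test results by market share (illustrative sketch)."""

# Hypothetical audience share per device, summing to ~1.0.
market_share = {"Pixel 8": 0.30, "Galaxy S23": 0.45, "iPhone 15": 0.25}

# Hypothetical suite results per device: (tests passed, tests run).
results = {"Pixel 8": (96, 100), "Galaxy S23": (88, 100), "iPhone 15": (99, 100)}

weighted_pass = 0.0
for device, share in market_share.items():
    passed, run = results[device]
    rate = passed / run
    weighted_pass += share * rate
    print(f"{device:12s} pass rate {rate:6.1%} (share {share:.0%})")

# A single number an executive can track release over release.
print(f"Share-weighted pass rate: {weighted_pass:.1%}")
```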

Step 3: Automate testing. Automated testing scales up quickly and affordably, enables broad on-device testing coverage that is not practical with manual testing, and supports the historical analysis and predictive modeling necessary to inform business decisions. It frees manual testing to be deployed more strategically on the subjective aspects of quality while automation generates the steady flow of board-level quality metrics.

Important test metrics to watch include:

  • Memory usage
  • Frequency of tests run
  • Pass/fail rates as described in Step 2
  • Time to resolution for failed tests

Trended over time, these metrics provide executives with the data necessary to plan future strategic moves and investments.
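
The time-to-resolution metric in particular is easy to derive once test runs are logged. Here is a sketch, assuming each failure record carries a timestamp for when the test first failed and for when it next passed; the test names and timestamps are invented:

```python
"""Compute mean time to resolution for failed tests (illustrative sketch)."""
from datetime import datetime
from statistics import mean

# Hypothetical failure records: (test name, first failed, next green run).
failures = [
    ("login_flow",    "2024-03-01T09:00", "2024-03-01T15:30"),
    ("checkout_flow", "2024-03-02T11:00", "2024-03-04T10:00"),
]

hours_to_fix = []
for name, failed_at, fixed_at in failures:
    delta = datetime.fromisoformat(fixed_at) - datetime.fromisoformat(failed_at)
    hours = delta.total_seconds() / 3600
    hours_to_fix.append(hours)
    print(f"{name:14s} resolved in {hours:5.1f} h")

print(f"Mean time to resolution: {mean(hours_to_fix):.1f} h")
```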

Step 4: Integrate end-user data. Freezes, crashes and sluggish performance are the three most common reasons users delete mobile apps. Recognizing and resolving these issues as quickly as possible is essential.

Automated testing provides a solid application quality baseline by comparing real-world performance to expectations, but the users “in the wild” may use the app in unexpected ways. UX-focused product owners incorporate end-user analytics and app monitoring solutions to gain a better understanding of the ways in which users are actually interacting with an app and to catch and resolve unexpected crashes and issues quickly. Combined with test data, user analytics and monitoring data provide a 360-degree view of actual application experience.

Crashes and mean time to resolution should be board-level metrics, alongside downloads and revenue.
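
As a sketch of how end-user monitoring events might roll up into such a metric, the snippet below derives a crash-free-session rate per app version. The session records here are invented; in practice they would come from a crash-reporting or analytics service.

```python
"""Derive a crash-free-session rate from monitoring events (sketch)."""
from collections import Counter

# Hypothetical event stream: one record per session, True if it crashed.
sessions = [
    {"version": "2.4.0", "crashed": False},
    {"version": "2.4.0", "crashed": True},
    {"version": "2.4.1", "crashed": False},
    {"version": "2.4.1", "crashed": False},
]

totals, crashes = Counter(), Counter()
for s in sessions:
    totals[s["version"]] += 1
    crashes[s["version"]] += s["crashed"]  # bools count as 0 or 1

for version in sorted(totals):
    crash_free = 1 - crashes[version] / totals[version]
    print(f"{version}: {crash_free:.1%} crash-free sessions")
```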

Step 5: Embrace quality throughout the company. Agile development and QA organizations have embraced quality metrics for years through public dashboards (or “information radiators”) reflecting project status and health. With a steady stream of automated reporting available, other business units can take advantage of quality metrics as well. Call centers can use them to predict support volumes and staffing needs. Sales teams can point to strong quality metrics as a competitive differentiator. Perhaps most important, executive management can incorporate quality considerations into strategic planning. Quantified quality greatly enhances the accuracy of estimating the cost of adding or improving features, the risk of expanding to new platforms or geographies, or any other product-dependent considerations.

Highly visible quality dashboards create a culture of team responsibility and transparency, and when the KPIs used to measure personal and group performance are expanded to include quality benchmarks, teams are informed and motivated to focus on one of the most important factors in mobile success.
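
To feed those dashboards, the metrics from the earlier steps can be published as a single machine-readable scorecard. One hypothetical shape for that feed, with all field names and values invented:

```python
"""Publish a quality scorecard for dashboards (illustrative sketch)."""
import json
from datetime import date

# Values would come from the pipelines sketched in the earlier steps.
scorecard = {
    "date": date.today().isoformat(),
    "launch_p95_seconds": 3.4,
    "share_weighted_pass_rate": 0.92,
    "mean_time_to_resolution_hours": 21.8,
    "crash_free_sessions": 0.995,
}

# Any call center, sales, or executive dashboard can consume this file.
with open("quality_scorecard.json", "w") as f:
    json.dump(scorecard, f, indent=2)
print(json.dumps(scorecard, indent=2))
```

Whatever the exact shape, the point is one stable feed that QA, support, sales, and the board all read from the same source.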