Mobile apps are a necessity for companies of all sizes, and they are getting more complex all the time. That complexity, along with the dizzying array of devices on the market, requires a well-thought-out mobile testing strategy, and building one involves a bit of risk/reward analysis.
Mobile apps come with inherent risks. In usability, compatibility and responsiveness testing, what might be considered a minor issue on a laptop could be critical on a mobile device. People are generally hurrying, multitasking and short on attention when using mobile devices, so it's not only outright bugs that are poorly tolerated: buttons, menus and forms that are easy to use on a desktop can become small and frustrating when resized for mobile. Testing too many devices creates unnecessary expense; testing too few risks lost revenue from app abandonment. Taking the time to understand the device ecosystem and the customers the application is designed for makes it possible to create a test strategy that balances risk and return.
The diversity of devices, operating systems and screen resolutions makes determining the right mix of devices to test complicated, but a little basic data analysis provides a lot of insight into the best device matrix. Three manufacturers account for roughly 80% of devices used in the U.S.: Apple (43.5%), Samsung (28.7%) and LG (8.2%). Combining that information with specific target demographics gives a good composite picture of the devices those users predominantly carry (which in turn suggests the operating system versions in play), and hence which devices to focus the majority of testing on. The product type, such as a business app versus a consumer app or game, will also influence the target devices.
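To make the idea concrete, the market-share analysis above can be sketched as a small script. This is an illustrative sketch only: the function name and the 80% coverage target are assumptions, and the shares are the U.S. manufacturer figures cited above.

```python
# Illustrative sketch: greedily pick devices by market share until the
# matrix covers a target fraction of the user base. The helper name and
# the 0.8 threshold are hypothetical choices, not a standard tool.

def pick_device_matrix(share_by_device, target_coverage=0.8):
    """Select devices in descending share order until cumulative
    coverage reaches the target."""
    matrix, covered = [], 0.0
    for device, share in sorted(share_by_device.items(),
                                key=lambda kv: kv[1], reverse=True):
        if covered >= target_coverage:
            break
        matrix.append(device)
        covered += share
    return matrix, covered

# U.S. manufacturer shares cited in the text.
usage = {"Apple": 0.435, "Samsung": 0.287, "LG": 0.082}
matrix, covered = pick_device_matrix(usage, target_coverage=0.8)
print(matrix, round(covered, 3))  # ['Apple', 'Samsung', 'LG'] 0.804
```

In practice the input would be your own analytics data, broken down by model and OS version rather than by manufacturer, but the greedy-coverage idea is the same.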
After identifying the device matrix, there is also the option of using a mix of emulators and real devices. The implications of when (and when not) to use emulators versus real devices are large and complex, but few would dispute that nothing fully takes the place of testing on actual devices. Everyone would prefer to hold the device in hand: seeing page-load and performance issues on the real device is the most reliable approach, but no team can physically test every device. Usability testing on emulators and browsers with extensions is getting better, but it won't always represent what will be seen on the actual device. Still, emulators can be good for testing new functionality or a new component design, and they have some advantages over actual devices: logging faults and capturing screenshots are much simpler when working from a desktop, and some conditions that are hard to duplicate on real devices, like low battery power, are easy to simulate.
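As an example of simulating a hard-to-duplicate condition, the standard adb tool can fake the battery state reported to an Android emulator or attached device via `dumpsys battery`. The sketch below only builds the commands (the serial number and battery level are illustrative values); you would pass the result to `subprocess.run` to actually execute it.

```python
# A minimal sketch of scripting a hard-to-reproduce condition (low
# battery) against an Android emulator with the standard adb tool.
# The helpers only build command lists; execute them with subprocess.run.

def adb_set_battery(level, serial="emulator-5554"):
    """Build an adb command that fakes the reported battery level."""
    return ["adb", "-s", serial, "shell", "dumpsys", "battery",
            "set", "level", str(level)]

def adb_unplug(serial="emulator-5554"):
    """Build an adb command that simulates unplugging from power."""
    return ["adb", "-s", serial, "shell", "dumpsys", "battery", "unplug"]

print(" ".join(adb_set_battery(15)))
# To run for real: subprocess.run(adb_set_battery(15), check=True)
```

Resetting afterwards with `adb shell dumpsys battery reset` returns the device to its true battery state.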
Emulators also tend to be slower than real devices, and the type of app being tested, as well as whether tests are manual or automated, can further limit what emulators can cover. Native apps talk directly to the operating system, while Web apps talk to the browser, which in turn talks to the OS; the more layers there are, the slower the response time. With these limitations in mind, selective use of emulators is a way to increase test coverage at minimal cost.
It is normally not practical or cost-effective to conduct full functional testing on multiple devices. A practical approach is to run the full set of tests on one or two primary devices, then run a smoke test on additional devices to catch any obvious issues. How far to go depends on the nature of the application: if the app is cutting-edge and could stress the device's capabilities (processing power, memory, GPS or other device-specific hardware), more extensive testing is in order.
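The tiered approach described above can be expressed as a trivial planning helper. Everything here is hypothetical for illustration: the device names, the "full"/"smoke" labels, and the function itself stand in for whatever your test runner or CI configuration actually uses.

```python
# Hypothetical sketch of tiered device testing: the full suite runs on
# one or two primary devices, a smoke suite runs everywhere else.
# Device names and suite labels are made-up examples.

def plan_test_runs(devices, primary):
    """Map each device to the test suite it should run."""
    return {d: ("full" if d in primary else "smoke") for d in devices}

devices = ["iPhone 13", "Galaxy S21", "LG G8", "Pixel 5"]
plan = plan_test_runs(devices, primary={"iPhone 13", "Galaxy S21"})
print(plan)
# {'iPhone 13': 'full', 'Galaxy S21': 'full', 'LG G8': 'smoke', 'Pixel 5': 'smoke'}
```

The primary set would typically be the top devices from the matrix analysis; the smoke tier picks up everything else worth a sanity check.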
One thing to keep in mind when running even basic tests is that most handheld mobile devices give priority to the communication environment. For example, an incoming phone call always takes priority over a running application, which makes it important to test these interruption events and the OS's multitasking ability.
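On the Android emulator, one such interruption event can be triggered on demand: the emulator console's `gsm` command fakes an incoming call, and adb's `emu` subcommand forwards console commands to the emulator. The sketch below only builds the commands; the phone number and serial are arbitrary examples.

```python
# A sketch of driving interruption tests on the Android emulator: fake
# an incoming call while the app under test is in the foreground, then
# cancel it. The helpers only build command lists for subprocess.run.

def adb_incoming_call(number, serial="emulator-5554"):
    """Build an adb command that simulates an incoming call."""
    return ["adb", "-s", serial, "emu", "gsm", "call", number]

def adb_cancel_call(number, serial="emulator-5554"):
    """Build an adb command that ends the simulated call."""
    return ["adb", "-s", serial, "emu", "gsm", "cancel", number]

print(" ".join(adb_incoming_call("5551234")))
# To run for real: subprocess.run(adb_incoming_call("5551234"), check=True)
```

A typical check is to launch the app, trigger the fake call mid-flow, dismiss it, and verify the app resumes in the correct state.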