With agile and DevOps becoming the new normal, mobile testing has grown increasingly complex. Releases come at a rapid-fire pace, and the stakes for mistakes are high. As the Internet of Things has advanced, devices themselves have become more complex. Smartphones now need to communicate with appliances, and televisions with tablets, forcing a shift in how mobile testing is done. Fast, effective testing is still a must, but QA managers and testers also need to think about mobile quality throughout each release.

As with other software testing, companies are told to test early and test often, but mobile testing is unique and comes with its own challenges. Apps need to be tested for functionality, usability and performance, and then tested across different devices, different operating systems, and different network environments, such as offline versus online.

The possibilities are endless, and according to James Lau, product manager for Android Studio at Google, considering all of those factors is the best way to get started with a testing strategy.

“Combine all of [these challenges] with the market pressure to ship and iterate quickly, mobile app testing is no easy task,” said Lau.

(Related: Five myths about mobile automated testing)

There is also market pressure to have good, easy-to-use mobile apps as a way to compete with other businesses. In a sense, today’s businesses have all become software companies because they all have some kind of mobile app. Some companies, like Yelp or Uber, don’t even have physical locations where customers can stop by; they provide experiences only through their applications.

Today, 25% of users in the United States abandon apps after one use, and only 34% of users open an app 11 times or more, according to a Localytics study. With this in mind, Ken Drachnik, Sauce Labs’ resident mobile expert, said that if your entire business depends on an app, these statistics are a “pretty stiff penalty to pay.” Making sure an app looks good and behaves well on every device, and backing that up with an adequate amount of testing, is, he said, the only way to stay competitive.

There is no be-all and end-all solution for mobile testing; each challenge is unique and varies from company to company. There are a variety of testing options, but each comes with its own set of pros and cons. Finding a balance among mobile testing methodologies is just one step in the right direction.

The mantra of mobile
Testers and developers have been hearing “test early, test often” for quite some time. Ever since the adoption of agile and DevOps, it has become a sort of mantra for any type of software testing, and especially for mobile testing. With rapid release cycles and organizations updating their apps every day, sometimes a few times a day, more testing is required.

Testing processes are shaping up around Continuous Delivery, and this affects all forms of software, according to BlazeMeter chief evangelist Michael Sage. He said that mobile testing in a “test early, test often” manner has clear benefits, whether it’s done by a small team or through a fully automated pipeline.

Automation is just one part of testing early and often. The changes being tested in this “rapid fashion” may be nothing more than bug fixes or incremental updates, according to Genefa Murphy, vice president of products and partner marketing and application delivery at Hewlett Packard Enterprise. Monitoring mobile applications once they are out in production, and understanding how users actually use the app, is another way to prioritize testing efforts.

Drachnik said the rule needs to be to test often: very often. Testing should be performed on every build so that companies know their app is “perfect” and will render properly on every device and browser their users might run it on. With all of this frequent testing, Drachnik said, companies are doing agile development whether they have formally adopted agile or not.

For companies looking to adopt agile software methodologies, it will require some thought as to how and when testing will be deployed. Matt Johnston, chief strategy officer of software testing company Applause, said that a challenge he has seen is that companies, development teams and product teams are getting more agile, which means weekly or daily releases. This in turn increases the chances of something going wrong and of users noticing mistakes and bugs.

To keep up with this demand, the shift has not only created new testing profiles, it has also changed testing methodologies, which in turn has created new jobs and new tools for faster testing. Eran Kinsbruner, mobile technical evangelist at Perfecto Mobile, said that the only way to keep up with the market’s demand for velocity is to completely change methodologies. The skill sets and tools used for Web or desktop applications are no longer sufficient, he said, as more QA and agile teams are collaborating and using open-source tools for mobile testing.

Automation and emulators
If companies are looking for a “one-stop solution,” then they are missing the complexity of mobile testing, Johnston said.

A question companies typically ask is whether they should use automated or manual testing in their mobile testing strategy. Automation requires writing scripts that perform “x, y and z” tasks and run on the devices at hand, while manual testing still requires a human to act as the end user and try out features of the application.
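To make the distinction concrete, the kind of scripted check being described might look something like the minimal sketch below, written with Android’s Espresso framework and assuming the AndroidX test libraries. The activity and view IDs are hypothetical placeholders for whatever the app under test actually uses; they are not from any particular product.

```java
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.action.ViewActions.closeSoftKeyboard;
import static androidx.test.espresso.action.ViewActions.typeText;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;

import androidx.test.ext.junit.rules.ActivityScenarioRule;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class SearchSmokeTest {
    // MainActivity, R.id.search_box, R.id.search_button and R.id.results_list are
    // hypothetical identifiers standing in for the real app's screens and views.
    @Rule
    public ActivityScenarioRule<MainActivity> activityRule =
            new ActivityScenarioRule<>(MainActivity.class);

    @Test
    public void searchingShowsResults() {
        // Script the "x, y and z" steps a manual tester would otherwise perform by hand.
        onView(withId(R.id.search_box)).perform(typeText("running shoes"), closeSoftKeyboard());
        onView(withId(R.id.search_button)).perform(click());
        onView(withId(R.id.results_list)).check(matches(isDisplayed()));
    }
}
```

A script like this runs the same way on every build, which is exactly what makes it cheap to repeat and expensive to maintain when the feature underneath it keeps changing, the tradeoff Johnston describes below.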

Johnston said that companies should not pick one form of testing over the other, but instead think of mobile quality as a portfolio to determine what combination of testing is best.

“[Mobile testing] is such a complex animal, with so many variables,” he said. “Companies should ask, what parts of it should be automated, what parts can be exploratory, what kinds can be test-based driven, and what can be in the lab or in the wild?”

There are a variety of scenarios in any Web or mobile application that testers could automate, starting with the most tedious manual test cases or those that are easiest to automate. These cases tend to be more mature and do not see significant changes to the code. Johnston said that something like a search box in a shopping application that is going to change several times is not a good candidate for automation.

He gave the following scenario: Say there is a version of a search engine that is going to be changed dramatically, and it uses test automation. Every time new code for the search functionality is written, the tester would need to go back and update the test automation suite. He said this would slow the tester down, since they would have to write new test code every time the engineering team changes that part of the app.

The right questions to ask are which parts of the code and which parts of the app are good automation candidates, so that time and money are saved.

“When I see companies that say, ‘We are going to automate our testing,’ they do really expensive licensing and then they have two to three engineers working on nothing but test automation,” said Johnston. “Then they say, ‘Did I spend a million dollars on test automation?’ It’s not a technical risk, but it is a financial and value risk.”

Automation is an ideal testing strategy, but not every company has the skills, money or capacity to do it, according to HPE’s Murphy. Some companies are not quick to do automated testing or build automated scripts. She said that she is still amazed to see the amount of manual testing chosen over automation, and that embedded automation should be part of the mobile quality mantra.

“We think about the ability to be able to test in the real network conditions,” she said. “Doing that in a manual way is time-consuming so automation really matters.”

In an agile development environment, there is no way to test on all the different types of devices manually, according to Drachnik. He said that manual testing was fine when mobile apps first came out and users only had a few apps on their phones. Now, with users deleting apps daily, the only way to get feedback is to expand the test matrix across a broad range of devices, browsers and operating systems. The way to do this is to automate, he said, because there is no way to do all of this manually and keep up with other companies.

Drachnik recommends running a lot of tests in parallel across operating systems and browsers to create this broad test matrix. He also said that emulators and simulators allow testers to smoke-test their apps, making sure the business logic works and that everything displays properly. Then testers can switch to real devices and use them for network, user experience, sound or Wi-Fi testing.
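As a rough illustration of that parallel matrix, the sketch below fans the same smoke test out across several emulator configurations at once, using the open-source Appium framework’s Java client. The server URL, device names, platform versions and APK path are all placeholders; a real run would point at whatever local Appium server or cloud grid the team actually uses.

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelSmokeRun {
    // Placeholder endpoint and device matrix; substitute a real Appium server or cloud grid.
    private static final String GRID_URL = "http://localhost:4723/wd/hub";
    private static final List<String[]> MATRIX = List.of(
            new String[]{"Android Emulator", "7.0"},
            new String[]{"Android Emulator", "6.0"},
            new String[]{"Android Emulator", "5.1"});

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(MATRIX.size());
        for (String[] device : MATRIX) {
            pool.submit(() -> runSmokeTest(device[0], device[1]));
        }
        pool.shutdown();
    }

    private static void runSmokeTest(String deviceName, String platformVersion) {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", deviceName);
        caps.setCapability("platformVersion", platformVersion);
        caps.setCapability("app", "/path/to/app.apk"); // placeholder path to the build under test

        AndroidDriver driver = null;
        try {
            // One driver session per configuration; these run concurrently on the thread pool.
            driver = new AndroidDriver(new URL(GRID_URL), caps);
            System.out.println(deviceName + " " + platformVersion
                    + " launched activity: " + driver.currentActivity());
        } catch (Exception e) {
            System.err.println(deviceName + " " + platformVersion + " failed: " + e.getMessage());
        } finally {
            if (driver != null) {
                driver.quit();
            }
        }
    }
}
```

Each configuration simply launches the app and confirms an activity is running, the kind of quick business-logic and rendering check Drachnik reserves for emulators before moving on to real devices.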

Real devices come later in the testing cycle, and whether you are testing on simulated or real devices, the goal is to “run a lot of tests,” according to Drachnik.

Johnston said to be aware that when using an emulator or simulator, the results are only as good as the emulator. If the emulator represents all of the details of the device, then those are “good predictive results.”

“The problem with emulators and simulators is they tend to be a decent simulation on that device, but there are so many variables,” said Johnston. “[Emulators] will give you the ‘Yes it works,’ but then when you launch it in the hands of someone with that exact device, it will have all sorts of problems.”

Challenges of mobile testing
The testing tools of “yesteryear,” as Sage puts it, were designed for a different kind of load testing suited to long release cycles, and that approach is not going to work for the fast releases of mobile. Old tools and methodologies need to be left behind in order to have a successful mobile testing strategy.

Mobile testing requires a certain level of sophistication, according to Lau (although this varies among mobile app developers). He said that many developers still rely on manual, ad hoc testing with whatever devices they have on hand. Ad hoc testing is usually done only once a defect is found, and he said the big risk here is that “as the app grows with more features and as more people work on the app, ad hoc testing doesn’t scale.

“If there is little or no automated test coverage, the app will become more difficult to develop over time since code cannot be changed with confidence. This translates to real business risks as app quality cannot be guaranteed, new features cannot be added as quickly, and customers may become dissatisfied.”

Lau said that at Google, developers are urged to invest more time in mobile testing to ensure that the app quality is good and the end users remain happy.

Companies should stay away from ad hoc testing and start looking at open-source standards, since most of the tools used in mobile testing today are in fact open source, he said. This includes tools like Calaba.sh, Robotium, Selendroid and Appium, to name a few. Open-source frameworks are ideal because testers are not locked into a specific tool. “If you write these automated tests in your facility, it’s very easy to point those tests [to the cloud] and run those tests on a very scalable cloud with the capacity to parallelize your tests,” said Drachnik.
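That portability can be baked into the test harness itself. In the hedged sketch below, tests obtain their driver from one small factory, so pointing the whole suite at a hosted device cloud instead of a local Appium server becomes a configuration change rather than a rewrite. The APPIUM_URL environment variable and the default endpoint are assumptions for illustration, not any particular vendor’s setup.

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.Capabilities;

import java.net.MalformedURLException;
import java.net.URL;

public final class Drivers {
    // APPIUM_URL is a hypothetical environment variable. It might point at a local
    // Appium server (for example http://localhost:4723/wd/hub) or at a cloud device
    // grid; each provider documents its own endpoint and authentication capabilities.
    public static AndroidDriver forCurrentEnvironment(Capabilities caps)
            throws MalformedURLException {
        String endpoint = System.getenv().getOrDefault(
                "APPIUM_URL", "http://localhost:4723/wd/hub");
        return new AndroidDriver(new URL(endpoint), caps);
    }

    private Drivers() { }
}
```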

He added that these open-source tools are backed by large communities that are there for assistance, and some of these open-source companies will hold local Meetup events to talk about mobile testing strategies.

One challenge that Amanda Silver, partner director of program management at Microsoft, said she hears developers express is that the device “ecosystem is so fragmented,” and developers can’t afford to test across all the devices they need to support. Variable connectivity, the condition of device sensors and location sensitivity are all challenging to replicate in-house.

“We even spoke with a developer who had glued a variety of the devices he needed to support to a giant sheet of particleboard that they rolled in with a dolly whenever they needed to do on-device testing,” said Silver. “Targeting mobile continues to be thought of as a specialty.”

Making tradeoffs is part of the mobile testing challenge, according to Google’s Lau. Each set of tests has to have sufficient coverage, and the devices or device images that are chosen need to represent the target market.

“It’s impractical to write tests to cover every single use case, condition, exception, etc. and to run every test on every single combination of hardware, API levels, form factor, [and] language,” said Lau.  “So consider what to test and what you want to test on carefully.”

Lau said that a mistake testers make is assuming that users have devices like their own and the same kind of network connectivity. He said this is especially true if the target markets include developing countries. One way to make sure the testing environment is representative of those markets is to test against device profiles that match the devices common there. Testers can also use emulators to simulate degraded network connectivity, he said.
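On the connectivity side, the Android emulator can be launched with artificial latency and throttled bandwidth through its documented -netdelay and -netspeed options. The sketch below simply starts an emulator from Java with those flags; the AVD name is a placeholder, and it assumes the SDK’s emulator binary is on the PATH.

```java
import java.io.IOException;

public class DegradedNetworkEmulator {
    public static void main(String[] args) throws IOException {
        // "Nexus_5_API_23" is a placeholder AVD name; create a matching image with the
        // SDK's AVD manager. The -netdelay and -netspeed values (gprs, edge, 3g, etc.)
        // add latency and cap throughput to approximate slower mobile networks.
        new ProcessBuilder(
                "emulator",
                "-avd", "Nexus_5_API_23",
                "-netdelay", "gprs",
                "-netspeed", "edge")
                .inheritIO()
                .start();
    }
}
```

The same settings can also be adjusted at runtime through the emulator console, which is useful when a test needs to flip between good and degraded connectivity mid-run.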

However, figuring out which testing methods or tools a company should implement is only half the battle. The other half is managing a cultural change, since the nature of mobile testing and the way mobile apps are architected are different. According to HPE’s Murphy, it requires a change in skill set and a shift for certain teams. “If you want to test fast and test early, it does maybe require changing the skillset of your QA organization to become more of a dev-test organization. It’s the notion of shift left: bringing some more of the testing into the development phase as well,” she said.

Without that cultural change, Murphy said, regardless of the tools and technology you implement in your testing strategy, “You will not be successful.”

Murphy also suggests incorporating analytics tools so testers can monitor mobile applications and know where to prioritize testing efforts. When developers are thinking about building an app, they also need to think about how to instrument it so it captures analytics once it is in production. This creates a feedback loop from production back to the development team, and it’s also where DevOps and mobile testing overlap, she said.
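What that instrumentation looks like in code can be as small as tagging the events the team cares about. The sketch below is entirely hypothetical, a stand-in for whichever analytics SDK an app actually ships with; the point is that events captured in production (which flows fail, which screens get abandoned) become the data that tells testers where to focus next.

```java
import java.util.Map;

/** Hypothetical instrumentation facade; a real app would delegate to its analytics SDK. */
public final class UsageEvents {

    public static void track(String event, Map<String, String> attributes) {
        // In production this would ship the event to an analytics back end; printing it
        // stands in for that transport here.
        System.out.println(event + " " + attributes);
    }

    // Example call sites an app might sprinkle through its flows, so production data can
    // later show which paths break or get abandoned and deserve more test coverage.
    public static void main(String[] args) {
        track("checkout_started", Map.of("network", "offline"));
        track("search_returned_no_results", Map.of("query_length", "0"));
    }

    private UsageEvents() { }
}
```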

Other companies use tools that help them understand user behavior, or they survey their own customers about their challenges with an application and then figure out whether they can solve those problems directly.

There are plenty of other risks to consider with automation and Continuous Delivery, and some of them relate to security and the environment being tested. Perfecto Mobile’s Kinsbruner said that, from an automation perspective, automating mobile devices in a particular environment exposes the testing regimen to whatever happens in that environment. Incoming events can disrupt testing, such as an issue overnight that stops the test process; the network and environment can change, or there may be battery constraints. Kinsbruner said that the environment is complex, so you should “expect the unexpected.”

Mobile testing and the IoT
The interconnectivity between our mobile devices and other technologies brings along more data, more code and more projects. The number of available devices continues to increase, and these devices now communicate with each other and with other applications, increasing the complexity of mobile testing.

Some experts think that the IoT will cause a radical shift for the way testing is done. In the past, testing applications on devices was a “siloed” endeavor, according to Johnston.

“Now we are getting into where your phone needs to work with your home security system, your tablet needs to work with your home TV or entertainment system, and your car needs to be remote-started by your phone,” he said. “You are introducing an order of magnitude, and more permutations.”

QA managers in the lab will need to look at all of these devices, and possibly test an app against 12 vehicles that need to be able to communicate with smartphones. Johnston said the issue is that most CFOs are not going to sign off on their teams buying cars or smart TVs for their labs, which is a challenge for the whole notion of lab-based testing and automation.

Consumers will expect these devices to work, without bugs and without other issues that come with mobile devices. BlazeMeter’s Sage said that the lines between digital and physical are “blurring more and more,” and this means consumers expect their digital experiences with mobile devices to be “seamless,” every time.

Sage also said that IoT is going to make mobile testing more complex because it will “unfold this really huge matrix of possibilities,” including the growth of different types of interfaces. While there is no need to test the UI of a smart refrigerator, it is important that it can communicate with the back end. With so many types of devices, so much data being transmitted and such a wide range of traffic patterns, tests will have to run longer to make sure devices and the back end communicate reliably, he said.

Murphy said that IoT is bound to have an impact on mobile testing. Testers have to consider three factors: speed, quality and scale, and then figure out how to increase speed without compromising quality. With mobile testing today, testers have the ability to virtualize network conditions and services that may not be available while they are actually building or testing the app. The use of virtualization and simulation will only increase with the IoT, she said.

Many of the same paradigms in mobile can be reused for IoT devices, according to Microsoft’s Silver. “IoT is a huge area of computing,” she said. “Many devices that we now call IoT devices are really embedded experiences that run things that look very much like mobile apps.”

Silver said that depending on the supported sensors, many of the APIs are the same on mobile devices as they are with IoT devices. Microsoft is a member of the AllSeen Alliance, which is the creator of AllJoyn. Companies like Microsoft are continuing to put effort into finding ways to enable the interoperability of all the applications in IoT.

Johnston said that companies have options when handling the complexity of mobile testing in an IoT world. They can slow down, which he said they aren’t going to do. Or, they can “cross their fingers and say it should work.” He said that companies can tackle mobile testing complexities by opening themselves up to new ideas that are beyond the notion of a centralized test lab.