User experience has always been an important factor in the success of an application, and in an increasingly digital-only world, it matters more than ever.
If all goes perfectly, the user doesn’t think about what’s going on behind the scenes. When a button does what it’s supposed to when it’s clicked on, the user goes about their day, possibly even completing a transaction on the site. But when that button takes several seconds to do anything — or, worst-case scenario, never actually does anything — the user will be frustrated, and perhaps jump to a competing site.
According to Eran Bachar, senior product management lead at software company Micro Focus, UI testing is the part of the process that tests for usability, which encompasses making sure that customers who are going to use the application understand the different flows. Another element of UI testing, especially in the web and mobile space, is testing user experience. “If you’re waiting more than two seconds for a specific operation to be completed, that can be a problem, definitely. The moment when you click on a button, when you click on an icon, when you click on any element, you should have a very, very fast responsive kind of an action,” Bachar said.
Clinton Sprauve, director of product marketing at testing company Tricentis, added: “The goal of UI testing is to understand all aspects from a UI perspective on the business process side, but also on the technical side to make sure there aren’t any things that are wrong from a functional testing point.”
UI testing may sit at the tip of Mike Cohn’s popular Testing Pyramid, the model that recommends writing the fewest tests at the UI layer and the most at the unit level, but that doesn’t mean UI testing can be ignored and just slapped onto the end of a testing cycle.
“I believe UI testing is too often an afterthought during product development, and is often placed much too late in the development life cycle,” said Jon Edvald, CEO and co-founder of automation platform Garden. “It can be a serious drag on productivity and release cycles to do UI testing — and in fact any type of testing — too late in the process. The later an issue is discovered, the more costly the feedback loop becomes, with all the context switching involved and the terribly slow iteration speeds.”
Google’s Web Vitals
One way to quantitatively measure user experience is Google’s Web Vitals. According to Guillermo Rauch, CEO of front-end web development company Vercel, these Web Vitals were the first metrics created that focused entirely on user experience. One of the Web Vitals is Largest Contentful Paint (LCP), which measures “how fast the meaningful part of the page took to load,” he explained. Rauch pointed out that everyone has likely visited a website where it looks like everything has loaded, but content is still loading, so images, videos, or sometimes text might show up five seconds after everything else.
“So this Largest Contentful Paint metric allows us to say how long did it take for us to load what the user is actually interested in? So when you talk about a visual storefront, it’s not just text, it’s also the picture of the coat that you want to buy,” Rauch explained.
First Input Delay is another Web Vital that measures how long it takes from the user pressing a button, for example, to the site reacting. “If I tap on ‘buy’ is it reacting immediately? We take that for granted, but we’ve all been to websites where we tap and it doesn’t do anything so we kind of intuitively tap again. But a big percentage of users don’t tap again. So they just leave the website. We’re now starting to measure these user experience metrics very diligently,” Rauch said.
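Google publishes thresholds that bucket each Core Web Vital measurement into “good,” “needs improvement,” or “poor.” A minimal sketch of that classification, using the published LCP and FID thresholds (2.5s/4.0s and 100ms/300ms respectively; the function and dictionary names here are illustrative, not part of any Google API):

```python
# Classify Core Web Vitals measurements against Google's published
# rating thresholds. Values are in milliseconds.
THRESHOLDS = {
    # metric: (upper bound for "good", upper bound for "needs improvement")
    "LCP": (2500, 4000),   # Largest Contentful Paint
    "FID": (100, 300),     # First Input Delay
}

def classify(metric: str, value_ms: float) -> str:
    """Return the rating bucket for a single metric measurement."""
    good, needs_improvement = THRESHOLDS[metric]
    if value_ms <= good:
        return "good"
    if value_ms <= needs_improvement:
        return "needs improvement"
    return "poor"

print(classify("LCP", 1800))   # fast page: good
print(classify("FID", 250))    # sluggish response to a tap: needs improvement
print(classify("LCP", 5200))   # slow hero image: poor
```

In a real site, the measurements themselves would come from the browser (for example via the `PerformanceObserver` API or Google’s `web-vitals` JavaScript library) rather than being hard-coded.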
Another reason to care about these Web Vitals is that they’re not just used by development teams to measure how happy their users are; Google can use them to determine search engine rank, Rauch said. In other words, a website with poor Web Vitals may rank lower, even if it has done the proper search engine optimization (SEO).
Get key stakeholders involved in the process
Involving product users in UI testing is an important part of the process for companies developing products. At Sorenson, which develops communications products that serve the Deaf community, QA engineering manager Mark Grossinger, who is Deaf, explained that this is an important part of his team’s testing process. His team is constantly reevaluating the needs of its users so that it knows what tools and features to provide, and UI testing is an important step in the process.
In order to do their UI testing, Sorenson works closely with the Deaf community, as well as other Deaf employees within the company. It’s important for them to do this because as Grossinger describes, “if you have a hearing individual who does not know what a Deaf person needs or how they would use a product, how would they develop it?”
One example of where that feedback loop made a big contribution was in developing video mail, which Sorenson calls SignMail. “As a hearing person, you can leave a voicemail if your recipient doesn’t answer your call,” Grossinger said. “In the past, there was no equivalent for the Deaf community, so we developed that feature. Now, if a Deaf person is unable to answer a call, the interpreter (or a Deaf caller) can leave a video mail (in American Sign Language), which gives the Deaf person a functionally equivalent message.”
Another example that Grossinger noted was developing the Group Call feature. He explained that hearing individuals can utilize conference calls if they want to talk with multiple people, so Sorenson used feedback from its customers to create an equivalent feature.
While Sorenson specifically serves the Deaf community, Grossinger noted that the need to involve the people who would be using the product is key no matter who the user base is. “When you’re developing software, you need to have people within whatever community you’re serving, whether it’s accessibility or something else,” Grossinger said. “You need to get those people to give authentic feedback. If you don’t, you could end up developing software or an app that doesn’t meet the intended user’s needs. So, I think that from the beginning of any project, when you’re at the drawing board and beginning the innovation process, you need to talk about the stakeholders and the features they need.”
According to Grossinger, diversity is the key to successful UI testing. This includes not just a diverse development team, but diversity among all departments of the company, such as sales and marketing.
“Sometimes on a smaller team you notice that details can get missed without having that diversity,” said Grossinger. “Diversity also means thinking about various demographics: older populations, younger populations, socioeconomic status, education levels, people who are savvy with technology and those who are not. All those perspectives need to be included in the project development.”
Manual vs. automated UI testing
While automated testing has been a big buzzword around the industry, it hasn’t quite made its way fully into UI testing yet.
“It’s unfortunately quite common for most or even all UI testing to be manual, i.e. performed by developers and/or testers, essentially clicking or tapping around a UI, often following predefined procedures that need to be performed any time a change has been made,” said Garden’s Edvald. “This may be okay-ish for small and simple applications but can quickly balloon into a massive problem when the UI gets more complex and has a lot of variations.”
Edvald described his experience witnessing this firsthand when doing development for the menstrual period tracking app Clue. According to Edvald, it was important to the company to have the app available to as many different users as possible, so it is supported on a variety of devices, from “ancient Android phones to the latest iPhones, tons of different screen sizes and operating system versions, many different languages etc.”
Having all of these different devices, with varying screen sizes and operating systems, led to massive complexity when it came to testing. Manual testing wasn’t going to be possible because a QA team wouldn’t be able to manually test every possible combination at the speed with which the app was being developed and released. To solve the problem, they hired a quality engineer and put more effort into automation.
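The matrix Edvald describes grows multiplicatively: every extra device, OS version, or language multiplies the number of configurations to cover. A small sketch with hypothetical dimension values (the specific devices and versions below are examples, not Clue’s actual matrix) shows how quickly the count outruns what a manual QA team can click through:

```python
from itertools import product

# Hypothetical test dimensions; a real matrix would be far larger.
devices = ["iPhone 14", "iPhone SE", "Pixel 7", "Galaxy S8"]
os_versions = ["iOS 16", "iOS 17", "Android 12", "Android 13"]
languages = ["en", "de", "fr", "pt", "ja"]

def compatible(device: str, os: str) -> bool:
    """Crude pairing rule: iPhones run iOS, the rest run Android."""
    return device.startswith("iPhone") == os.startswith("iOS")

# Each valid (device, OS, language) triple is one configuration the
# UI should be exercised under.
matrix = [
    (device, os, lang)
    for device, os, lang in product(devices, os_versions, languages)
    if compatible(device, os)
]

print(len(matrix))  # 8 valid device/OS pairs x 5 languages = 40 configurations
```

Even this toy example yields 40 configurations from three dimensions; automation can enumerate and run them all, where a manual pass over each one per release quickly becomes impossible.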
“Being able to programmatically run a large number of tests is critical when an application reaches some level of complexity and variation,” said Edvald. “Even for less extreme scenarios, say web applications that have fewer dimensions to test, the cost-benefit quickly starts to tip in favor of greater automation. It’s usually a good idea to also do some level of manual, exploratory testing, but it’s possible to greatly reduce the surface area to cover, and thus the time and resources involved.”
While automating UI testing is the ideal, it’s not always practical. Many organizations simply lack the expertise to build all of that automation, and to build it properly.
According to Tricentis’ Sprauve, most companies still rely on manual testing for that reason. For example, they will have QA testers sitting in front of a computer and manually performing test steps. “One of the issues with UI automation in some instances is what’s known as flaky tests,” Sprauve said. “That could either be because of the type of the tool that you’re using, or it’s because of a lack of skills of how you build that automation. So one of the biggest challenges is how do I build consistent and resilient automation that won’t break when there are changes or if there are changes, how quickly can we recover to make sure that we get those items addressed so that we’re not just spending time making sure our automation is working versus actually testing. So that’s one of the biggest challenges that an organization faces.”
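One common (if blunt) mitigation for the transient failures Sprauve describes, such as slow renders or animation races, is to retry a UI step a bounded number of times before declaring it failed. A minimal sketch of such a wrapper; the function names, attempt counts, and the simulated flaky step are all illustrative, not taken from any particular testing tool:

```python
import time

def retry(action, attempts=3, delay_s=0.0, exceptions=(Exception,)):
    """Run `action` up to `attempts` times, re-raising only the final failure."""
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except exceptions as err:
            last_error = err
            time.sleep(delay_s)  # back off before the next try
    raise last_error

# Simulated flaky UI step: fails twice, then succeeds.
calls = {"n": 0}
def flaky_click():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("element not yet clickable")
    return "clicked"

print(retry(flaky_click))  # succeeds on the third attempt: clicked
```

Retries paper over flakiness rather than cure it; overused, they can also hide genuine regressions, which is why Sprauve frames the real goal as building automation that is resilient to UI change in the first place.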
Michael O’Rourke, senior product manager at Micro Focus, noted that most customers average around 30% to 40% automation, and they only have a few customers that are more than 80% automated. “It takes a lot of work to be able to get to that and it involves a lot of different types of transformations that need to be incorporated in there,” said O’Rourke. “Those customers that put more emphasis on automation are generally the ones that are going, but when it comes to different time constraints, some customers find it easier to be able to only automate certain tests, but then still leave some manual processes behind, which is why it generally ends up roughly around 40%.”
Other than the challenges with complexity and automation, another common challenge for QA teams is maintenance. According to O’Rourke, sometimes a developer may make a change to the UI and then when it gets sent back to the QA person, the test fails. “And they’ll spend a lot of time trying to troubleshoot to figure out what happens because the UI change obviously wasn’t properly documented or told to the QA team. That’s where a lot of the challenges come because they have to go back and modify a lot of the scripts and do a lot of maintenance on it anytime it changes,” he said.
This is especially challenging early on in product life cycles when a product is being developed, O’Rourke explained. “This could be a very big problem for a lot of those different customers who have to frequently test continuously over and over again where breaks are and then go back and adjust tests,” he added.
Vercel’s Rauch predicts that going forward, the web will just keep becoming more user personalized and user experience will continue being an important part of QA. “The core Web Vitals are here to stay, they’re great for developers, they’re great for business people to find this alignment, but more important than that, it’s not just those core Web Vitals,” Rauch said. “I think measuring user experience especially in ways that are unique to your product and unique to your channels is also going to be very, very important. This is just again, the name implies, these are vital signs. This is just like measuring your heart rate, but also if you want to have a great fitness performance, you gotta measure a lot of other things, not just your heart rate.”