Enterprises want to deliver software fast to keep up with market demands and stay competitive, but it doesn’t matter how fast they deliver software if the result doesn’t please the end user.
“Users are the ones who are going to be interacting with your application, so you want to make sure they get the best and correct experience they are looking for,” said Max Saperstone, director of software test automation at consulting company Coveros.
The way to do this is to perform UI testing, which ensures an application behaves the way users expect. “The make or break of an application is with the user’s experience within the UI. It’s more critical than ever for the UI portion of the application to be functional and behave as expected,” said Ashish Mathur, director and architect of testing products at HCL Software, a division of HCL Technologies.
One way of doing this is to manually test every scenario in which users will interact with the application, although this can be time-consuming and costly. The alternative to manual testing is automated testing, which executes tests automatically so testers can focus their time and effort elsewhere.
In today’s digital age, Mark Lambert, vice president of products at the automated software testing company Parasoft, explained that organizations have to rely on automated testing to reduce cost and effort. “Organizations realize manual testing efforts don’t scale to the pace of delivery, so there’s an increased reliance on test automation to help with validating the increased cadence of delivery,” he said.
But that doesn’t mean manual testing becomes obsolete. Lambert added that a successful UI testing strategy actually includes both manual and automated testing techniques.
“When you think about UI testing, you want to think about what to automate and what to do manually,” he said. “Humans are very, very good at understanding if something feels right. They are very good at randomly exploring intuitively different execution paths and doing negative testing. What humans are not very good at are repetitive tasks. We get bored very quickly and if we are doing the same thing over and over again, we can often miss the details.”
However, manual and automated testing alone are not enough to properly validate the UI.
Solving UI testing problems
Despite the fact that test automation is supposed to help speed things up, organizations still struggle to achieve high levels of test automation and to run it continuously, Parasoft’s Lambert said. Once organizations get started with test automation, the number one problem they run into is the ongoing maintenance effort. That is because things are constantly changing, making the test environment very complex. Lambert explained that if testers don’t have reliable ways of locating elements on the page or handling wait conditions, it puts them at risk and can cost days in maintenance.
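To make that concrete, the sketch below shows the kind of locator choice and wait handling Lambert describes, using Selenium’s explicit waits instead of fixed sleeps. The URL and the data-test-id locator are hypothetical stand-ins for a real application, not taken from any of the tools mentioned here.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class CheckoutSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical application URL, used only for illustration
            driver.get("https://example.com/checkout");

            // Prefer a stable, purpose-built locator (e.g. a data-test-id attribute)
            // over brittle XPath tied to page layout
            By placeOrder = By.cssSelector("[data-test-id='place-order']");

            // Explicit wait: poll until the element is clickable instead of sleeping
            WebElement button = new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.elementToBeClickable(placeOrder));
            button.click();
        } finally {
            driver.quit();
        }
    }
}
```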
“As the application changes, the coverage also needs to change. The new areas of how the application behaves and flows need to be accommodated for that change,” HCL’s Mathur explained.
In order to overcome this, Lambert suggested adopting the Page Object Model, which promotes reuse across test scripts, resulting in more maintainable tests. “When the UI changes, you only have to change in one place and not in two, 200, 2,000 or whatever the number of tests you have that are touched by that UI change,” he said.
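As a rough illustration of the pattern, a page object might look like the following Selenium sketch; the locators and method names are invented for the example rather than taken from any particular application.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object: the login page's locators and actions live in one place,
// so a UI change means updating this class rather than every test script.
public class LoginPage {
    private final WebDriver driver;
    // Hypothetical locators for illustration
    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By submit   = By.cssSelector("[data-test-id='login-submit']");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void logInAs(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
    }
}
```

A test then calls `new LoginPage(driver).logInAs("demo-user", "demo-pass")` instead of repeating the locators, so a change to the login screen touches only the page object.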
New artificial intelligence-based tools are also beginning to address that pain point, making it easier to recognize changes and automatically suggesting or making updates to tests so they can keep running, Coveros’ Saperstone explained. For instance, the Parasoft Selenic solution injects AI into the UI testing process to help analyze tests, understand where problems and instabilities are, and apply self-healing so failing tests don’t break the CI/CD pipeline. It also provides recommendations to the tester to improve test automation initiatives going forward.
Artificial intelligence should also be able to help testers identify new test cases that need to be created, and assist beyond just maintaining what’s already there, according to Chris Haggan, HCL OneTest UI product manager.
Other ways in which AI is starting to be applied to UI testing are in understanding what actual users do and in walking through those workflows. In addition, Mathur explained that modern-day applications are becoming much more graphical, and the old ways of locating a piece of text and interacting with it no longer work. This is where he believes machine learning will really thrive: helping to understand the context of the page, compare it to what is already known about the application, and recognize what is changing within it. “This will lead to much more robust and much more reliable test cases than we ever had in this space. The incorporation of machine learning will make testing a lot easier,” he said.
However, Saperstone doesn’t see this taking off for another couple of years. Testers are still learning to trust AI, and AI still has some maturing to do, he said.
“People are still stuck in the old manual testing mindset. They think they can take a manual test and convert it into an automated script to get the same coverage and results, and that’s not how that works,” said Saperstone. “You need to think about what you are trying to accomplish, verify and understand.”
The way teams build user interfaces is also changing, Haggan said. There is a move toward modern cloud-native applications and technologies like microservices, so instead of the UI being built as one big monolith, it is now developed and delivered in parts and pieces. The task of a UI testing organization like HCL is to go in and figure out how the pieces fit together, determine whether they all work together, and find out whether the experience is seamless, according to Haggan.
Aside from using tools, testers need to leverage smart execution, Lambert added. “Analyze the app and the tests running against the app to determine what has changed and what tests need to be re-executed for those changes, so you’re only executing tests that validate the changes in the app,” he said. This is extremely important because UI testing is slow; it takes time, and there are many browsers and click paths involved. When you are testing, you are not talking about two or three automated tests, or even two or three hundred; you are talking about thousands of automated tests running. Being able to target only the necessary changes can significantly cut down the number of tests and the time they take, he pointed out.
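One simple way to approximate this kind of targeting, short of a dedicated test-impact-analysis tool, is to tag UI tests by application area and have the pipeline run only the groups touched by a change. The sketch below uses JUnit 5 tags; the tag name and the Maven command in the comment are assumptions for illustration, not a description of how any particular product does it.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Tagging UI tests by application area lets a pipeline run only the groups
// affected by a change, e.g. with Maven Surefire: mvn test -Dgroups=checkout
@Tag("checkout")
class CheckoutFlowTest {

    @Test
    void guestCanCompleteCheckout() {
        // ... drive the checkout UI and assert on the confirmation page ...
    }
}
```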
The slowness of UI testing also becomes a problem in an Agile or DevOps environment where frequent releases and builds are happening. Testing needs to be done at a fast rate in order for it to be useful, according to Mathur. He recommended using distribution technology, cloud technologies and containers to speed things up. “Adopting technologies like Docker to the test cases and using agents so you can run them in parallel and get all the results in one place and get an answer fast as to the state of the application is increasingly important as everyone tries to move towards Agile development,” he said.
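As a minimal sketch of that setup, a test could connect to a Selenium Grid hub running in containers (for example, started from the selenium/hub and selenium/node-chrome Docker images) so the runner can open many browser sessions in parallel. The hub URL below is an assumption.

```java
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSession {
    // Opens one browser session against a Selenium Grid hub; the hub and its
    // Chrome nodes can run as Docker containers, and the URL is an assumption.
    public static WebDriver newSession() throws Exception {
        return new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"),
                new ChromeOptions());
    }
}
```

A test runner such as JUnit or TestNG can then open one such session per thread, and the grid distributes them across the containerized browser nodes, collecting all results in one run.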
Saperstone suggested seeing if there are things you can verify at lower levels, ways to reduce test times, and any tests that can be run in parallel.
“For example, I have a standard web app that has a back end, some APIs associated with it and then a UI. If I need a new user to log into the system, rather than creating a new user through the UI for each and every test, I can use the APIs in order to generate the users. I can create the user through some sort of back-end API call and then automatically log in. That is going to save me some time,” he said.
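A sketch of that approach might look like the following, where a back-end call creates the user before the UI test begins; the endpoint, payload and expected status code are hypothetical stand-ins for the application’s real API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TestUserSetup {
    // Creates a test user through a back-end API so the UI test can start at login.
    // The endpoint and JSON payload are hypothetical, for illustration only.
    public static void createUser(String name, String pass) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"username\":\"" + name + "\",\"password\":\"" + pass + "\"}"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 201) {
            throw new IllegalStateException("User setup failed: " + response.statusCode());
        }
    }
}
```

The UI test can then begin at the login screen, for example through a page object like the one sketched earlier, instead of clicking through registration on every run.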
Haggan added that it is important to bring the API and back-end solutions together with the UI testing because applications are becoming more and more reliant on those, especially with the use of microservices. “It is important that your UI test also has an ability to be able to do some of that API validation in the back end and bring those two parts of the universe together so when you submit a piece of data, you can tell if it is updating the right microservices, the right databases, and what is happening in the back end,” he said.
Mathur also said that unit testing overlaps with UI testing. “If we have units that are outside the development scope of the application itself, there is a good need for unit testing to also be incorporated in UI testing to give a leg up for the functional tests to get started and build on top of that,” he said.
Parasoft’s Lambert turns to the testing pyramid, which groups tests into different granularities and gives an idea of how many tests you need in each group. “What it does is it talks about how to organize your test automation strategy. You have a lot of tests at the lower level of the development stack, so your unit tests and API tests should cover as much as possible there. UI tests are difficult to automate, maintain and get the environment set up. The testing pyramid minimizes this,” he said. “I’m a very big proponent of the testing pyramid, which says a foundation of unit tests backed up by API or service-level tests, UI tests, and both automation and manual testing make manual testing much more efficient and much more effective and much more valuable. That is how you can really have a great strategy that’ll help you accelerate your delivery process,” he said.
Other best practices for UI testing include modularization, behavior-driven development, and service virtualization, the thought leaders added.
“When you hear companies say automation isn’t working for us, it is mainly because they are not really doing automation the right way,” said Saperstone.