The software industry is under immense pressure to keep up with market demand and deliver software faster. Automated testing emerged not only to help speed up software delivery, but to ensure the software that shipped did what it was supposed to do. For some time, automated testing has been great at removing repetitive manual tasks, but the industry is only moving faster, and businesses are now looking for ways to do more.
“Rapid change and accelerating application delivery is a topic that used to really be something only technology and Silicon Valley companies talked about. Over just the past few years, it has become something that almost every organization is experiencing,” said Lubos Parobek, vice president of product for the testing company Sauce Labs. “They all feel this need to deliver apps faster.”
This sense of urgency has businesses looking to leverage test automation even further and go beyond just automating repetitive tasks to automating in dynamic environments where everything is constantly changing. “As teams start releasing even weekly, let alone daily or multiple times a day, test automation needs to change. Today test automation means ‘automation of test execution,’ but the creation and maintenance of tests, impact analysis and the decision of which test to run, the setup of environments, the reviewing of results, and the go/no-go decision are all entirely manual and usually ad-hoc,” said Antony Edwards, CTO of the test automation company Eggplant. “The key is that test automation needs to expand beyond the ‘test execution’ boundary and cover all these activities.”
Pushing the limits
Perhaps the biggest drivers for test automation right now are continuous integration, continuous delivery, continuous deployment and DevOps, because they are pushing organizations to move faster and get software into the hands of their users more quickly, according to Rex Black, president of Rex Black Consulting Services (RBCS), a hardware and software testing and quality assurance consultancy.
“But the only way for test automation to provide value and to not be seen as a bottleneck is for it to be ‘continuous,’” said Mark Lambert, vice president of products at the automated software testing company Parasoft.
According to Lambert, this happens in two ways. First, the environment has to be available at all times so tests can be executed at any time, anywhere. Second, the tests need to take change into account. “Your testing strategy has to have change resistance built into it. Handling change at the UI level is inherently difficult, which is why an effective testing strategy relies on a multi-layer approach. This starts with a solid foundation of fully automated unit tests, validating the granular functionality of the code, backed up with broad coverage of the business logic using API-layer testing,” said Lambert. “By focusing on the code and API layers, tests can be automatically refactored, leaving a smaller set of brittle end-to-end UI-level tests to manage.”
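As a rough illustration of that multi-layer idea, the sketch below pairs a unit-level test with an API-level test, using pytest conventions and the requests library. The discount function and the endpoint are hypothetical examples, not Parasoft's tooling or API:

```python
# A minimal sketch of the multi-layer approach: fast unit tests at the
# base, API-level tests covering the business logic over HTTP.
import requests

# --- Unit layer: granular checks of code-level behavior ---
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business-logic function under test."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_unit():
    # Validates granular functionality directly; no environment needed.
    assert apply_discount(100.0, 20) == 80.0

# --- API layer: broad coverage of the same logic through a service ---
def test_discount_api():
    # Exercises the logic through a (hypothetical) endpoint.
    resp = requests.get("https://example.test/api/discount",
                        params={"price": 100.0, "percent": 20})
    assert resp.status_code == 200
    assert resp.json()["discounted_price"] == 80.0
```

Only a thin layer of brittle UI-level tests would then sit on top of these two layers.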
Part of that strategy also means looking at testing from a different angle. According to Eggplant’s Edwards, testing has shifted from checking whether something is right to checking whether something is good. “I am seeing more and more companies say, ‘I don’t really care if my product complies with a [specification] or not,’” he said. “No one wants to be the guy saying no one is buying our software anymore, and everyone hates it, but at least it complies with the spec.” Instead, testing is shifting from thinking about the requirements to thinking about the user. Does the software increase customer satisfaction, and is it improving whatever business metric you care about?
“If you care about your user experience, if you care about business outcome, you need to be testing the product from the outside in, the way a user does,” Edwards added.
Looking at it from the user’s side involves monitoring performance and the status of a solution in production. While that may not seem like it has anything to do with testing or automation, it’s about creating automated feedback loops and understanding the technical behavior of a product and the business outcome, Edwards explained. For example, he said if you look at the page load speed of all your pages and feed that back into testing, instead of automating tests that say every page has to respond in 2 seconds, you can get more granular and say certain pages need to load faster while other pages can take up to 10 seconds and won’t have a big impact on experience.
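One way such a feedback loop might look in practice is sketched below, assuming per-page budgets derived from production monitoring. The URLs, budgets and helper function are illustrative assumptions, not Eggplant's approach:

```python
# A minimal sketch of feeding production page-load data back into tests:
# per-page budgets instead of a blanket "every page in 2 seconds" rule.
import time
import requests

# Budgets (in seconds), hypothetically derived from real-user monitoring.
PAGE_BUDGETS = {
    "/": 1.0,                 # landing page: users abandon quickly
    "/search": 2.0,           # interactive, should feel responsive
    "/reports/annual": 10.0,  # rarely used; slower load barely hurts UX
}

def measure_load_time(base_url: str, path: str) -> float:
    """Crude load-time measurement for one page."""
    start = time.perf_counter()
    requests.get(base_url + path, timeout=30)
    return time.perf_counter() - start

def test_page_budgets():
    base = "https://example.test"  # hypothetical app under test
    for path, budget in PAGE_BUDGETS.items():
        elapsed = measure_load_time(base, path)
        assert elapsed <= budget, f"{path} took {elapsed:.2f}s, budget {budget}s"
```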
“Testing today is too tied to the underlying implementation of the app or website. This creates dependencies between the test and the code that have nothing to do with verification or validation, they are just there because of how we’ve chosen to implement test automation,” Edwards said.
But just because you aren’t necessarily testing something against a specification anymore, doesn’t mean you shouldn’t be testing for quality, according to Thomas Murphy, senior director analyst at the research firm Gartner. Testing today has gone from a calendar event to more of a continuous quality process, he explained.
“There is a fundamental need to be shipping software every day or very frequently, and there is no way that testing can be manual. You don’t have time for that. It needs to be fast,” he said.
One way to speed things up is to capture the requirements and create the tests up front. Two approaches that really drove the need for automated testing are test-driven development (TDD) and behavior-driven development (BDD). TDD is the idea that you write the test first, then write the code to pass that test, according to Sauce Labs’ Parobek. BDD is where you enable people like business analysts, product managers or product owners to write tests at the same time developers are writing code.
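To make the TDD idea concrete, here is a minimal sketch in Python; the slugify function is a hypothetical example, and in BDD the scenario would typically be written in plain language by a product owner instead:

```python
# A minimal TDD sketch: the test is written first and fails, then just
# enough code is written to make it pass.

# Step 1: write the failing test before the implementation exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2: write the simplest implementation that passes the test.
import re

def slugify(title: str) -> str:
    # Lowercase, strip punctuation, join words with hyphens.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)
```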
These approaches have helped teams get software out multiple times a day because they don’t have to wait for days to create the tests and get back results, and it enables them to understand if they make a mistake right away, Parobek explained.
However, if a developer is submitting new code or pull requests to the main branch multiple times a day, it can be hard to keep up with TDD and BDD, and automated testing becomes impossible because tests aren’t already in place for these changes. It also slows down the process, because someone now has to manually make sure the code being submitted doesn’t break any key existing functionality, according to Sauce Labs’ Parobek.
But Parobek explains that if you write your tests correctly and follow best practices, there are ways around this. “As you change your application and as you add new functionality, you do not just create new tests, but you might have to change some existing tests,” he said.
Parobek recommends page object modeling as a best practice. It enables users to create tests in a way that is very easy to change when the behavior of the app is changed, he explained. “It enables you to abstract out and keep in one place changes so when the app does change, you are able to change one file that then changes a variety of test cases for you. You don’t have to go into 100 different test cases and change something 100 times. Rather you just change one file that is abstracted through page objects,” he said.
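As an illustration, here is a minimal page-object sketch using Selenium's Python bindings; the login page URL and element IDs are hypothetical:

```python
# A minimal page-object sketch with Selenium. All knowledge of the login
# screen lives in one class, so a UI change means one file to update.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Encapsulates the (hypothetical) login page."""
    URL = "https://example.test/login"
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# Any number of test cases can reuse the page object; none of them
# change when a locator changes.
def test_login():
    driver = webdriver.Chrome()
    try:
        page = LoginPage(driver)
        page.open()
        page.log_in("demo", "secret")
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```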
Another best practice, according to Parobek, is to be smart about locators, which enable automated tests to identify different parts of the user interface. The most common locator is an ID. For example, when an automated test needs to exercise a button, if you’ve attached a locator ID to it, the test can recognize the button even if it has moved somewhere else on the page. Other locator strategies use names, CSS selectors, classes, tag names, link text and XPath. “Locators are an important part of creating tests that are simpler and easier to maintain,” said Parobek.
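For illustration, these strategies as expressed in Selenium's Python bindings look like this; the element attributes shown are made up:

```python
# Common Selenium locator strategies. A stable, unique ID is the most
# resilient: the test still finds the button even if it moves on the page.
from selenium.webdriver.common.by import By

checkout_button = (By.ID, "checkout")

# Other strategies, roughly in order of preference:
by_name = (By.NAME, "email")
by_css = (By.CSS_SELECTOR, "form.signup input[type='submit']")
by_class = (By.CLASS_NAME, "nav-item")
by_tag = (By.TAG_NAME, "h1")
by_link = (By.LINK_TEXT, "Forgot password?")
by_xpath = (By.XPATH, "//table[@id='orders']//tr[1]/td[2]")

# Usage: driver.find_element(*checkout_button).click()
```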
In order to successfully use locators, Parobek thinks it is imperative that the development and QA teams collaborate better. “If QA and development are working closely together, it is easy to build apps that make it easier to test versus development not thinking about testability.”
No matter how much you end up being able to automate, Black explained, being successful at it means always going back to the basics. If you become too aspirational with automation and have too many failed attempts, it can reduce management’s appetite for further investment. “You need to have a plan. You need to have an architecture,” Black said. “The plan needs to include a business case so you can prove to management it is not just throwing money into a bright shiny object.”
“It’s the boring basics. Attention to the business case. Attention to the architecture. Take it step by step and course correct as you go,” Black added.
The promise of artificial intelligence in automated testing
As artificial intelligence (AI) advances, we are seeing it implemented in more tools and technologies as a way to improve user experience and provide business value. But when it comes to test automation, the promise of AI is more inspirational than operational, RBCS’ Black explained.
“If you go to conferences, you will hear about people wanting to use it, and tool vendors making claims that they are able to deliver on it. But at this point, I have not had a client tell me or show me a successful implementation of test automation that relies on AI in a significant way,” he said. “What is happening now is that tool vendors are sensing that this is going to be the next hot thing and are jumping on that AI train. It is not a realized promise yet.”
When you think about AI, you think of a sentient element figuring things out automatically, according to Gartner’s Murphy, when in reality it tends to be a repeated pattern of learning something to be predictive, or learning from past experiences. To learn from past experiences, you need a lot of data to feed into your machine learning algorithm. Murphy explained that AI is still new and a lot of the test information companies have today is very fragmented, so when you hear companies talk about AI in regard to test automation, they tend to be over-promising or under-delivering.
Vendors that say they are offering an AI-oriented test automation tool are often just performing model-based testing, according to Murphy. Model-based testing is an approach where tests are automatically generated from models. The closest thing we have out there to an AI-based test automation tool are image-based recognition solutions that understand if things are broken, and can show when it happened and where through visual validation, Murphy explained.
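For a sense of what model-based testing means in practice, here is a toy sketch that generates test scenarios from a small state-machine model of an app; the model and code are illustrative only, not any vendor's tool:

```python
# A minimal model-based testing sketch: test scenarios are generated from
# a model of the app rather than hand-written one by one.
MODEL = {
    # state: {action: next_state}
    "logged_out": {"log_in": "logged_in"},
    "logged_in": {"add_item": "cart_not_empty", "log_out": "logged_out"},
    "cart_not_empty": {"checkout": "order_placed", "log_out": "logged_out"},
    "order_placed": {},
}

def generate_paths(start="logged_out", max_depth=4):
    """Enumerate action sequences through the model; each is a test case."""
    paths = []
    stack = [(start, [])]
    while stack:
        state, path = stack.pop()
        if path:
            paths.append(path)
        if len(path) < max_depth:
            for action, nxt in MODEL[state].items():
                stack.append((nxt, path + [action]))
    return paths

for case in generate_paths():
    print(" -> ".join(case))  # each line is one generated test scenario
```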
However, Black does see AI having potential within the test automation space in the future; he just warns businesses against investing in any technology too soon. The areas where Black sees the most potential for AI are false positives and flaky tests.
False positives happen when a test returns a failed result, but it turns out the software is actually working correctly. A human being is able to recognize this by looking further into the result. Black sees AI being used to apply that kind of human reasoning and differentiate correct from incorrect behavior.
Flaky tests happen when a test fails once, but passes when it is run again. This unpredictable result is due to variation in the system architecture, the test architecture, the tool or the test automation itself, according to Black. He sees AI being used to handle validation issues like this by bringing a more sophisticated sense of what ‘fit for use’ means to the testing effort.
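Even without AI, the basic mechanics of spotting a flaky test can be sketched simply: rerun a failure and classify the outcome. In the sketch below, run_test is an assumed callable standing in for a real test runner:

```python
import random

# A minimal sketch of flaky-test detection: rerun a failing test and
# classify it by whether the failure reproduces consistently.
def classify(run_test, reruns=3):
    if run_test():
        return "pass"
    # Failed once: rerun to see whether the failure is stable.
    results = [run_test() for _ in range(reruns)]
    if not any(results):
        return "fail"   # consistent failure: likely a real defect
    return "flaky"      # mixed results: unreliable test or environment

# Example with a deliberately nondeterministic stand-in "test":
print(classify(lambda: random.random() > 0.5))
```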
Kevin Surace, CEO of Appvance.ai, also sees AI being applied to test automation, but at different levels. According to Surace, there are five levels of AI that can be applied to test automation:
- Scripting/coding
- “Codeless” capture/playback
- Machine learning: self-healing human-created scripts and monkey bots
- Machine learning: near-full automation with auto-generated smart scripts
- Machine learning: full automation with auto-generated smart scripts and validation
When deciding on AI-driven testing, Surace explained, the most important qualification is to learn what level of AI a vendor is actually offering. According to Surace, many vendors have offerings at levels one and two, but very few can actually deliver levels three and above.
In the future, Parasoft’s Lambert expects humans will just be looking at the results of test automation, with the machine doing the testing in an autonomous way. For now, though, the real value of AI and machine learning is in augmenting human work, spotting patterns and relationships in the data in order to guide the creation and execution of tests, he explained.
Still, Black warns organizations to approach AI for test automation with caution. “Organizations that want to try to use AI-based test automation at this point in time should be extremely careful and extremely conservative in how they pilot that and how they roll that out. They need to remember that the tools are going to evolve dramatically over the next decade, and making large investments in automation that are hard, fast and difficult to change may not be a wise thing in the long term,” he said.
Manual practices remain
Despite the efforts to automate as much as possible, some things will, for the time being, still require a human touch.
According to RBCS’ Black, you can break testing down into two overlapping categories: verification, where a test makes sure the software works as specified; and validation, where you make sure the software is fit for use. For now, Black believes validation will remain manual because it is very hard to do in an automated fashion. For example, he explained, if you developed a video game, you can’t automate for things like: Is it fun? Is it engaging? Is it sticky? Do people want to come back and keep playing it?
“At this point, automation tools are really about verifying that the software works in some specified way. The test says what is supposed to happen and checks to see if it happens. There is always going to be some validation that will need to be done by people,” he said.
Sauce Labs’ Parobek explained that even if we get to a point where everything is automated, you will still always want a business stakeholder to take a final look and do a sanity check that everything works as a human expects.
“Getting a complete view of customer experience isn’t just about validating user scenarios, doing click-counts and sophisticated ‘image analysis’ to make sure the look and feel is consistent — it’s about making sure the user is engaged and enjoying the experience. This inherently requires human intuition and cannot be fully automated,” added Parasoft’s Lambert.
Robotic process automation
Test automation vendors are flocking to the idea of robotic process automation (RPA). RPA is a business process automation approach used to cut costs, reduce errors and speed up processes. So what does it have to do with test automation?
According to Gartner’s Murphy, RPA and test automation technologies have a high degree of overlap. “Essentially both are designed to replicate a human user performing a sequence of steps.”
Eggplant’s Edwards explained that on a technical level, test automation is about automating user journeys across an app and verifying that what is supposed to happen, happens. RPA aims to do just that. “So at a technical level they are actually the exact same thing; it’s simply the higher-level intent and purpose that is different. But if you look at a script that automates a user journey, there is no way to tell if it has been created for ‘testing’ or for ‘RPA’ just by looking at it,” said Edwards. “The difference for some people would be that testing focuses on a single application, whereas RPA typically works across several systems integrated together.”
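Edwards' point can be made concrete with a small sketch: the same hypothetical Selenium user journey serves as a test when it ends in an assertion, and as RPA when it simply runs on real data. The portal URL and element IDs below are made up:

```python
# One automated user journey; whether it is "testing" or "RPA" depends
# only on intent, not on the script itself.
from selenium import webdriver
from selenium.webdriver.common.by import By

def submit_invoice(driver, amount):
    """Drives one journey through a (hypothetical) finance portal."""
    driver.get("https://example.test/invoices/new")
    driver.find_element(By.ID, "amount").send_keys(str(amount))
    driver.find_element(By.ID, "submit").click()
    return driver.find_element(By.ID, "status").text

driver = webdriver.Chrome()
try:
    status = submit_invoice(driver, 125.00)
    # As a *test*, we verify the outcome:
    assert status == "Submitted"
    # As *RPA*, we would run the same journey on real data, perhaps in a
    # loop over a spreadsheet of invoices, with no assertion at all.
finally:
    driver.quit()
```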
Over the next couple of years, Gartner’s Murphy predicts we will see more test automation vendors entering this space as a new way to capitalize on market opportunity. “By moving into the RPA market, they are expanding their footprint and audience of people they go after to help them,” he said.
This move is especially important as more businesses move toward open-source technologies for their testing solutions.
RBCS’ Black sees the test automation space moving toward open source because of cost. “It’s easier to get approval for a test automation project if there isn’t a significant up-front investment in a tool purchase, especially if the test automation project is seen as risky. Related to that aspect of risk is that so many open-source test automation tools have been successful over recent years, so the perceived risk of going with an open-source tool is lower than it used to be,” he said.