With the emphasis on ever-faster software release cycles, organizations are turning to automated testing to keep up with that speed while still releasing quality products.

Though people have been talking about automated testing for a while now, many testing efforts are still manual, said Jeff Scheaffer, general manager of continuous delivery at CA Technologies. According to Scheaffer, the most recent estimates are that 70 percent of testing is still done manually.

Mark Lambert, vice president of products at Parasoft, explained that automated testing is not just about having the right tooling, but about automating regression testing as well, which is what allows organizations to deliver quality at speed.

Without automation, the overhead of tests “grinds delivery to a halt” and becomes a bottleneck in the process, Lambert explained.

“There is a fundamental need for testing to not only no longer be a bottleneck to innovation, but to be able to keep pace with development, to reduce maintenance overhead, and to be able to optimize your test suite to reduce bloat,” said Wayne Ariola, chief marketing officer at Tricentis. “The ‘right’ testing tools cannot fail to deliver on these requirements for quality at speed.”

Organizations need to seamlessly integrate testing into the software delivery pipeline so it can become a bridge between development and delivery, said Lambert of Parasoft. “Without that bridge your development gets done, but then you’re not able to validate the quality and do the delivery and push it out the door.”

According to Scheaffer, even though many organizations are still doing manual testing, they will soon be forced to implement automation into their software development life cycles in order to stay competitive in their industry. “One only needs to look at other industries to see what happens to those who don’t embrace automation,” Scheaffer said. “Nearly every major industry relies on automated factories to produce goods. There is no reason to think the production and testing of software should be any different. Those who don’t embrace automation will not be able to keep pace with competitors due to the cost, speed and quality benefits of automation.”

There are two shifts that testing tools should accommodate, according to Tricentis’ Ariola. The first is that assessing the risk for a release candidate should be the primary objective of test case design and execution. The second is that testing tools should take the input of various stakeholders into account during each stage of the release cycle.

“Not all test information is relevant at the same time,” Ariola explained. “It must be delivered when stakeholders can take action on it—and those actions must be measurable.”

Ariola said that there are three major components of choosing the right testing solution. First, organizations need to be fully on board with automation. He stressed that while manual efforts can complement automation, to achieve the consistent release cadence organizations need these days, automation is a necessity.

Second, organizations need to anticipate the demands of tomorrow to avoid disruption. “These organizations must also come to the realization that legacy tools were not built to meet today’s demands of Agile and DevOps, much less tomorrow’s,” Ariola said.

Finally, testing should be able to keep up with the pace of development, reduce maintenance overhead, and optimize the testing suite in order to reduce bloat, Ariola explained.

Ariola also believes that removing the need for scripting can be a useful feature for automated testing tools. “Accelerating testing to where it can keep pace with development requires features that remove arguably the biggest pain point in legacy testing solutions—the requirement of scripting,” he said. “A model-based solution eliminates the need for coding expertise, while also reducing maintenance overhead that prevents organizations from reaching automation rates they never thought were possible—90 percent or greater.”
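Tricentis’ model-based approach is its own product, but the underlying idea can be sketched generically. Below is a minimal, illustrative example, not any vendor’s implementation: the application is described as a small state-transition model (a made-up login flow), and test sequences are generated by walking the model rather than being hand-scripted one at a time.

```python
# Illustrative only: a toy model-based test generator, not any vendor's engine.
# The login-flow states and transitions below are hypothetical examples.

# Model the application as states and the actions allowed in each state.
MODEL = {
    "logged_out": {"enter_valid_credentials": "logged_in",
                   "enter_invalid_credentials": "error_shown"},
    "error_shown": {"retry_valid_credentials": "logged_in",
                    "dismiss_error": "logged_out"},
    "logged_in":   {"log_out": "logged_out"},
}

def generate_paths(state, depth):
    """Enumerate every action sequence of the given length from a starting state."""
    if depth == 0:
        return [[]]
    paths = []
    for action, next_state in MODEL.get(state, {}).items():
        for rest in generate_paths(next_state, depth - 1):
            paths.append([action] + rest)
    return paths

if __name__ == "__main__":
    # Each generated path becomes a test case; no hand-written script per scenario.
    for i, path in enumerate(generate_paths("logged_out", 3), start=1):
        print(f"test_case_{i}: {' -> '.join(path)}")
```

When the model changes (a new state or transition), the test cases are regenerated from it, which is the maintenance-reduction argument behind scriptless, model-based tooling.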

Ariola also said that organizations should look for a testing platform that will give them access to the entire suite of development technologies used, and allow them to access everything through a single interface. Having this sort of end-to-end platform will enable organizations to update testing in real time and in parallel with the evolution of the applications, he explained.

Gil Sever, co-founder and CEO of Applitools, said it’s important to pick a tool that can integrate with your existing toolchain and provide robust reporting capabilities. That includes integrating with other tools in the delivery process, such as continuous integration servers like Jenkins and collaboration tools like Slack.
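As a rough illustration of that kind of integration, the sketch below runs a test suite inside a CI job and posts the outcome to a chat channel. It assumes a hypothetical SLACK_WEBHOOK_URL environment variable, the third-party requests package, and pytest on the PATH; it is not how any particular vendor’s plugin works.

```python
# Illustrative sketch: run the test suite in CI and report the outcome to Slack.
# SLACK_WEBHOOK_URL is a hypothetical secret set in the CI job; `requests` is a
# third-party package (pip install requests).
import os
import subprocess
import sys

import requests

def main():
    # Run the project's test suite; a CI server (Jenkins, etc.) gates the rest
    # of the pipeline on this step's exit code.
    result = subprocess.run(["pytest", "--maxfail=1", "-q"],
                            capture_output=True, text=True)
    passed = result.returncode == 0
    summary = result.stdout.strip().splitlines()[-1] if result.stdout.strip() else "no output"

    webhook = os.environ.get("SLACK_WEBHOOK_URL")  # hypothetical webhook URL
    if webhook:
        requests.post(webhook, json={
            "text": f"Test run {'passed' if passed else 'FAILED'}: {summary}"
        }, timeout=10)

    # Propagate the result so the CI server marks the build red or green.
    sys.exit(result.returncode)

if __name__ == "__main__":
    main()
```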

On the reporting side, the tool should be able to provide a visual representation of test results. It is also important to show which tests have passed, which tests have failed, and what the status of the application is after running the tests, Sever explained.

For example, Applitools has a dashboard that shows what percentage of tests passed or failed and if they failed on specific browsers or specific devices. This allows users to isolate the root cause or source of the problem and quickly fix it, Sever said.
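As a generic illustration of that kind of roll-up (not Applitools’ dashboard itself), the sketch below aggregates a made-up set of test results by browser:

```python
# Illustrative only: aggregate pass/fail results by browser, the kind of
# roll-up a reporting dashboard provides. The results list is invented.
from collections import defaultdict

results = [  # (test name, browser, passed?) -- hypothetical data
    ("login_page", "chrome", True),
    ("login_page", "firefox", False),
    ("checkout",   "chrome", True),
    ("checkout",   "safari", False),
]

by_browser = defaultdict(lambda: {"passed": 0, "failed": 0})
for _name, browser, ok in results:
    by_browser[browser]["passed" if ok else "failed"] += 1

for browser, counts in sorted(by_browser.items()):
    total = counts["passed"] + counts["failed"]
    print(f"{browser}: {counts['passed']}/{total} passed "
          f"({100 * counts['passed'] / total:.0f}%)")
```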

Robust reporting capabilities also give organizations a better idea of how far away they are from being able to release a reliable application, Sever noted.

Parasoft’s Lambert stressed that organizations should not be looking specifically for certain features, but rather looking at their current and future needs, and finding a solution that can satisfy both of those needs.

It all comes down to the total cost of ownership, Lambert said. “What might be quick and cheap now may not be maintainable or scalable in the future,” he explained. Organizations need to strike a balance between satisfying their current needs and considering how the tool can scale to meet future needs and requirements, he said.

CA’s Scheaffer said he believes that in order to find the right automated testing tools, organizations should start by identifying a business driver. “Everyone wants to deliver quality software faster, with reduced risk and at lower costs, but usually an organization will have a specific, acute need that is leading them to evaluate testing solutions,” he said.

Once that driving factor is identified, organizations need to identify the gaps in their existing tools or processes that are preventing them from achieving that objective, Scheaffer said.

“From there, it’s easy to identify which testing tools are right for your organization,” Scheaffer said. “If your biggest challenge is managing test data, the logical place to start is with a test data management solution. If mapping requirements to test cases is a challenge, you should look at solutions that offer model-based test case design. Organizations that have an issue accessing test environments can look at service virtualization solutions as a first step. And so on.”

Service virtualization gives you control over the dependencies in an application, so tests that would normally be created and executed at the end of the cycle can be executed earlier in the process, Lambert noted.
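Commercial service-virtualization products do far more, but the core idea can be sketched as a stub server that stands in for a dependency that isn’t available yet. Everything in the example below, including the /price endpoint and its payload, is hypothetical.

```python
# Illustrative sketch of the idea behind service virtualization: a stub HTTP
# server that stands in for a dependent service so tests can run before that
# service is available or stable. The endpoint and payload are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/price": {"sku": "ABC-123", "price": 19.99},   # fixed, predictable test data
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not stubbed"}).encode())

    def log_message(self, *args):  # keep test output quiet
        pass

if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of the
    # real downstream system, and tests no longer depend on its availability.
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```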

Hurdles to automated testing
Organizations face many challenges when it comes to implementing automated testing.
Among them are:

  • Automated testing requires a cultural shift. According to CA Technologies’ Jeff Scheaffer, even though the benefits of automated testing have been proven, many individuals and teams have difficulty giving up the control associated with manual testing. He said that it is important to strike a balance between automation and human control and checkpoints.
  • “The maintenance trap.” Organizations get started with automation and have all of their tests up and running, but over time their environment and applications change, and suddenly the automated tooling becomes unstable, explained Tricentis’ Ariola, who calls this “the maintenance trap.” When this happens, teams need to spend more time on maintenance than on defining the proper test scenarios, he noted. To overcome this, organizations will have to eliminate the need for scripting, such as by implementing model-based testing, Scheaffer said.
  • Having the right stateful data at hand. Ariola said that once that has been solved, the next challenge becomes making sure that the right systems and services are in place to be able to perform tests. “These two challenges require test data management that overcomes tough demands in terms of time-dynamics, data fluctuation, and consumption, and orchestrated service virtualization that helps you stabilize access to dependent systems,” Ariola said.
  • Skill and manpower. “In organizations that use manual testing, when they want to move to automation, it’s not just buying the tools,” said Applitools CEO Gil Sever. Organizations buy the new tool, but then they also probably need to hire a software engineer or software developer to build and maintain the scripts. Sever cites this need for resources as the main reason why not everyone has fully embraced automation. Companies can stick with manual testing and get away with cheaper manpower to do manual quality assurance, and some of them do, Sever explained. “I think visual validation joins that trend as well because instead of writing 50 or 100 lines of code to validate all the different fields, and all the different numbers, and all the different images that appear on the screen, with visual testing you actually tell the tool: validate that this image is correct and this screen is correct,” Sever said. “In the best case scenario, you only need to write one line of code. We actually take the screenshot, analyze it, and give you the result without needing you to write any specific code for it.” (See the sketch after this list for a rough illustration of the idea.)
  • Scaling test practices with skills and knowledge. “You need to find something that’s easy to use, and can help translate the technical details of what you’re trying to implement into something that’s consumable by non-experts,” said Parasoft’s Mark Lambert.
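As a rough sketch of the visual-validation idea Sever describes (not Applitools’ actual engine), the example below compares a new screenshot against an approved baseline using the third-party Pillow library; the file names are hypothetical. Real tools add layout analysis and anti-flakiness logic, but the assertion itself stays close to one line.

```python
# Rough illustration of visual validation: compare a new screenshot against an
# approved baseline image. Uses the third-party Pillow library (pip install
# pillow); the file paths are hypothetical.
from PIL import Image, ImageChops

def screens_match(baseline_path, current_path):
    """Return True if the two screenshots are pixel-identical."""
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False
    # getbbox() returns None when the difference image is entirely black,
    # i.e. the screenshots are identical.
    return ImageChops.difference(baseline, current).getbbox() is None

def test_login_page_looks_right():
    # One assertion replaces many field-by-field checks.
    assert screens_match("baseline/login.png", "screenshots/login.png")
```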

All of that presents the problem of maintaining a stable test automation suite, Lambert explained. When organizations build a large suite of test cases, he said, it can be difficult not only to identify the impact of changes, but to then take those changes into account and refactor the test suite accordingly.

“Being able to have tooling that can help you with that impact assessment and with the refactoring will help streamline the maintenance of the practice so that you’re not spending all of your time rebuilding test automation,” Lambert said. “You’re able to just kind of rely on the test automation working as part of a regression and focus on expanding the overall test coverage on new functionality.”

According to Scheaffer, enabling continuous testing is the ultimate goal of test automation. “Rather than testing being an event that happens at a point in time (or in the SDLC), testing is done constantly,” he said. “With continuous testing, test cases are generated automatically from the business requirements, and then test cases are executed automatically as the code is being written. This results in tests that are both more comprehensive, and more efficient—enabling quality software to be delivered faster, at lower costs.”
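As a small, hedged illustration of that direction, the sketch below turns a made-up requirements table into test cases that pytest executes automatically on every commit; the discount rules and the apply_discount function are invented for the example, and real continuous-testing platforms generate such cases from business requirements rather than from a hand-written table.

```python
# Illustrative only: turning a small "requirements table" into automatically
# executed test cases with pytest's parametrize. The discount rules and the
# apply_discount() function under test are invented for this sketch.
import pytest

# Each row encodes a business requirement: (order total, expected discount rate).
REQUIREMENTS = [
    (50.00, 0.00),    # orders under $100 get no discount
    (100.00, 0.05),   # orders of $100 or more get 5%
    (500.00, 0.10),   # orders of $500 or more get 10%
]

def apply_discount(total):
    """Toy implementation of the pricing rule being tested."""
    if total >= 500:
        return 0.10
    if total >= 100:
        return 0.05
    return 0.00

@pytest.mark.parametrize("total,expected_rate", REQUIREMENTS)
def test_discount_matches_requirement(total, expected_rate):
    # Runs on every commit in CI, so the check is continuous rather than a
    # one-time event late in the cycle.
    assert apply_discount(total) == expected_rate
```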

Lambert said he believes the future of test automation lies with the advance of artificial intelligence and machine learning. “Artificial intelligence and machine learning work better the more data you give it,” he said. “What I see is AI and machine learning really becoming powerful when it’s trained. So that training can take the form of human interactions within an application and an engine watching to see what the human does, and then trying to variate on that. It can take the form of a human providing some kind of tests and saying this test validates some functionality and using that to train the engine. That’s the direction that I see tests going.”