For more than two decades, software testing tool vendors have been tempting enterprises with the promise of test automation. Yet most companies have never achieved the desired business results from their automation initiatives. Recent studies report that test automation rates average around 20% overall, and 26% to 30% among agile adopters.

I believe that several factors contribute to these dismal automation results…

Legacy software testing platforms were designed for a different age
The most commonly used software testing tools today are built on old technology, but enterprise architectures have continued to evolve over the years. Development no longer focuses on building client/server desktop applications on quarterly release cycles, with the luxury of month-long testing windows before each release.

Almost everything has changed since test automation tools from vendors like Mercury, HP, Micro Focus, Segue, Borland, and IBM were developed. Retrofitting new functionality onto fundamentally old platforms is not the same as engineering a solution that addresses today's needs natively.

Legacy script-based tests are cumbersome to maintain
Scripts are cumbersome to maintain while developers are actively working on the application. The more frequently the application evolves, the more difficult it becomes to keep scripts in sync. Teams often reach the point where it's faster to create new tests than to update the existing ones. This leads to an even more unwieldy test suite that still (eventually) produces a frustrating number of false positives as the application inevitably continues to change. Exacerbating the maintenance challenge is the fact that scripts are as vulnerable to defects as application code: a defect in a script can cause false positives or interrupt test execution altogether.

The combination of false positives, script errors, and bloated test suites creates a burden that few QA teams can overcome. It's a Sisyphean effort, except the boulder keeps growing larger and heavier.
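To make the maintenance problem concrete, here is a minimal sketch of the kind of brittle script many teams accumulate; the page, URL, and locators are hypothetical. Because the test is welded to low-level DOM details, a harmless layout change breaks it and the run fails even though the application still works for users:

```python
# Minimal sketch of a brittle UI script (hypothetical URL and locators).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/checkout")  # hypothetical application URL

    # Fragile: an absolute XPath tied to the exact page structure.
    # Adding a banner or reordering a <div> invalidates it overnight.
    pay_button = driver.find_element(
        By.XPATH, "/html/body/div[2]/div[3]/form/div[5]/button[1]"
    )
    pay_button.click()

    # Fragile: asserting against a presentation detail (a CSS class and
    # display text) rather than the business outcome of the transaction.
    confirmation = driver.find_element(By.CLASS_NAME, "msg-ok")
    assert "Thank you" in confirmation.text
finally:
    driver.quit()
```

Multiply this pattern across hundreds of scripts and every UI change, and the maintenance effort quickly outpaces the team's capacity to keep up.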

Software architectures have changed
Software architectures have changed dramatically, and the technology mix associated with modern enterprise applications has grown immensely. We're trying to migrate away from mainframes and client/server architectures towards cloud-native applications and microservices. This creates two distinct challenges:

  • Testing these technologies requires either deep technical expertise and specialization, or a level of business abstraction that allows testers to validate functionality without diving into low-level technical details.
  • Different parts of the application are evolving at different speeds, creating a process cadence mismatch.

The software development process has changed
Although most enterprises today still have some waterfall processes, there's an undeniable trend towards rapid iterations with smaller release scopes. We've shifted from quarterly releases to biweekly or even daily ones, with extraordinary outliers like Amazon releasing new code to production every 11.6 seconds. This extreme compression of release cycles wreaks havoc on testing, especially when most testers must wait days or weeks to access suitable test environments and test data.

The responsibility for quality has changed
In response to the desire for faster release cycles, there’s been a push to “shift left” testing. The people creating the code are assuming more responsibility for quality because it’s become imperative for getting to “done done” on time. However, for large enterprises working on complex applications, developer-led testing focuses primarily on a narrow subset of code and components. Developers typically lack both the time and the access required to test realistic end-to-end business transactions. Although the onus for quality has shifted left, the legacy platforms, rooted in waterfall processes, have a distinct bias towards the right. This makes it difficult to blend both approaches.

Open-source testing tools have changed the industry
The rise of open-source software testing tools such as Selenium and SoapUI has had both positive and negative effects. Traditionally, open-source testing tools are laser-focused on solving a very specific problem for a single user. For example, Selenium has become an extremely popular script-based tool for testing web interfaces. Yet, although Selenium offers speed and agility, it does not support end-to-end tests across packaged applications, APIs, databases, mobile interfaces, mainframes, and so on. There's no doubt that most of today's enterprise applications feature a web UI that must be tested. However, in large enterprises, that web interface is just one of many elements of an end-to-end business process. The same limitation applies to SoapUI and API testing.
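As a rough illustration of that gap (the endpoints and element names below are hypothetical), a browser-automation script exercises only the web layer of a business transaction; the API, database, and mainframe steps behind it require other tools entirely:

```python
# Hypothetical order flow: the web UI is only one hop in the end-to-end path.
from selenium import webdriver
from selenium.webdriver.common.by import By
import requests

# Step 1 -- web UI: the only part a Selenium script actually drives.
driver = webdriver.Chrome()
try:
    driver.get("https://shop.example.com/order")         # hypothetical URL
    driver.find_element(By.ID, "submit-order").click()   # hypothetical element
finally:
    driver.quit()

# Step 2 -- API: checking the order service already requires a second tool.
resp = requests.get("https://api.example.com/orders/12345")  # hypothetical endpoint
assert resp.status_code == 200

# Step 3 -- downstream systems (database updates, message queues, mainframe
# batch jobs) are beyond the reach of browser automation altogether, yet they
# are part of the same end-to-end business transaction.
```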

So… now what?
Software testing must change. Today's software testing challenges cannot be solved by yesterday's ALM tools. With disruptive initiatives like DevOps, Continuous Delivery, and Agile expanding across all industry segments, software testing becomes the centerpiece for data-driven software release decisions. This next wave of SDLC maturity requires organizations to revamp antiquated testing processes and tools. Organizations must adopt technologies that enable Continuous Testing, or innovative ideas will remain hostage to yesterday's heavyweight testing tools.