The debate on test automation has always fascinated me. Ardent supporters and practitioners of exploratory testing and other manual testing efforts have often been quick to voice their skepticism of those who touted automation as a golden ticket to “better” testing results. Those on the other side of the fence have wondered how you could get by without automation, given today’s customer demands for ever-faster software delivery.

I stand somewhere in the middle of this argument. I love manual, exploratory testing and am in awe of the creativity, outside-the-box thinking and invaluable feedback that some of the masters of this practice are able to provide their organizations. On the other hand, I’m equally impressed by the innovative technology and reduced barriers to entry that some of today’s automation tools offer to testers from the first day of their use.

But where I tend to give the skeptics a point in this debate is not in their questioning of why people do automation, but in their demand to know exactly what automation gives you that manual testing does not. That is a completely valid question, and one that automation proponents should absolutely be able to answer.

Based on the number of testers who struggled to answer this question when asked by various speakers at a recent industry conference, it’s obvious that some confident, provable and financially sound answers are needed. Not being able to answer a friendly conference speaker is one thing; not having an answer when your manager or another higher-up asks is quite another.

Consultants told many stories of coming across testing teams that were “doing” automation simply because they’d been led to believe they had to, given its current popularity, or because their competition was using this or that tool. As with any new technology, these may be fine reasons to start evaluating whether an automation tool would benefit your own particular operation, but they’re hardly reason enough to purchase something, disrupt your teams’ existing efforts, and then wait to see if it delivers.

At STARWEST last year, speaker Jim Trentadue’s session, “When Automation Fails—In Theory and Practice,” began by asking who in the room was utilizing some form of test automation. Nearly every hand went up, though some added they were only doing “a little bit” and were looking for help in deciding what else could be automated, while being wary of automating themselves out of a job (which really shouldn’t be feared at all).

The room’s uncertainty about what to automate and what to leave alone led right into one of Trentadue’s primary reasons for choosing the topic. He quickly pointed out that one of the true failures of test automation lies in the assumption that automation will serve all testing needs and requirements, or that it must cover 100% of an application to be judged a success. His statement that automation’s primary purpose is “to take the scripting and programming out of the equation so that more manual testing can be done” seemed to draw a collective sigh of relief from most of the session’s attendees.

So where is this excess scripting and programming that Trentadue refers to? I would suggest that it could be found anywhere manual efforts fail to add to the overall quality of your product.

Take a look at code reviews for instance. Can they be automated? Sure. Should they be automated? Not entirely, by any means. The gains in quality from the collaborative, knowledge-sharing, even training nature of manual code reviews cannot be automated with any tool.
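To make that division of labor concrete, here’s a minimal Python sketch of the slice of a code review that a machine handles well (flake8 and the `src/` path are assumptions for illustration, not a recommendation of any specific setup); everything the script can’t see is precisely what the human review is for.

```python
# A minimal sketch of the mechanical slice of a code review that a tool
# handles well. flake8 and the "src/" path are illustrative assumptions.
import subprocess
import sys


def run_static_checks(path: str = "src/") -> int:
    """Run flake8 over a source tree; return its exit code."""
    result = subprocess.run(
        ["flake8", path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # These are the findings a machine catches reliably: unused
        # imports, undefined names, style violations. Design intent,
        # knowledge sharing and mentoring still require a human reviewer.
        print(result.stdout)
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_static_checks())
```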

Another area where automation will never fully replace manual testing is in user experience. Employing automated tests for functionality and responsiveness is a great idea, but when your end user’s experience with your product is the ultimate measure of real quality, this is where automated testing tools cannot promise full coverage.
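To illustrate that boundary, here’s a minimal Python sketch (the URL and the latency budget are hypothetical) of what such an automated test can assert about functionality and responsiveness; notice that nothing in it speaks to how the experience actually feels to a user.

```python
# A hedged sketch of what automated checks *can* promise here: basic
# functionality and responsiveness. The URL and the 500 ms budget are
# assumptions for illustration only.
import requests


def check_endpoint(url: str = "https://example.com/", budget_s: float = 0.5) -> None:
    resp = requests.get(url, timeout=5)
    # Functional check: the page answers successfully.
    assert resp.status_code == 200, f"unexpected status {resp.status_code}"
    # Responsiveness check: the server answered within the budget.
    elapsed = resp.elapsed.total_seconds()
    assert elapsed < budget_s, f"too slow: {elapsed:.3f}s"
    # No assertion here can say whether the page is pleasant, clear or
    # trustworthy to a real user. That judgment remains a manual task.


if __name__ == "__main__":
    check_endpoint()
    print("Functional and responsiveness checks passed.")
```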

So if full-bore automation isn’t the measure of success, how do you know when you’ve automated enough, and where should you look for the greatest return on your investment and efforts?

Test architect Martin Pol suggested that we ask our teams, “What are we optimizing? Are we looking to improve the quality of our testing, the quality of our overall product, or perhaps even our entire organization?”

I found it fascinating that Pol spoke of “quality” repeatedly throughout his session, yet I don’t believe he uttered the words “faster” or “speed” a single time. If you were to ask those in the software industry whether test automation is primarily expected to reduce testing time or to increase the quality of a product, I’m willing to bet that most would assume speed is the goal. But I think this is where most of us have been mistaken.

A recent poll of testers showed that while nearly all who participated said they’re using some level of automation, almost all of them also stated that they “want more from it.” I’m curious as to what more they want, because perhaps automation is delivering exactly what it’s capable of, and the “more” has to come from the testers themselves.

Remember, automation frees up time to perform more manual testing, especially on newer, high-value software features that benefit customers. Manual testing is time-consuming; there’s no arguing that. But when it comes to the overall quality of your software, there’s really nothing like it, and automation should never be sold as (or expected to be) its replacement.

As a tester, your job is safe, but you’ve got some work to do.