The debate on test automation has always been fascinating to me. Staunch supporters and practitioners of exploratory testing and other manual testing efforts have often been quick to voice their skepticism of those who touted automation as a golden ticket to “better” testing results. Those on the other side of the fence have wondered how anyone could get by without automation, given today’s customer demands for ever-faster software delivery.

I stand somewhere in the middle of this argument. I love manual, exploratory testing and am in awe of the creativity, outside-the-box thinking and invaluable feedback that the masters of this practice are able to provide their organizations. On the other hand, I’m equally impressed by the innovative technology and reduced barriers to entry that some of today’s automation tools offer testers from day one.


But where I tend to give the skeptics a point in this debate is not in their questioning of why people do automation, but in their demand to know exactly what automation is giving you that manual testing does not: a completely valid question that automation proponents should absolutely be able to answer.

Judging by the number of testers who struggled to answer this question when speakers posed it at a recent industry conference, it’s obvious that confident, provable answers backed by a positive return on investment are needed. Not being able to answer a friendly conference speaker is one thing; not having an answer when your manager or another higher-up asks is another.

Consultants told many stories of coming across testing teams who were “doing” automation simply because they’d been led to believe they had to, given its current popularity, or because their competition was using this or that tool. As with any new technology, those may be fine reasons to begin evaluating whether an automation tool would benefit your own particular operation, but they’re hardly reason enough to purchase something, disrupt your team’s previous efforts, and then wait to see if it delivers.

At STARWEST last year, speaker Jim Trentadue’s session, “When Automation Fails—In Theory and Practice,” began by asking who in the room was utilizing some form of test automation. Nearly every hand went up, though some added they were only doing “a little bit” and were looking for help in deciding what else could be automated, while being wary of automating themselves out of a job (which really shouldn’t be feared at all).

The room’s uncertainty about what to automate and what to leave alone led right into one of Trentadue’s primary reasons for choosing the topic. He quickly pointed out that one of the true failures of test automation is the assumption that automation will serve all testing needs and requirements, or that it must cover 100% of an application to be judged a success. His statement that automation’s primary purpose is “to take the scripting and programming out of the equation so that more manual testing can be done” seemed to draw a collective sigh of relief from most of the session’s attendees.