Typically, when we think of new ways to test software, we focus on tools and methodologies. During the last year, I have discussed various approaches to testing, trying to illuminate effective new techniques that might inspire improvements, or at least experimentation, in testing organizations battling the testing gap—that gulf between the testing necessary to instill confidence in the software and the testing actually done.

Some companies are using crowdsourcing to reduce this gap, leveraging crowds to do things that would otherwise be well-nigh impossible. Large test groups, of course, have a long history in software development. Release candidates of operating systems—notably those from Microsoft—are a way of achieving a large test sample that can exercise the software in ways that simply cannot be duplicated inside the firewall.

Most companies don’t have the luxury of thousands of beta testers to run their products and provide feedback. As a result, they depend on a core of dedicated customers or enthusiasts who form a test group with significant limitations. The most prominent of these is that such testers tend to know the product well and so cannot provide the feedback of a new customer—the one every company must please if it is to grow.

Into this breach recently stepped uTest, a startup based in Massachusetts. The founders’ unique vision was to engage a worldwide community of testers who would be paid for finding defects.

Here is how the model works: Roughly 23,000 individuals have signed up to be part of the testing network. They are distributed around the world (roughly one third each in North America, India and the rest of the world). They are paid only for accepted defects. Their credentials as testers are based not on their professional experience, but on a rating system driven largely by customer feedback on their bug reports.

Customers with products to test come to the company and, together with a service rep, design a test plan for their product. They choose testers based on the technologies the testers have at their disposal, on their geographic location (if it’s relevant), and finally on the ratings the testers have earned. A plan is then put together with, say, 100 testers, and within days the customer starts receiving bug reports.

The reports are filtered by uTest to strip out bugs the customer is not interested in (for example, defects in printing functions that are currently being rewritten). The customer runs the test cycle for as long as it wants. Says uTest’s Matt Johnson: “It’s a form of test-on-demand.”

A key concern in this arrangement might be confidentiality. uTest has signed agreements with all testers, and it encourages customers with sensitive products to get signed NDAs from the testers they use. In addition, it supports the use of watermarked software so that leakers can be identified and removed from the network. As Johnson notes, the prospect of losing a revenue stream earned by testing at home on one’s own schedule is generally motivation enough for contributors to respect NDAs.

There are several typical use cases for this model. The first is geographic testing. Your company’s mobile app has just been translated into Spanish. How are you going to find 150 Latin American testers with Nokia phones to give you feedback?

Another compelling use case is load testing. The company says that on load-testing projects, the customer is often already running simulated loads when uTest’s testers log on to the site. This combination lets the customer detect errors and problems that occur under real use but that are not entirely reproducible with load-testing software alone.

The most common use case, however, is the one I described earlier: significantly expanding a testing team. This is particularly attractive to ISVs and other creators of consumer software, who are always laboring under the fear that their product will be used in ways they can’t anticipate in testing. Via uTest’s approach, they can validate functionality across many possible usage patterns.

Because it’s an on-demand model, initial costs are low. A testing cycle of three runs for a small project costs uTest customers in the US$5,000–$6,000 range—a number that’s well within the budget of most organizations.

Crowdsourcing will surely become a larger part of testing strategies, whether through the uTest model or through tools that make widespread testing easier. Such tools are starting to emerge. For example, Mozilla’s TestSwarm by John Resig (of jQuery fame) helps test JavaScript against a variety of browser/platform combinations. Other products are sure to follow.
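
To make the idea concrete, here is a minimal sketch (mine, not taken from TestSwarm’s documentation) of the kind of unit test such a harness fans out to many connected browsers, collecting pass/fail results per platform. It assumes QUnit, a JavaScript test framework TestSwarm works with; the specific test cases are hypothetical illustrations, not part of any real suite.

// Hypothetical QUnit tests a TestSwarm-style harness could run in every
// connected browser, reporting results for each browser/OS combination.
QUnit.test("String.prototype.trim strips surrounding whitespace", function (assert) {
  // Older browsers lacked trim(); this is the sort of gap swarm testing catches.
  assert.strictEqual("  hello  ".trim(), "hello");
});

QUnit.test("encodeURIComponent escapes reserved characters", function (assert) {
  assert.strictEqual(encodeURIComponent("a b&c"), "a%20b%26c");
});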

Andrew Binstock is the principal analyst at Pacific Data Works. Read his blog at binstock.blogspot.com.