Typically, when we think of new ways to test software, we focus on tools and methodologies. Over the last year, I have discussed various approaches to testing, trying to illuminate new techniques that are effective and might inspire improvements, or at least experimentation, in testing organizations that are battling the testing gap—that gulf between the testing necessary to instill confidence in the software and the testing actually done.

Some companies are using crowdsourcing to reduce this gap, leveraging the crowd to do things that would otherwise be well-nigh impossible. Large test groups, of course, have a long history in software development. Release candidates of operating systems—notably those from Microsoft—are a way of achieving a large test sample that can exercise the software in ways that simply cannot be duplicated inside the firewall.

Most companies don’t have the luxury of having thousands of beta testers to run their products and provide feedback. As a result, they depend on a core of dedicated customers or enthusiasts who form a test group with significant limitations. Most prominent of these is that they tend to know the product well and so cannot provide the feedback of a new customer—the one every company must please if it is to grow.

Into this breach recently stepped uTest, a startup based in Massachusetts. The founders’ unique vision was to engage a worldwide community of testers who would be paid for finding defects.

Here is how the model works: There are roughly 23,000 individuals signed up to be part of a testing network. They are distributed around the world (roughly one third each in North America, India, and the rest of the world). They are paid only for accepted defects. Their credentials as testers rest not on professional experience, but on a rating system derived in large part from customer feedback on their bug reports.
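uTest hasn't published how those ratings are computed, but a minimal sketch conveys the idea: each accepted bug report carries a customer feedback score, and a tester's rating is the average of those scores. The class, field names, and 1-to-5 scale below are illustrative assumptions, not uTest's actual scheme.

    # Illustrative only: a feedback-driven rating, assuming a 1-5
    # customer score attached to each accepted bug report.
    from dataclasses import dataclass, field

    @dataclass
    class Tester:
        name: str
        feedback_scores: list = field(default_factory=list)  # one score per accepted report

        @property
        def rating(self) -> float:
            # No accepted reports yet means no rating to show.
            if not self.feedback_scores:
                return 0.0
            return sum(self.feedback_scores) / len(self.feedback_scores)

    alice = Tester("Alice", [5, 4, 5])
    print(round(alice.rating, 2))   # 4.67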

Customers with products to test come to the company and, together with a service rep, design a test plan for their product. They choose testers based on the technologies those testers have at their disposal, on geographic location (if it's relevant), and finally on the ratings the testers have earned. A plan is then put together with, say, 100 testers, and within days the customer starts receiving bug reports.
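As a rough illustration of that selection step, the sketch below filters a tester pool by required technologies and region, then keeps the highest-rated candidates. The field names, the tiny pool, and the count are my assumptions for illustration; uTest's actual matching process isn't public.

    # Hypothetical tester selection: filter by technology and region,
    # then rank by rating and keep the top `count` testers.
    pool = [
        {"name": "A", "technologies": {"Windows 7", "IE8"}, "region": "North America", "rating": 4.6},
        {"name": "B", "technologies": {"Linux", "Firefox"}, "region": "India", "rating": 4.9},
    ]

    def select_testers(pool, required_tech, region=None, count=100):
        eligible = [t for t in pool
                    if required_tech <= t["technologies"]
                    and (region is None or t["region"] == region)]
        eligible.sort(key=lambda t: t["rating"], reverse=True)
        return eligible[:count]

    cycle = select_testers(pool, {"Windows 7", "IE8"}, region="North America")
    # Only tester "A" qualifies in this tiny example pool.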

The reports are filtered by uTest to strip out bugs the customer is not interested in (e.g., "we don't want any testing of the printing functions because they're currently being rewritten"). The customer runs the test cycle for as long as it wants. Says uTest's Matt Johnson: “It’s a form of test-on-demand.”
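The filtering itself amounts to triage against the exclusions agreed on in the test plan. Here is a minimal sketch, assuming each report is tagged with the product area it touches; the tags and structure are mine, not uTest's.

    # Drop reports against areas the customer excluded from the cycle,
    # such as the printing functions mentioned above.
    excluded_areas = {"printing"}

    reports = [
        {"id": 101, "area": "printing", "summary": "Margins wrong in print preview"},
        {"id": 102, "area": "login",    "summary": "Password reset link returns a 404"},
    ]

    delivered = [r for r in reports if r["area"] not in excluded_areas]
    # Only report 102 is passed along to the customer.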

A key concern in this arrangement might be confidentiality. uTest has signed agreements with all testers, and it encourages customers with sensitive products to get signed NDAs from the testers they use. In addition, uTest supports the use of watermarked software so that leakers can be identified and removed from the network. As Johnson notes, the prospect of losing access to the revenue stream generated by testing at home on your own schedule is generally motivation enough for contributors to respect NDAs.
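uTest doesn't describe how its watermarking works, but the idea can be sketched simply: stamp each tester's build with a token derived from that tester's identity, so a leaked copy points straight back to its recipient. The hashing scheme and secret below are purely illustrative assumptions.

    # Illustrative per-tester watermark: a short token derived from the
    # tester's ID, embedded in that tester's build.
    import hashlib

    SECRET = b"per-project-secret"   # hypothetical; keeps tokens unguessable

    def watermark_for(tester_id: str) -> str:
        return hashlib.sha256(SECRET + tester_id.encode()).hexdigest()[:12]

    builds = {tid: watermark_for(tid) for tid in ("tester-17", "tester-42")}

    # If a copy leaks, the token it carries identifies who received it.
    leaked_token = builds["tester-42"]
    leaker = next(tid for tid, tok in builds.items() if tok == leaked_token)
    print(leaker)   # tester-42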