“Instead of people talking about wanting to automate tests, about hooking virtualization capabilities into a development tool, you’ll see much more of a hub that can deploy and take advantage of what happens when you put automation and virtualization together,” said HP Software’s Emo. “It’ll enable automatic provisioning of virtual services you’ve discovered from your application architecture and make it available for your tester. Once the defect is found, you can automatically roll up that defect combined with a virtual service so your developer has a single environment to work with the next day.”

Automation. Virtualization. The amalgamation of developers and testers in a more fluid, concurrent software development life cycle. They’re all elements in the shift to continuous testing, which, if SOASTA’s Tom Lounibos’ vision comes to fruition, may resemble something like “The Matrix.”

“Picture that concept of living in a world that’s actually a computer program, and if we’re in a meeting of 10 people, only two are real and the rest are computer generated,” he said. “That’s how we see testing in the future: a test matrix. There’ll be real people on your website or application, but there will be a constant flow of fake users anticipating problems of the real ones. Imagine virtual users trying to get ahead of real users’ actual experiences. That’s where continuous testing is going.”

Best practices for continuous testing
As organizations and testing providers transition from manual to continuous testing, a new set of best practices is vital in keeping testing teams on track, optimizing resources and delivering a working application at the speed of agile.

• Daily, targeted testing: Gigantic, exhaustive tests are ineffective. Daily load tests with low volumes of concurrent users can help uncover smaller scaling issues early, and targeted sample testing of software on a representative set of OSes, devices, carriers and applications is more effective and cheaper than running through thousands of test cases in every single environment (a sampling sketch follows this list).

• Test in production: Rather than testing in a controlled lab setting, testing in production (while real users browse a website or application) gives the most accurate indication of how a piece of software will perform.

• Scale test volume: Break the test suite into smaller chunks of tasks that run in parallel alongside the automated deployment. Smaller chunks are easier to execute and debug without human intervention (a parallel-chunking sketch follows this list).

• Diagnose the root cause: Whether a test passes, fails or produces a critical bug report matters less than finding the root cause of the failure in the code. When testers diagnose the root cause, engineers and testers stop wasting time and resources chasing symptoms.

• Don’t lose sight of SLAs: Put service-level agreements on a task board or list of constraints so that, every time a test or build runs, testers know which SLAs the new application, features or functionality have to pass. This keeps application quality up while maintaining development speed (an SLA check is sketched after this list).

• Nightly and end-of-sprint testing: Continuously integrated builds undergo automated testing whenever a developer pushes code to a repository, but running larger tests at specific times is still valuable. During a nightly build, run a full site or application load test at whatever volume you expect the user base to reach at any given time. Then, toward the end of an iteration or sprint, stress the application to its breaking point to set a new bar for how many concurrent users it can handle (a load-test sketch follows this list).
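
As one illustration of targeted sample testing, the minimal sketch below runs a single representative check against a small sample of device and OS combinations rather than the full matrix. It assumes Python with pytest; the staging URL, device sample and user-agent strings are hypothetical placeholders, not details from this article.

# A minimal sketch of targeted sample testing, assuming Python and pytest.
# The URL, device sample and user-agent strings are hypothetical placeholders.
import urllib.request

import pytest

TARGET_URL = "https://staging.example.com/"  # hypothetical environment

DEVICE_SAMPLE = [
    # (label, User-Agent header): a representative sample, not every OS/device/carrier
    ("ios-safari", "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)"),
    ("android-chrome", "Mozilla/5.0 (Linux; Android 14; Pixel 8)"),
    ("desktop-chrome", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"),
]

@pytest.mark.parametrize("label,user_agent", DEVICE_SAMPLE)
def test_home_page_renders(label, user_agent):
    # One representative check per sampled device/OS combination.
    request = urllib.request.Request(TARGET_URL, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(request) as response:
        body = response.read()
    assert response.status == 200 and body, f"Empty or failed response for {label}"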
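
The parallel-chunking idea can be sketched in a few lines. The sketch below splits a hypothetical test suite into chunks and runs each chunk in its own process; the file names, chunk count and the pytest runner are assumptions for illustration, not a prescribed implementation.

# A minimal sketch of splitting a test suite into parallel chunks.
# Test file names, chunk count and the pytest runner are illustrative assumptions.
import subprocess
from concurrent.futures import ThreadPoolExecutor

TEST_FILES = [
    "tests/test_login.py",     # hypothetical test modules
    "tests/test_checkout.py",
    "tests/test_search.py",
    "tests/test_profile.py",
]
CHUNKS = 2  # number of parallel workers

def chunk(files, n):
    # Deal the files round-robin into n roughly equal chunks.
    return [files[i::n] for i in range(n)]

def run_chunk(files):
    # Run one chunk in its own process and return its exit code.
    return subprocess.run(["pytest", *files]).returncode

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CHUNKS) as pool:
        exit_codes = list(pool.map(run_chunk, chunk(TEST_FILES, CHUNKS)))
    # The build fails if any chunk fails; no human intervention needed.
    raise SystemExit(max(exit_codes))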
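
Keeping an SLA visible can also mean encoding it as an automated check that runs with every build, as in the rough sketch below. The endpoint, threshold and sample size are hypothetical placeholders, not figures from the article.

# A minimal sketch of an SLA check that runs with every build.
# The endpoint, threshold and sample size are hypothetical placeholders.
import time
import urllib.request

URL = "https://staging.example.com/api/health"  # hypothetical endpoint
SLA_SECONDS = 0.5                               # e.g. "95th percentile under 500 ms"
SAMPLES = 20

def response_time(url):
    # Time a single request, including reading the body.
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

def test_p95_response_time_meets_sla():
    times = sorted(response_time(URL) for _ in range(SAMPLES))
    p95 = times[int(0.95 * (SAMPLES - 1))]
    assert p95 <= SLA_SECONDS, f"p95 of {p95:.3f}s breaks the {SLA_SECONDS}s SLA"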
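
For the nightly and end-of-sprint load tests, a scheduled script might look like the sketch below. It assumes the open-source Locust load-testing tool rather than any product mentioned in this article, and the endpoints, host and user counts are placeholders.

# A minimal sketch of a scheduled load test, assuming the open-source Locust tool.
# Endpoints, host and user counts are illustrative placeholders.
from locust import HttpUser, task, between

class BrowsingUser(HttpUser):
    # Simulated visitors pause one to five seconds between actions.
    wait_time = between(1, 5)

    @task(3)
    def view_home(self):
        self.client.get("/")

    @task(1)
    def search(self):
        self.client.get("/search?q=shoes")

# Nightly, low-volume run (e.g. triggered by cron or the CI scheduler):
#   locust -f load_test.py --headless -u 50 -r 5 --run-time 15m --host https://staging.example.com
# End of sprint: raise -u until the application breaks to set the new concurrency bar:
#   locust -f load_test.py --headless -u 5000 -r 100 --run-time 30m --host https://staging.example.com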