Adam Kolawa

The need to replace constrained resources in development/test environments has entered the spotlight recently, but it’s been bubbling under the surface for some time now.

The embedded development market is the first place where resolving resource constraints emerged as a vital aspect of testing. Here, the ultimate goal for time- and cost-efficient product delivery is to develop the software in parallel with the hardware. In order to thoroughly test the software before the hardware is built and available for testing, something has to stand in for the hardware. Otherwise, it’s impossible to validate application paths that involve that hardware (and because hardware and software are so closely interlinked in this market, that leaves a significant number of use cases that cannot be exercised).

To get around this, engineers commonly mimic the behavior of the unavailable hardware—either by using a simulator or by using stubs to replace the parts of the hardware that interact with the software.
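As a minimal sketch of the stubbing approach described above, the following Python example stands a canned stub in for an unavailable piece of hardware. The sensor interface, class names and readings are all hypothetical, invented purely for illustration; a real embedded stub would typically live in C against the driver API, but the principle is the same.

```python
# Hypothetical example: the sensor interface, names and readings are
# invented for illustration, not taken from any real driver API.

class TemperatureSensor:
    """Interface the application code depends on."""
    def read_celsius(self) -> float:
        raise NotImplementedError

class StubTemperatureSensor(TemperatureSensor):
    """Stands in for hardware that is not yet built: replays canned readings."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read_celsius(self) -> float:
        return next(self._readings)

def overheating(sensor: TemperatureSensor, limit: float = 85.0) -> bool:
    """Application logic under test; it never knows the hardware is fake."""
    return sensor.read_celsius() > limit

sensor = StubTemperatureSensor([72.0, 90.5])
print(overheating(sensor))  # first canned reading, 72.0 -> False
print(overheating(sensor))  # second canned reading, 90.5 -> True
```

Because the application code is written against the interface rather than the concrete driver, every hardware-dependent path can be exercised long before the board exists.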

Staging systems for Web development were another precursor. As I wrote in “Bulletproofing Web Applications” back in 2001, Web developers needed to create a “sandbox” where they could deploy and test their latest application changes without impacting the live application.

Essentially, the latest version of the component being developed or modified could then be run alongside “staging” versions of other system components (databases, app servers, etc.) in order to verify that the new application functionality worked as expected. As I recommended back then, the staging environment did not have to replicate the full production environment exactly. It could simply provide the bare minimum of functionality needed to thoroughly exercise the operations that the development and QA teams needed to test.

Today, working around resource constraints is more important than ever, since most enterprise systems are distributed heterogeneous systems whose many components are developed, deployed and evolved beyond your immediate control. You’ve got to overcome such constraints as:
• Missing/unstable components
• Evolving development environments
• Inaccessible partner systems and services
• Systems that are too complex for test labs
• Internal and external resources with multiple “owners”

The 80/20 rule of thumb provides an interesting perspective on this situation. The part of the system that lies outside of your control is typically around 20%. However, this 20% usually comprises core functionality, so the inability to access this 20% can impact nearly 80% of the use cases that are essential for exercising the application under test. This significantly impedes your ability to validate the system and uncover functionality, reliability and performance problems before users do.

One way around this is to use hardware or OS virtualization technology to completely mirror 100% of those missing or evolving components. However, even when components are virtualized, there is still significant overhead involved: You still need to manage and maintain all the appropriate configurations and data. For large mainframe applications, third-party applications or ERPs, the costs of doing this often outweigh the benefits.

A more focused and efficient strategy is to use what I call “Application Behavior Virtualization.” Instead of trying to replace the whole dependent component—the entire database, the entire third-party application, and so forth—you replace only the component behavior that is directly related to the application under test.

For instance, instead of virtualizing an entire database, you monitor how the application interacts with the database, then you virtualize the related database behavior (the SQL queries that are passed to the database, the corresponding result sets that are returned, and so forth).
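The record-and-replay idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not Parasoft's implementation: the class, the query strings and the result sets are all invented, and a real tool would capture the query/result pairs automatically by monitoring live traffic rather than registering them by hand.

```python
# Hypothetical sketch of virtualizing database *behavior*: only the
# query/result pairs the application actually uses are replayed.
# Queries and rows below are invented for illustration.

class VirtualizedDatabase:
    """Replays recorded result sets instead of hitting a real database."""
    def __init__(self):
        self._recordings = {}

    def record(self, sql: str, result_set):
        # In a real tool, these pairs would be captured by monitoring
        # traffic between the application and the live database.
        self._recordings[sql.strip().lower()] = result_set

    def execute(self, sql: str):
        key = sql.strip().lower()
        if key not in self._recordings:
            raise LookupError(f"No recorded behavior for query: {sql!r}")
        return self._recordings[key]

db = VirtualizedDatabase()
db.record("SELECT id, name FROM customers WHERE region = 'EU'",
          [(1, "Alice"), (2, "Bjorn")])

# The application under test issues the same query and gets the
# recorded rows back, with no live database required.
rows = db.execute("select id, name from customers where region = 'EU'")
print(rows)
```

Only the handful of queries the application actually issues need to be recorded, which is exactly why such a small slice of the dependent resource suffices.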

The 80/20 rule applies here too. Remember the 80% of application use cases that tend to interact with dependent resources? You typically need to virtualize only 20% (or less) of each dependent resource in order to fully exercise all those otherwise-impacted use cases.

The end result? By virtualizing just a sliver of the dependent components’ behavior, you gain the freedom to test 100% of your application’s use cases whenever you want, from wherever you want. You increase the scope of your testing with minimal effort and significantly reduced costs. It’s virtually impossible to find a faster and easier way to optimize testing resources.

(Editor’s note: This Guest View was written shortly before Adam Kolawa passed away in April 2011.)

Adam Kolawa (1957-2011) was founder and CEO of Parasoft, which sells tools to improve software quality across the development life cycle.

Kolawa was the author of “Bulletproofing Web Applications” (2001), “Automated Defect Prevention: Best Practices in Software Management” (2007) and “The Next Leap in Productivity: What Top Managers Really Need to Know about Information Technology” (2009).