In the race to implement continuous (or simply faster) delivery, the build-and-deploy side has gotten most of the press while continuous testing has languished, according to technology analyst Theresa Lanowitz. All too often, the first technologies associated with virtualization are VMware, hypervisors, Microsoft Azure or the cloud.

“Part of the problem with service virtualization is that the word ‘virtualization’ has a strong attachment to the data center. Everyone knows the economic benefits of server virtualization,” said Lanowitz, founder of analyst firm Voke, who’s been talking about the topic since 2007. “A lot of people don’t know about service virtualization yet… It’s moved from providing virtual lab environments to being able to virtualize a bank in a box or a utility company in a can so there are no surprises when you go live.”

No time for testing
Testing remains a bottleneck for development teams, or, worse, a luxury. Just ask Frank Jennings, TQM performance director for Comcast. Like many test professionals these days, he faces many scenarios for exercising a diverse array of consumer and internal products and systems.

“The real pain point for my team was staging-environment downtime,” he said in an October 2012 webinar moderated by SD Times editor-in-chief David Rubinstein. “Often, downstream systems were not available, or other people accessing those dependent systems affected test results.”

(Service Virtualization: A webinar from SD Times)

Automating the test portion of the life cycle is often an afterthought, however. “People are looking for operational efficiency around the concepts of continuous release. We walk into companies that say ‘We want to go to continuous release,’ and we ask, ‘What are your biggest barriers?’ It’s testing,” said Wayne Ariola, chief strategy officer for Parasoft, a code quality tool vendor.

“Today, I would say 90% of our industry uses a time-boxed approach to testing, which means that the release deadline doesn’t change, and any testing happens between code complete and deadline. That introduces a significant amount of risk. The benefit of service virtualization is you can get a lot more time to more completely exercise the application and fire chaos at it.”
The costs of consumerization
While speed to market tantalizes, the costs of failure are ever greater as software “eats the world” (as Marc Andreessen put it). Even the most mundane of industries is not immune to the powerfully fickle consumer.

Ariola uses the example of a bank that couldn’t innovate fast enough when its competitors made game-changing moves such as automating savings or creating smartphone check deposits. “When one bank advertised the feature of taking a picture of your check with your phone to deposit it, all the other consumer banks had to get this thing pretty fast. You need speed to differentiate in business, but the risk of failure is so much higher,” he said.

If a competitor innovates, even being third to market is better than failing to match them. Worse, however, is launching a defective software product, as consumers will loudly discard it and embrace the one that works, Ariola said.

Unfortunately, it may also be the case that testing just isn’t sexy, even after all the agile contortions of software development methodologists. Like fact-checking, “It’s hard to make it fun,” said Ariola. “But imagine being involved with an application that interacts with a broader set of capabilities. Traditionally, your hands are tied until your code and everyone else’s is complete. Imagine using simulation technology which allows you to test your component in a complete environment any time you want.”
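What that simulation looks like in practice varies by vendor, but the core idea can be sketched with nothing more than a canned stub service. The example below is a minimal sketch using only Python’s standard library; the account-balance endpoint and payload are invented for illustration, not taken from any real product. It stands in for an unfinished downstream dependency so the component that calls it can be exercised at any time:

```python
# Minimal sketch of a "virtual" dependency: a stub HTTP service that returns
# canned responses, so the component under test can run before the real
# downstream system is code complete. Paths and payloads are illustrative.
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

CANNED = {
    "/accounts/1234/balance": {"accountId": "1234", "balance": 2500.00, "currency": "USD"},
}

class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_error(404, "No canned response for this path")
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

if __name__ == "__main__":
    server = ThreadingHTTPServer(("localhost", 8099), VirtualServiceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The component under test would point at http://localhost:8099 instead of
    # the real downstream system; here we simply verify the stub answers.
    with urlopen("http://localhost:8099/accounts/1234/balance") as resp:
        print(json.load(resp))

    server.shutdown()
```

Commercial tools add recording, protocol support and management on top of this basic pattern, but the testing benefit Ariola describes comes from the same substitution.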

Virtualizing the life cycle
That’s the promise of a new wave of “extreme automation,” according to Lanowitz, in which services are the latest target of life-cycle virtualization. Starting around 2005, “virtual lab automation” emerged for quality assurance teams from vendors such as Skytap, Akimbi (acquired by VMware) and Surgient (acquired by Quest).

Lanowitz characterized the market as being in its third generation, and it’s beginning to encompass the entire pre-production life cycle. Virtualized services include middleware or service-oriented architecture transactions, databases, mainframes, and other devices. Today, development or QA teams seeking to simulate these resources have four main vendor choices:
• CA Technologies, which acquired ITKO’s LISA technology in 2011. ITKO was the first to bring the term “service virtualization” to market, in 2007.
• HP, which introduced HP Service Virtualization in 2011.
• IBM, which acquired Green Hat and its Virtual Integration Environment in 2012.
• Parasoft, which evolved its 2002 stub server product first into Parasoft SOAtest, and then into Parasoft Virtualize, launched in 2011.
A stitch in test time
Beyond the cost savings from eliminating physical testing infrastructure and labs, the biggest savings come from reduced wait time, according to Comcast’s Jennings. Though his team took a year to implement its service virtualization solution, he doesn’t regret the choice to buy a tool for that purpose.

“We had a homegrown platform for SOA-based transactions in our ESB. We had seen the value, but to roll it out quickly and expand utilization, a decision was made to buy vs. build,” he told his webinar audience. A proof of concept included baseline tests with and without virtualization to show “not only are we more consistent and predictable, but we’re also representative of not just live data responses, but performance responses.”
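Jennings didn’t detail how Comcast’s tooling works internally, but the “performance responses” point can be illustrated with a simple record-and-replay sketch: each recorded interaction keeps both the live payload and the observed latency, so the virtual service mimics timing as well as data. The paths, payloads and latencies below are invented for illustration:

```python
# Conceptual record-and-replay sketch: the virtual service reproduces the
# recorded latency as well as the recorded payload, so responses are
# representative of performance, not just of data.
import time

# Recordings captured against the live dependency: path -> (payload, seconds)
RECORDINGS = {
    "/billing/statement": ({"status": "OK", "amountDue": 89.99}, 0.42),
    "/provisioning/check": ({"status": "PENDING"}, 1.80),
}

def replay(path):
    """Return the recorded payload after sleeping for the recorded latency."""
    payload, latency = RECORDINGS[path]
    time.sleep(latency)  # reproduce observed timing, not just content
    return payload

if __name__ == "__main__":
    start = time.perf_counter()
    print(replay("/provisioning/check"), f"{time.perf_counter() - start:.2f}s")
```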

The bottom line? In 18 months (starting in 2012), Comcast has seen a roughly 60% reduction in environment downtime for Jennings’ team, which translates to more than half a million dollars in savings. That speed-up is in line with the ROI Lanowitz found in a recent survey of service virtualization customers: Sixty-four percent of respondents had a 50% to 100% reduction in wait time, thanks to the greater availability of services.

“We’re able to go deeper in the testing we’re doing, go deeper into performance engineering, and expand our footprint of systems under test to third-party applications,” Jennings explained in the webinar. One risk associated with this strategy is that of not keeping the virtualization asset up to date. “There’s a maintenance aspect, but that becomes part of the life cycle,” he said.

That experience dovetails with Lanowitz’s advice to expand service virtualization beyond internal dev and test teams, exploding functional silos and bringing quality into the equation throughout the pipeline. “Take those virtualized assets that you’re testing with and give them to your software supply chain,” she said.

Use cases for service virtualization
Ariola outlined four major use cases for service virtualization. First, agile development teams may be working on highly integrated but separate products. “One component may be dependent on multiple components. You can simulate the interaction, isolate the project better and test more completely. It’s good for parallel development efforts or agile teams where you have shorter sprints and are trying to throw out complete products,” he said.
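A rough sketch of that isolation pattern, in Python with invented component names (LoanApprover, SimulatedCreditCheck, SimulatedFraudCheck, none of which come from the article), shows how a team can exercise its own component while the dependencies are still being built in parallel sprints:

```python
# The component under test talks to its dependencies through a thin interface,
# so simulated implementations can be swapped in while the real services are
# still under construction by other teams. All names and logic are illustrative.

class SimulatedCreditCheck:
    def score(self, customer_id: str) -> int:
        return 720                      # canned "happy path" answer

class SimulatedFraudCheck:
    def is_suspicious(self, customer_id: str) -> bool:
        return False                    # canned answer; flip to test the other branch

class LoanApprover:
    """Component under test; depends on two services owned by other teams."""
    def __init__(self, credit, fraud):
        self.credit = credit
        self.fraud = fraud

    def approve(self, customer_id: str) -> bool:
        return self.credit.score(customer_id) >= 650 and not self.fraud.is_suspicious(customer_id)

if __name__ == "__main__":
    approver = LoanApprover(SimulatedCreditCheck(), SimulatedFraudCheck())
    assert approver.approve("cust-42") is True
    print("LoanApprover exercised against simulated dependencies")
```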

Second, performance testing shines when service virtualization “helps you have more control over your dependent applications,” said Ariola. “You can test load from virtual users and get very consistent dependent system responses. Not only can you get control over the environment, what you really want to do is test the corner cases.”
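One way to picture that control is a virtual dependency configured with a pinned response delay and an injected failure rate, so every virtual user in a load test sees the same, repeatable downstream behavior. The sketch below uses only Python’s standard library, and the delay and error-rate settings are illustrative assumptions, not recommendations:

```python
# A virtual dependency for load testing: latency is pinned and a small
# percentage of calls fail on purpose, so corner cases can be tested on demand.
import json
import random
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

RESPONSE_DELAY_SECONDS = 0.250    # pin the dependency's latency
ERROR_RATE = 0.05                 # corner case: 5% of calls return HTTP 503

class CornerCaseHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(RESPONSE_DELAY_SECONDS)
        if random.random() < ERROR_RATE:
            self.send_error(503, "Injected failure for corner-case testing")
            return
        payload = json.dumps({"status": "OK"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep load-test output quiet
        pass

if __name__ == "__main__":
    # Runs until interrupted (Ctrl-C); point the load generator at port 8100.
    ThreadingHTTPServer(("localhost", 8100), CornerCaseHandler).serve_forever()
```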

Third, mobile application development is an obvious area for service virtualization: testing against geolocation limitations, constrained bandwidth, jitter and packet loss, and transactions over channels such as SMS, telephony, JSON or REST-based calls; in short, anything transactional.
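The network-conditioning side of that use case can be approximated with a small shim that adds jitter, drops a fraction of calls and throttles throughput before a response is delivered. The figures and the degrade() helper below are illustrative assumptions, not measured mobile-network values:

```python
# Simulate a flaky mobile link in front of a REST dependency: random jitter,
# a packet-loss-like drop rate, and a crude bandwidth cap. Values are illustrative.
import random
import time

JITTER_RANGE = (0.05, 0.60)   # seconds of added latency, varies per call
DROP_RATE = 0.10              # fraction of calls that simply fail
BANDWIDTH_BPS = 64_000        # throttle: bytes per second the "network" allows

def degrade(payload: bytes) -> bytes:
    """Deliver payload the way a slow, lossy mobile connection might."""
    if random.random() < DROP_RATE:
        raise TimeoutError("simulated packet loss / dropped connection")
    time.sleep(random.uniform(*JITTER_RANGE))      # jitter
    time.sleep(len(payload) / BANDWIDTH_BPS)       # bandwidth-limited transfer
    return payload

if __name__ == "__main__":
    body = b'{"result": "ok"}' * 200
    try:
        print(f"delivered {len(degrade(body))} bytes under degraded conditions")
    except TimeoutError as exc:
        print(f"request lost: {exc}")
```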

Fourth, there’s end-to-end functional testing. “Traditionally, people were cutting off what they could do depending on the environment,” said Ariola.
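In practice this often comes down to how the test suite resolves its endpoints: each dependency points at either the live system or a virtual stand-in, so a full end-to-end pass no longer waits on whichever downstream system happens to be unavailable. The sketch below is a hypothetical configuration switch; the service names, URLs and USE_VIRTUAL variable are invented for illustration:

```python
# Resolve each dependency to a live endpoint or a virtual stand-in, so
# end-to-end functional tests can always run. All names/URLs are illustrative.
import os

LIVE = {
    "billing":      "https://billing.internal.example.com",
    "provisioning": "https://provisioning.internal.example.com",
}
VIRTUAL = {
    "billing":      "http://localhost:8099",   # virtual services from the earlier sketches
    "provisioning": "http://localhost:8100",
}

def endpoint(name: str) -> str:
    """Pick the virtual stand-in for any service listed in USE_VIRTUAL."""
    use_virtual = os.environ.get("USE_VIRTUAL", "billing,provisioning").split(",")
    return VIRTUAL[name] if name in use_virtual else LIVE[name]

if __name__ == "__main__":
    for service in ("billing", "provisioning"):
        print(service, "->", endpoint(service))
```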