Service virtualization has gotten short shrift over the course of its lengthy history. Whether you chart its inception in 2002 with the release of Parasoft’s Stub Server, or in 2007 when CA took up the banner and built a market around the term, the concept has yet to attain even buzzword status.

That could be a good thing, however, as buzzwords can burn the ears of any manager allocating his or her annual budget for new team tooling. Instead, service virtualization has remained a somewhat unknown but fairly reliable path toward saving developers time and money.

Theresa Lanowitz, founder of research firm Voke, said that service virtualization is proven to bring ROI to development managers. “We know the return on investment is tremendous. It really enhances that collaboration. Everything we’ve been hearing for the last 10 years is about collaboration between QA and development. Service virtualization takes those barriers down and lets the teams be completely aligned,” she said.

But what is service virtualization, exactly? Essentially, it manifests as a server within your test environment that can replicate the streams of information that make up the various services used in applications. In practice, this means simulating third-party services, internally hosted services, and even broken services for the purpose of testing against real-world traffic scenarios.

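At its simplest, for an HTTP-based service, a virtual service is little more than a program that hands back canned responses in place of the real dependency. The minimal sketch below illustrates the idea only; the /rates endpoint and its payload are invented for the example, and commercial tools layer recording, broad protocol support, and management on top of this basic pattern.

```python
# Minimal sketch of a virtual service: an HTTP stub returning canned
# responses in place of a real dependency. The /rates endpoint and its
# payload are hypothetical, not any vendor's API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"/rates": {"USD": 1.0, "EUR": 0.92}}  # recorded or faked data

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_error(404)  # endpoint we never recorded
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Tests point at http://localhost:8080 instead of the live service.
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```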
Why simulate these services? Robert Wagner, product owner for orchestrated service virtualization at Tricentis, said that the average enterprise is filled with services. “You lose a lot of time testing when you have complex business processes. On average, there are about 30 services in bigger companies.”

With at least 30 services to test against, it makes sense to automate the simulation of those data streams rather than maintain a separate codebase for a test version of each service, as sketched below.

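The usual way to automate that simulation is record and replay: capture each real response once, then serve the saved copy in every later test run. Here is a hedged sketch of the idea, assuming simple HTTP services; the URL and path are placeholders, and real tools capture at the protocol level with far more fidelity.

```python
# Sketch of record and replay: call the real service once, save what
# it returned, and answer from the recording in later test runs.
import urllib.request

RECORDINGS: dict[str, str] = {}  # path -> recorded response body

def record(base_url: str, path: str) -> None:
    """Hit the real service once and keep its response."""
    with urllib.request.urlopen(base_url + path) as resp:
        RECORDINGS[path] = resp.read().decode()

def replay(path: str) -> str:
    """Answer from the recording; the real service is never touched."""
    return RECORDINGS[path]

# record("https://partner.example.com", "/rates")  # done once, online
# replay("/rates")                                 # used by tests, offline
```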
That being said, moving to a testing plan that includes service virtualization is not something that can be done overnight. There are many ways to get started, but at the end of the day, the real way to succeed with service virtualization is to treat it as another process in your development life cycle.

Wayne Ariola, chief strategy officer for Parasoft, said that traditional IT is “used to adopting tools in an ad hoc manner, but service virtualization requires a process collaboration. It’s not magic: You have to put the time into it to get the value out of it.”

Once developers have adopted the practice, however, Ariola said they are able to “find bugs in [their] development phase, where everyone is developing their own components isolated from the others.”

Getting started
Building service virtualization into your software development life cycle isn’t nearly as difficult as spreading the capability to an entire organization, thankfully. Lanowitz suggested that the endgame can be intimidating for large enterprises, but the effort is worth it. “There are many organizations that say, ‘I am not ready for this type of thing.’ Ideally and ultimately, what you want is that for every piece of source code checked in, you want that virtualized asset to go with it,” she said.

Lanowitz suggested starting out small. “An easy way to start is by pinpointing what types of base components would benefit from virtualization. You could say, ‘For anything that is fee-based, we’re going to use service virtualization. What types of third-party assets do we use that we don’t own?’ Virtualize those third-party elements.”

Of course, the services don’t have to be external to warrant virtualization. Lanowitz said that an enterprise could also start out by virtualizing its core services—those that are used frequently across the organization. The more widely used the service, the more likely all the corners of the organization will come forward to take advantage of the virtualized version to test against.

Another way to get started is along your supply chain, said Lanowitz. “You could say, ‘We’re going to start with one project and work across our software supply chain and require everyone in the supply chain use service virtualization.’ ”

Stefana Muller, project-management leader at CA Technologies, said that starting out with service virtualization doesn’t have to mean testing it out on smaller projects. She asked, “What is your big transformational project? Find one project you can start with that can show you return on investment quickly. It will prove itself there. Customers are dealing with these constraints in other ways: by building throwaway code, wasting time waiting, and spending money to get things done quickly. The ways we help them achieve the benefit of service virtualization is we find the benefit that will change their business. Once you do that with one project, it’s very easy to expand to others.”

Indeed, the benefits of service virtualization are best felt when the practice is spread to an entire organization. This is because most of those services being virtualized are used by multiple applications, and thus virtualizing them can bring time savings to teams across the organization. But this can lead to complexity as your organization learns how to properly roll out service virtualization as a service itself.

Muller advocated for the creation of a center of excellence within the organization to help push the process through to the edges of the enterprise. “Once you get to a maturity curve, with four or five projects using service virtualization, you’re probably going to want to have a center of excellence so you can share virtual services among teams, rather than building one for each and every one. We sometimes use the term ‘Center of Competency,’ as the center learns how to derive value from service virtualization,” she said.

Whose money?
Perhaps the biggest impediment to service virtualization uptake in the enterprise is that it falls into one of those nebulous gray areas of budgeting. The QA team, the Ops team and the development teams all have their own budgets, yet service virtualization could fall into any of their laps as a responsibility.

Parasoft’s Ariola has a theory as to why. While he doesn’t speak for Parasoft on this topic, he believes that “there is no central entity within large development organizations who [owns] the concept of quality. You have a center of testing, but those are usually tool-oriented. There’s this idea that quality is shared, which is great, but nobody owns the definition. If you start asking about non-functional requirements, it’s blown apart across so many different groups [that] it’s not necessarily true.”

Ariola partially blamed agile for this erosion of quality control in the enterprise. “Agile, although valuable, has blown apart the concept of quality because it focuses the team on the user stories in the timeframe they are due, versus thinking more about the programmatic quality it needs to hit as it goes toward production.”

To that end, said Ariola, service virtualization can help spread quality across the development life cycle by pushing bug detection into earlier phases of a project. Rather than surfacing during the systems integration phase, service integration bugs are found during the standard development process, he said.

Tricentis’ Wagner agreed that finding the right budget to pay for service virtualization has been tricky. “When we got started, the reason that we had problems was because we were focused mainly on test teams. It took a while until companies realized they could save a lot of money with service virtualization,” he said.

This was because test teams typically relied on the Ops teams to build their environments. Though Tricentis was selling a tool useful to QA, it was a server product, so Ops was often the buyer who showed up at the table.

Once these companies realized that service virtualization was more appropriately categorized under the testing budget, they also realized they could replace their test labs, said Wagner.

He said that, compared to the cost of a test lab, service virtualization can offer vast savings. “[A test lab] is nothing compared to service virtualization. [Service virtualization] is much cheaper and much more flexible, and you can also do negative testing with service virtualization. You can go to your guys running this test lab and ask them to deploy a broken service so you can test negative scenarios,” he said.

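What “deploy a broken service” can look like in code, again assuming plain HTTP: the stub below fails on demand, letting tests exercise error handling that a healthy dependency would never trigger. The failure modes and port are illustrative, not any product’s feature set.

```python
# Sketch of negative testing with a virtual service: a stub that can
# be switched into failure modes a healthy real service won't produce.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

FAILURE_MODE = "error"  # one of: "ok", "error", "slow"

class BrokenService(BaseHTTPRequestHandler):
    def do_GET(self):
        if FAILURE_MODE == "error":
            self.send_error(500, "Simulated outage")  # hard failure
        elif FAILURE_MODE == "slow":
            time.sleep(30)        # hold the connection to force timeouts
            self.send_error(504)
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"{}")

if __name__ == "__main__":
    HTTPServer(("localhost", 8081), BrokenService).serve_forever()
```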
Lines of control
Service virtualization can also instigate a drive for service orchestration in the enterprise. Red Hat’s OpenShift platform includes orchestration through Kubernetes, and thus can handle the deployment duties that many service virtualization efforts require.

Joe Fernandes, OpenShift product director at Red Hat, said that OpenShift’s “latest incarnation, version 3, is completely rebuilt around Docker and Kubernetes. Docker is the container runtime and packaging format, and Kubernetes is the orchestration engine for managing the services and for determining where they should run and how they should run.”

Wagner said that “Orchestration, in modern applications, is really necessary because you have a certain business flow. This can’t come before that, and so on. This business flow needs to go off of a specific description of a business flow. Tricentis OSV can model business scenarios running on the backend over different technologies. OSV proves that the flow is in the right order and distributes the messages to the system where it’s intended to be sent. One difference we have [from] others is we model these business flows. You can run these over multiple systems and mimic stateful behavior.”
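
Tricentis does not publish OSV’s internals, but the stateful behavior Wagner describes can be pictured as a small state machine: each message in a business flow is accepted only after its predecessor has arrived. The sketch below uses invented step names and stands in for any such tool.

```python
# Hedged sketch of stateful virtualization: a virtual service that
# enforces the order of a business flow. Step names are invented.
FLOW = ["create_order", "reserve_stock", "charge_card", "ship"]

class StatefulVirtualService:
    def __init__(self) -> None:
        self.step = 0  # index of the next message we expect

    def receive(self, message: str) -> str:
        if self.step >= len(FLOW):
            return "ERROR: flow already complete"
        expected = FLOW[self.step]
        if message != expected:
            # Out-of-order traffic is reported, not silently accepted.
            return f"ERROR: got {message}, expected {expected}"
        self.step += 1
        return f"OK: {message} accepted"

svc = StatefulVirtualService()
print(svc.receive("create_order"))  # OK: create_order accepted
print(svc.receive("charge_card"))   # ERROR: expected reserve_stock
```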

The end goal for service virtualization is, as with most tools and practices in software development, to save money and time for everyone involved. “Once you start crossing partners or groups, it becomes really valuable,” said Ariola.

“If you’re developing your system, and everyone is dependent upon system B, having a group of people accessing a simulated instance of system B is really valuable. They’re all testing against a set of assumptions, so that level of consistency allows for that level of acceleration. If you grow the breadth of your test suite, it allows you to test more in an end-to-end fashion.”

Bringing service virtualization into an enterprise may be intimidating, but once you get going, Lanowitz said it becomes a comfortable part of the development life cycle. “It’s not that difficult. Once you bring service virtualization into your environment, you can very quickly replicate those environments. Software vendors will say ‘It takes this long to create a virtual service,’ and those numbers are accurate,” she said.

Users of service virtualization, said Lanowitz, “all say once they use it in their workflow, they don’t even think about it. You’re able to test more, make changes more easily, and get something you might not think is ready or available yet into the testing cycle. This takes down the whole idea you can never do anything until you have everything, and you never have everything until you’re ready to deploy. Service virtualization gives you access to those things that are unavailable or incomplete.”

Lanowitz sees a bright future ahead for service virtualization. She said that she “hopes it’s going to continue to spread. We’ve done in-depth research on this in 2015 and 2012. We saw the adoption rate increase, and as we move to the cloud, I would expect service virtualization to be part and parcel of a larger tool set you’d use, like release automation. We’ll see it integrated with development and test platforms. You might see it integrated with other tools along the way.”