Service virtualization has helped countless organizations perform tests on application components that live outside their development organizations, or that are not available to the tester when needed to complete their tests.

Virtualization enables organizations to put up a virtual service more easily than they can “yank a box on an Amazon server,” explained Shamim Ahmed, DevOps CTO and evangelist at Broadcom. Yet today, service virtualization (SV) can be seen as a life cycle technology, empowering what Ahmed calls continuous virtualization. This, he said, “enables even developers doing parallel development right now, just for testing. That’s on the left-hand side. And on the right-hand side, we’ve seen extremes, like customers using service virtualization for chaos testing.”

SV helped early-adopting organizations decouple teams, said Diego Lo Giudice, vice president and principal analyst at Forrester, so that the customer could be decoupled from the client. But, he noted, “with organizations being broken up into small teams, and parallelizing, the work with Agile became very hard. Project managers thought they could manage that. And there’s no way you can really manage a bunch of small agile teams working; making sure that you synchronize them through project management is impossible. And so service virtualization was kind of used a bit to decouple, at least from the testing perspective.”

So, where is service virtualization being used beyond testing?

Service virtualization use cases

Lo Giudice said SV remains mainly a testing capability, though he is seeing accelerated use of SV in the API world. “I haven’t really gotten, you know, beyond the typical use cases of testing unreachable or expensive third-party resources,” he said, noting that the biggest use case he keeps seeing is virtualizing mainframe environments. “I love the example a CEO gave me that he was saving a lot of money with service virtualization simply because one of his teams, for testing purposes, couldn’t access the mainframe. They only had a window of 30 minutes a month, and they had to wait every time for those 30 minutes. With service virtualization, they were able to virtualize that access to the mainframe, and therefore the team now kind of had the virtual access to the mainframe available all the time.”

Using service virtualization with APIs, Lo Giudice said, is “just one of the types of testing that needs to be done; integration tests, that activity that can be automated, software delivery pipelines. I see it a lot there.”

Another area where service virtualization is being used is creating employee onboarding environments. Alaska Airlines uses Parasoft’s virtualization solution for its training, according to Ryan Papineau, a senior software engineer at the airline. With virtualization, he said, “we’re able to scale the amount of people that we have go through our training program.” While there are typically no test cases, Alaska can use the environment to see if users can perform certain tasks, and none of that gets recorded or impacts the production environment.

Service virtualization and test data management

But perhaps the biggest area of SV growth is in the test data management (TDM) space – a term that Papineau said is “kind of messy, because it can mean a lot of things.” It has become, in a word or two, a catch-all buzzword.

“We’ve been screening some new automation engineers, and they’ll put test data management on their resume. But you’ll never see any concept of any tools or techniques listed,” Papineau said. “What I believe that to be is they’re listing it to say, ‘Hey, I used data-driven tests and Excel,’ and I’m like, that’s not what I’m looking for. I’m looking for data structures and relationships and databases. And that life cycle of creation to modification to deletion. And using an ETL tool, or custom scripts, which we use separately.”

Papineau said that Parasoft’s solution essentially takes data, iterates it over APIs, records it and creates the relationships within the data. “You get this nice exploded, fancy UI that has all the relationships, and you can drill down and do cloning and subsetting, so it has a lot of the old traditional test data management aspects to it, but all within their context,” he said.

Broadcom’s Ahmed added that his company, which acquired the Lisa SV software developed by iTKO through its purchase of CA, is seeing much more synergy between service virtualization and test data management. “When we acquired Lisa, TDM was not that big. But now with all this GDPR, and all the other regulations around data privacy, TDM is really hard. And it’s one of the biggest problems the customers are grappling with.”

Ahmed believes SV and TDM go hand-in-glove. “The way they work together, I think, is another key evolution of how the use of service virtualization has evolved,” he said. “Using SV is actually one of the easier ways to do test data management. Because, you know, you can actually record the test data by recording the back and forth between a client and a server. So that gives you an opportunity to create lightweight data, as opposed to using the more traditional test data mechanisms, particularly so for API-based systems.”
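The record-the-traffic idea Ahmed describes can be sketched in a few lines. This is a minimal illustration of the concept only – the class names (`RecordingProxy`, `VirtualService`) and the toy service are hypothetical, not any vendor’s implementation:

```python
import json

class RecordingProxy:
    """Sits between a client and a real service, recording each exchange."""
    def __init__(self, real_service):
        self.real_service = real_service
        self.recordings = {}  # request -> response pairs captured in flight

    def call(self, request: str) -> str:
        response = self.real_service(request)
        self.recordings[request] = response  # the exchange becomes test data
        return response

    def export_test_data(self) -> str:
        # The recorded pairs double as lightweight test data for an emulator.
        return json.dumps(self.recordings)

class VirtualService:
    """Replays recorded responses; no live backend dependency needed."""
    def __init__(self, test_data: str):
        self.recordings = json.loads(test_data)

    def call(self, request: str) -> str:
        return self.recordings[request]  # KeyError if never recorded

# Record against the real (here, faked) service once...
proxy = RecordingProxy(lambda req: f"balance for {req}: 100")
proxy.call("account-42")

# ...then test against the emulator with no backend at all.
virtual = VirtualService(proxy.export_test_data())
print(virtual.call("account-42"))  # prints "balance for account-42: 100"
```

The point of the sketch is the shape of the workflow: the data a tester needs is captured as a side effect of normal traffic, rather than being hand-built with traditional TDM mechanisms.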

He noted that the use of SV reduces the tedium, “because creating the test data for a live application versus creating the test data for an emulator is a much lower amount of TDM burden for the testers and everybody else.”

System integrations

While much about service virtualization has remained unchanged over the years, much else has shifted, according to Lo Giudice. Developers are increasingly choosing open source, deciding they don’t need all the sophistication vendors are providing. “I’ve got data that shows the adoption of service virtualization has never really gone over 20%,” he said. “When you ask developers and testers, what is it that you’re automating around in 2022, I think the system integrators” are the only ones for whom this is key.

“It’s actually very useful” in integration projects, Lo Giudice said. “If you think about Lloyds Banking, a customer that’s got a complex landscape of apps, and you’re doing integration work with good partnerships going on,” service virtualization can be quite beneficial. “If you’ve got an app and it interfaces another 10 big apps, you’d better use service virtualization to automate that integration,” he said.

Integration projects between assets held on-premises and those residing in the cloud caused some hardships for Alaska Airlines, Papineau said. The problem, he said, stemmed from internal permissions and controls into the cloud. One of their developers was taking older data repository methods and deploying them to the cloud, and struggled with the internal permissions between on-prem and the cloud.

Papineau said organizations have to understand their firewalls and the access to servers. “Are your server and client both local? Are they both in the cloud, or does one have to traverse to the other?” Papineau said. “So what we did there is we stumbled on getting the firewall rules exposed, because now all of these different clients are trying to talk to this virtual server. And so it’s like, ‘Oh, you got this one going up. Now you need to do another firewall request for this one?’ And I am not kidding you. When we did the Virgin (America) acquisition, firewall requests were the biggest nightmare for the longest time. It’s an internal problem we struggled with and just gave up on, like, ‘No, this is just taking too much time. This should not be this hard.’ This literally is a firewall overhead problem that we ran into.”

Continuous virtualization

Virtualization is no longer something you do just before testing. From the time you start on your backlog and your design, you have to think about what services you need and how to design them correctly.

Then, Ahmed said, you have to think about how to evolve those services. “We think of service virtualization evolving on a continuum,” he said. “You start with something simple we call a synthetic virtual service that can be created very easily – not using the traditional record-response mechanism.”

He noted that the old way of creating a virtual service relied on the fact that the endpoint already exists; that’s what enabled record and replay. But in today’s development environment, the endpoint may not exist – all you might have is an API specification, and you might not even know whether the API has been implemented or not. “You need to have new ways of creating a virtual service, a very simple, lightweight service that can be created from something like a Swagger definition of an API. Developers need that when they’re doing unit testing, for example. The way we look at this is what we call progressive virtualization – that simple thing that we created can now evolve as you move your application from left to right in the CI/CD life cycle.”
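Generating a synthetic virtual service straight from a Swagger/OpenAPI definition can be sketched as follows. The spec fragment and the `synthetic_response` helper are illustrative assumptions, not Broadcom’s actual tooling:

```python
# A toy OpenAPI-style fragment: the endpoint is specified but not implemented.
SPEC = {
    "paths": {
        "/flights/{id}": {
            "get": {
                "responses": {
                    "200": {"example": {"id": None, "status": "ON_TIME"}}
                }
            }
        }
    }
}

def synthetic_response(spec, method, path_template, **params):
    """Return the example payload the spec promises, filling in path params."""
    operation = spec["paths"][path_template][method]
    example = dict(operation["responses"]["200"]["example"])  # copy the example
    example.update(params)  # substitute concrete parameter values
    return example

# A developer can unit test against this long before the API is implemented.
print(synthetic_response(SPEC, "get", "/flights/{id}", id="AS101"))
# prints {'id': 'AS101', 'status': 'ON_TIME'}
```

Nothing here records or replays traffic – the stub’s behavior comes entirely from the specification, which is what makes it usable before any endpoint exists.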

He offered an example: once the application gets to the stage of integration testing, you perhaps need to enhance that synthetic virtual service with some more behavior. More data is added, and then when you get to system testing, you replace the synthetic virtual service with a real recording, so it becomes progressively more realistic as you go from left to right.
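That left-to-right progression can be illustrated as a layered lookup, where later, more realistic layers shadow earlier synthetic ones. The class and the sample data are hypothetical, intended only to show the shape of the idea:

```python
class ProgressiveVirtualService:
    """A virtual service whose responses grow more realistic over the life cycle."""
    def __init__(self):
        # Stage 0: synthetic, 1: enriched for integration, 2: recorded for system test.
        self.layers = [{}, {}, {}]

    def add(self, stage: int, request: str, response: str):
        self.layers[stage][request] = response

    def call(self, request: str) -> str:
        for layer in reversed(self.layers):  # the most realistic layer wins
            if request in layer:
                return layer[request]
        raise KeyError(request)

svc = ProgressiveVirtualService()
svc.add(0, "GET /fare", "synthetic: $0")      # unit testing stub
svc.add(1, "GET /fare", "enriched: $199")     # integration testing data
print(svc.call("GET /fare"))                  # prints "enriched: $199"
svc.add(2, "GET /fare", "recorded: $203.40")  # real recording at system test
print(svc.call("GET /fare"))                  # prints "recorded: $203.40"
```

The same request keeps working at every stage; only the fidelity of the answer changes, which is the essence of the progressive-virtualization idea.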

“There’s a whole life cycle that we need to think about around continuous virtualization that talks about the kind of virtual servers needed to do integration testing, or build verification,” Ahmed said. “And of course, all the other kinds of tests – functional, performance and even security testing – virtual services are just as applicable for those things…  because if you think about the number of third-party systems that a typical application accesses in this API-driven world, you simply can’t run many of your tests end-to-end without running into some kind of external dependency that you do not control, from the perspective of functional, performance and security testing. So you can start to emulate all of those characteristics in a virtual service.”