Using outside components? If so, you better test them, even if they came from the most reputable open-source project or commercial component provider you know. If you’re not testing components, especially within the context of other components required for your application and the environment in which your application will run, expect to find defects in production that could have been avoided easily and cost-effectively.
“We did some research recently [about] release management and what we found is people are more concerned about quality than they are time to market,” said Theresa Lanowitz, founder of analyst firm voke. “This is the first time we’ve seen the switch.”
In the voke 2015 Service Virtualization Snapshot Report, most of the participants said that dependencies were delaying releases. Eighty-one percent said dependencies slowed their ability to develop software, reproduce a defect or fix a defect. Eighty-four percent said dependencies negatively affected QA’s ability to begin testing, start a new test cycle, test a required platform or verify a defect.
Such delays can lead to quality issues if elements of testing are skipped to save time or if testing is executed inadequately.
“If a development team is dependent on a component yet to be built, they’re not going to test it,” said Marc Brown, CMO at Parasoft.
Service virtualization solves that issue and many others.
What about mocks and stubs?
In the absence of service virtualization, developers can create mocks and stubs to simulate what will likely happen in production, but the tactics don’t always yield accurate results. As the sophistication of components and interactions increases, the accuracy of what’s being emulated can decrease and it becomes increasingly expensive for the team to create and maintain the mocks and stubs.
“Mocks and stubs are one way to deal with some of the basic elements, but it’s not going to scale. It creates more overhead and potentially more risk for teams,” said Parasoft’s Brown. “You’re not going to be able to do certain things that you could do with service virtualization.”
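The hand-rolled approach Brown describes can be sketched in a few lines. Here is a minimal, hypothetical example (the `PaymentGateway`-style dependency and `checkout` function are illustrative, not from any real product): a stub covers one canned happy path, but every new status code, delay or payload shape means another stub to write and maintain.

```python
from unittest.mock import Mock

# Hypothetical external dependency: a payment gateway client our code calls.
# A hand-written stub returns a single canned response -- fine for basic
# cases, but it doesn't scale: error codes, timeouts, and changing payloads
# each require yet another stub, which is the overhead Brown describes.
gateway = Mock()
gateway.authorize.return_value = {"status": "approved", "auth_code": "A123"}

def checkout(gateway, amount):
    """Code under test: charge the customer via the external gateway."""
    result = gateway.authorize(amount=amount)
    return result["status"] == "approved"

print(checkout(gateway, 49.99))  # True -- but only the happy path is covered
```

A virtual service, by contrast, can model many response variations (errors, latency, malformed data) behind one stable endpoint rather than one stub per scenario.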
Harsh Upreti, product marketing manager at SmartBear, said the main reason his customers want service virtualization is to move beyond basic mocking.
“What happens is you have a lot of dependencies on other teams, other products and their APIs,” he said. “Some of the APIs may not be relevant because they are still under development, or they’re a little bit costly because maybe you’re hitting a Google Maps API that costs you $50 for every 1,000 calls.”
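One way to sidestep a fee-per-call API during testing is to stand up a local virtual service that returns canned responses. The sketch below, using only Python’s standard library, shows the idea; the `/geocode` endpoint and its payload are illustrative stand-ins, not the real Google Maps API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
from urllib.request import urlopen

# Canned response standing in for a metered third-party geocoding API.
CANNED = {"results": [{"lat": 40.7128, "lng": -74.0060}], "status": "OK"}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), VirtualService)
Thread(target=server.serve_forever, daemon=True).start()

# Point the application at the virtual endpoint instead of the paid one.
url = f"http://127.0.0.1:{server.server_port}/geocode?address=nyc"
data = json.loads(urlopen(url).read())
print(data["status"])  # no per-call fee incurred
server.shutdown()
```

Commercial service virtualization tools generalize this pattern with recorded traffic, stateful behavior and performance characteristics rather than a single hard-coded response.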
The benefits of service virtualization increase when development and testing are using it to access the same systems. Specifically, developers can prevent more defects in the first place, and QA can perform end-to-end testing.
voke’s survey found that dependencies were negatively impacting software release cycles and quality. On average, respondents had 53 dependencies. However, 67 percent reported unrestricted access to only 10 or fewer dependencies.
“The reason why you need service virtualization is that it completely cuts dependencies across the board,” said Aruna Ravichandran, VP of DevOps Product and Solutions Marketing at CA Technologies. “Developers no longer have to wait for systems to be available because each of those back-end calls can be automated.”
Get access to more resources
Service virtualization enables developers and testers to test against resources that are unavailable, rarely available or incomplete. For example, access to a mainframe may be possible only during certain hours. Service virtualization lets developers and testers work around such constraints.
“What it enables you to do is run a complete end-to-end test at any time throughout any aspect of your software lifecycle so a developer can say, ‘Let’s see what this looks like end-to-end if we had all these things,’ ” said voke’s Lanowitz. “What does it look like for performance, functionality and anything else we’re trying to test against, so the ability to access components, services, systems, architectures, sensors, mainframes, databases and the list goes on.”
Even if resources are available, time and cost can get in the way. For example, if a developer is building an application that requires connections to an ERP system and a credit card system, the developer has to work with IT to make sure the systems are properly provisioned and that testing can be done with the credit card system. Testing that involves third-party systems can cost money, whether it’s testing fees or setting up a real-world test environment.
Still, teams trying to cut costs have been known to adopt service virtualization, cut it in an attempt to save money and then readopt it because the cost of service virtualization was outweighed by the economic and time-saving benefits it provides.
Blind faith is dangerous
Developers’ testing responsibilities have continued to grow as more types of testing have “shifted left.” Meanwhile, many commercial component providers have gone out of their way to deliver stable components so developers can use them with high levels of confidence. Still, the reliability of a component doesn’t depend only on the component itself.
“A component may work fine independently, but what if they’re not tested together?” said voke’s Lanowitz. “What if you have Component[s] A, B and C and Component A has been tested 100%, Component B has been tested 80% and Component C has been tested 80%, but when they’re combined they don’t work together?”
Using service virtualization, developers can emulate such conditions so they can better understand how a component would actually behave in production.
“Many components provided by the open-source community or third parties can have security, performance or load-related issues. Just look at the number of systems that have gone down and created some sort of cost,” said Parasoft’s Brown. “There’s a business cost or liability or a brand-tarnishing issue. I wouldn’t trust anybody right now.”
In the absence of service virtualization, production data also may be impacted in some unintended way. Brown said Parasoft has seen some issues in the banking industry where people were testing against live production data and some of the production data made it to development. The data also found its way to other areas, which meant that customers’ credit card numbers weren’t actually secure.
Security is a very real issue and one that continues to become more important every day. Components built in the past may have been built at a time when security threats were not as pervasive, severe or varied as they are today. Although people want to trust the components they use and avoid coding something they can get from a commercial vendor or the open-source community, there’s no substitute for testing. Hackers continue to devise more sophisticated ways to compromise software.
“If I adopt a component, I really need to make sure that I’ve got some reusable assets that can help me validate those components fully so I can have the level of confidence I need without slowing things down,” said Brown.
Accelerate delivery without sacrificing quality
Fast access to virtual resources is better than slow access or no access to actual resources. With service virtualization, development and testing teams can work in parallel, which saves precious time.
“Our customers tell us they used to wait almost a third of the time for the development teams to get APIs [to testing],” said SmartBear’s Upreti. “Now they’re available immediately so [the testing team] doesn’t have to follow up with [the development team]. They work faster and there are better relationships between team members. It’s creating better conditions to work in software development teams.”
Vodafone New Zealand, a Parasoft customer, found it harder to deliver reliable software due to increasing customer expectations and software complexity. Part of the problem was the company’s acquisitions of other businesses, which resulted in more systems and dependencies that further complicated software updates.
To ensure new functionality operated properly and didn’t break existing functionality, development teams needed to test their work and third-party components in realistic test environments, which was too costly and time-consuming to do using the actual systems.
AutoTrader mimics reality, saves money
AutoTrader, one of CA’s customers, was able to test across devices and avoid $300,000 in test hardware and software costs. Its website, AutoTrader.com, is used by more than 18 million people per month who are researching, selling and buying cars. A decade ago, the company was releasing just four web services per year. Now the company is under pressure to deliver a release a week. Meanwhile, the number of devices and versions of devices and operating systems customers are using has grown, complicating testing.
“When I talked to them about their application strategy, one of the key things they shared with us [was the desire] to provide a seamless service across devices,” said Ravichandran. “Service virtualization gave them the ability to test new features, apps, and third-party components across multiple devices.”
AutoTrader was also able to reduce software defects by nearly 25 percent and it reduced testing time by 99 percent.
Generally speaking, service virtualization is a good way to reproduce and reduce defects.
“One of the biggest problems is that something will work fine on a developer’s machine, but then it gets into production or test and there’s a problem. The defect can’t be reproduced,” said voke’s Lanowitz. “With service virtualization, you have access to that production-like environment so you can accurately and realistically reproduce the defects and you can do economical testing of realistic behavior such as performance which is one of those non-functional requirements we overlook.”
Using service virtualization, software teams can reduce the number of defects pre-production and in production while increasing test coverage and reducing testing cycle time and release cycle time.
“Ideally, you want to get to the point where when it comes time to check in your source code, you’re checking in virtualized assets with it,” said Lanowitz.
The IoT will drive more demand
The IoT is giving rise to even more complex ecosystems that need to be tested, and because those ecosystems are so complex, it’s impractical, if not impossible, to test all the possible scenarios without using service virtualization.
“Service virtualization allows you to virtualize components in the world of system of systems, which is critical,” said Parasoft’s Brown. “You can virtualize an embedded device, services, sensors and [outside] components.”
Beyond that, service virtualization allows developers to contemplate abnormal conditions that wouldn’t otherwise be apparent without access to the actual physical systems. Because so many things can go wrong in an IoT or IIoT scenario, it’s critical to understand normal and abnormal behavior, such as what effect different types of loads have.
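The abnormal conditions described above can be modeled with simple fault injection in a virtual device. The sketch below is a hypothetical illustration (the `virtual_sensor` and retry logic are invented for the example): a virtual IoT sensor is configured to add latency or fail outright, so client code is exercised against degraded behavior, not just the happy path.

```python
import random
import time

def virtual_sensor(fail_rate=0.0, delay_s=0.0):
    """Virtual IoT sensor: configurable latency and failure injection."""
    time.sleep(delay_s)                      # simulate a slow network link
    if random.random() < fail_rate:          # simulate a dropped response
        raise TimeoutError("sensor did not respond")
    return {"temp_c": 21.5}

def read_with_retry(sensor, attempts=3):
    """Client logic under test: retry on transient sensor failures."""
    for _ in range(attempts):
        try:
            return sensor()
        except TimeoutError:
            continue
    return None

random.seed(1)  # deterministic for the example
# Normal conditions: every read succeeds.
ok = read_with_retry(lambda: virtual_sensor(fail_rate=0.0))
# Degraded conditions: half the reads time out; retries should cope.
degraded = read_with_retry(lambda: virtual_sensor(fail_rate=0.5))
print(ok is not None, degraded is not None)
```

The same idea scales up in service virtualization tools, which can replay recorded traffic with injected delays, error rates and load profiles instead of a single synthetic sensor.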
“As we move into the Internet of Things, if you’re not using service virtualization, you’re not going to keep up with everything we need to do,” said voke’s Lanowitz. “You have to be constantly testing, making sure things are performing. You need to make sure you have the availability and everything you need to test what’s going on inside that thing.”
Where to start?
Some companies haven’t adopted service virtualization yet because they don’t know where to start – with development or with QA? But that may not be the right way to frame the problem.
“I always recommend starting with those fee-based systems you have to pay to access for testing or start with a small project where you have a good rapport between your developers and your testers because your testers are going to benefit from service virtualization,” said voke’s Lanowitz. “There are a few things you can do. You can say anything we’re using in the enterprise, anything in our core logic we should use virtualized assets for.”
In one case, service virtualization worked so well that virtual assets were accidentally deployed instead of real assets. However, the problem was found and fixed immediately, Lanowitz said.
Component testing is just the beginning
Today’s developers need on-demand test environments for continuous testing. Already, service virtualization has become a foundational element for Agile and DevOps teams that need continuous testing capabilities.
In line with that, Parasoft’s Brown expects more SaaS vendors to create test components and perhaps a reusable virtual service that goes along with them.
“We’d love to power people developing software components because it will make their applications better, high quality and less prone to security exploits,” he said. “At the same time, they might be able to differentiate their own products by shipping a component or virtual service that goes hand-in-hand with it that people can test against.”
Component testing is just one of many things service virtualization enables. In the voke survey, participants were asked what they were virtualizing. Participants said they were virtualizing web services, APIs, mobile platforms, embedded systems, IoT-types of sensors and components.
voke views service virtualization as a subset of lifecycle virtualization, which also includes virtual or cloud-based lab technology that keeps the test environment as close to production as possible. A third element is test data virtualization, which can be shared across a software supply team: companies avoid compromising customers’ safety and security by sharing real-life production data, and they avoid shipping terabyte-sized files across the network to teams that need production data for testing. Network virtualization is also in the mix so teams can simulate a network and different use cases, such as what happens to a banking transaction if a user goes into a subway. The final element is defect virtualization.
“We’re always going to have defects and we either discover those defects in pre-production or we discover them in production. We need a way to know what defects are in our source code or legacy source code,” said Lanowitz. “Using defect virtualization software in the background, you can understand the point of application failure and where the defect is so you can fix it.”
Meanwhile, current users of service virtualization should endeavor to drive more value from their solutions by ensuring that virtual assets are available throughout the software lifecycle, which will result in additional time and cost savings.
Using service virtualization can give you more confidence in the components you’re using in your software and you’ll be more confident about the quality and stability of the software you’re building.