Software development went and got itself in a big damn hurry. Agile methods, combined with continuous integration and continuous delivery, have dramatically sped up the development life cycle, shrinking release cycles from months to weeks to even days.
Testing, on the other hand, is not quick. Manual testing is a time-consuming, labor-intensive process for ensuring a piece of software does what it’s supposed to, no matter how fast it was developed. The challenge for both developers and testers in a landscape increasingly dominated by agile is bringing testing in line with the pace of development and delivery without sacrificing quality.
“When that bottleneck around deployment was taken away and all of a sudden code could truly flow naturally and smoothly into production, there was a step back in terms of quality,” said Matt Johnston, chief marketing and strategy officer of Applause (formerly uTest). “[Customers] would come to us and say we want to test one build per week. Then all of a sudden between the shift to agile and continuous integration, suddenly they had 10 builds a week. Dev and DevOps could move faster and QA wasn’t able to keep up, so organizations just decided to launch and see how it goes.”
In this shift to continuously integrated and delivered software, in which developers integrate code into a shared repository several times a day (enabling a change or version to be safely deployed at any time), testing is shifting toward a continuous model as well. Through a combination of test automation and the merging of development and testing processes under a Dev/Test philosophy, testing providers and QA teams are beginning to implement practices to test software builds as rapidly as they’re being churned out.
“In today’s world, it’s just too slow to be in this sequential process of dev, test, deploy and manage,” said Tom Lounibos, CEO of SOASTA. “Developers have always tested; they’ve always done unit testing, application development testing, but then they throw it over the wall to the QA guys who would do functional, load and other testing. Testing is now starting to be done by developers far more frequently. QA professionals are still very much there, but they’re trying to automate the process as well. That’s the big shift around continuous integration and continuous testing, so that speed can be improved without rushing software out the door.”
Automating the conveyor belt
Continuous testing doesn’t happen without automation. In traditional manual testing, a developer checks in a change and it goes through a build process that may take hours or days to produce feedback. Automation accelerates the cycle, checking the code and providing feedback in a matter of minutes. By no means does this signal an end to manual testing—which remains essential in exploratory and regression testing at the Web, UI and mobile level—but automation frameworks and tools are proliferating throughout the process.
“If there’s one overarching principle, it’s to automate everything,” said Steve Brodie, CEO of Electric Cloud. “That means you have to automate all the types of testing you’re doing. The key is orchestrating the pipeline: It’s one thing to have these silos of automation doing automated load or regression testing, automating builds or even deployments. But what you need to do is automate that whole end-to-end pipeline, from the time the developer checks the code, all the way through to production and deployment.”
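The end-to-end pipeline Brodie describes can be sketched as a simple stage runner that stops at the first failure and reports which stage broke. This is a minimal illustration, not any particular CI tool’s API; the stage names and commands are placeholders.

```python
import subprocess
import sys
import time

# Placeholder stages: a real pipeline would invoke build tools, test
# runners and deployment scripts here instead of trivial commands.
PIPELINE = [
    ("build",             [sys.executable, "-c", "print('compiling')"]),
    ("unit tests",        [sys.executable, "-c", "print('unit tests pass')"]),
    ("deploy to QA",      [sys.executable, "-c", "print('deployed to QA')"]),
    ("integration tests", [sys.executable, "-c", "print('integration pass')"]),
]

def run_pipeline(stages):
    """Run each stage in order; stop at the first failure so the
    developer gets feedback in minutes rather than hours or days."""
    for name, cmd in stages:
        start = time.time()
        result = subprocess.run(cmd, capture_output=True, text=True)
        elapsed = time.time() - start
        if result.returncode != 0:
            return {"failed_stage": name, "elapsed": elapsed}
    return {"failed_stage": None}

if __name__ == "__main__":
    outcome = run_pipeline(PIPELINE)
    print("pipeline green" if outcome["failed_stage"] is None
          else "failed at: " + outcome["failed_stage"])
```

The point of the single runner is Brodie’s: the silos of automation (build, test, deploy) only pay off when one orchestrator chains them from check-in through to production.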
Automation requires much effort on the part of the organization to do correctly. Machines need to be provisioned and configured, manually or virtually, and testing environments need to be spun up and deployed for each application. Yet Brodie believes a misconception exists around automated testing and the quality of those tests.
“You’re only as fast as the slowest element, the slowest phase in your software delivery pipeline,” he said. “You’ve got to automate the QA environment and the systems integration testing environment. You’ve got to deploy into the performance-testing environment. If you have user acceptance tests, you’ve got to deploy it there too. But a lot of people think that by deploying faster with Continuous Delivery, quality will suffer. But what’s fascinating is that the inverse is often true, particularly if people are releasing more quickly because the batch size is changing. The magnitude of changes you’re deploying is much smaller.”
Once deployed, an automation solution is not without its kinks. Automated tools are still relatively new, and the process can result in inconsistent reporting, false positives and botched execution. Hung Nguyen, CEO of LogiGear, said the idea is there, but the biggest challenge to automation is smoothing out the release process.
“Think of your entire development cycle as an automated assembly line,” he said. “Once you turn on the conveyor belt, you don’t have to worry about what’s going to come out the other side. But when you get to the system level of testing, thousands and thousands of test cases are running against these virtual machines, and it tends to have some timing problems. Open-source and commercial tools used in combination are just not robust enough yet, so you end up with a lot of so-called false positives and end up debugging.”
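One common mitigation for the timing-related false positives Nguyen describes is to rerun a failed test a bounded number of times, and to flag anything that passes only on retry as flaky rather than silently green. A minimal sketch, with stand-in test functions simulating the three outcomes:

```python
def run_with_retries(test_fn, attempts=3):
    """Run a test up to `attempts` times.
    Returns 'pass', 'flaky' (passed only on retry), or 'fail'."""
    for i in range(attempts):
        try:
            test_fn()
        except AssertionError:
            continue
        return "pass" if i == 0 else "flaky"
    return "fail"

# Stand-in tests illustrating the three outcomes.
def always_passes():
    assert True

calls = {"n": 0}
def passes_second_time():      # simulates a timing-dependent test
    calls["n"] += 1
    assert calls["n"] >= 2

def always_fails():
    assert False
```

Recording retried passes as “flaky” keeps timing problems visible for later debugging instead of masking them as clean runs.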
The rise of Dev/Testers
The ripple effects of Continuous Delivery are changing the way developers and testers work together, blurring the lines between the two roles and skill sets. As a consequence, developers are learning how to test, and testers are becoming entrenched in the development process. It’s the manifestation of a Dev/Test philosophy.
“Testers are moving in closer to the development side, embedded into teams with developers,” said Tim Hinds, product marketing manager at Neotys. “You see this a lot in Scrum and other agile development teams. The testers are well informed about what developers are working on, proactively designing their test scripts accordingly, so that whenever the code has been written and needs to be tested, they’re familiar with what’s occurring and not just getting something thrown over the wall to them.”
Dev/Testers are upending the way organizations approach testing while adopting agile and Continuous Delivery practices. Organizations that in the past have invested in independent centers of excellence for testing best practices are transitioning to have testing resources sitting alongside development resources, and as a result, the role and skill sets of testers need to evolve to fill that Dev/Tester role of ensuring code quality within an agile team.
“There’s a lot more of a need for testers to understand the application architecture, understand the APIs,” said Kelly Emo, director of applications product marketing at HP Software. “If you’re doing API testing, you’ve got to understand that API and that programming model, the underlying architecture. You may need to understand its interdependency with other components of that composite app.
“There’s this new hierarchy of testing, where you have testers sitting alongside developers doing more API testing or functional testing at the application level. Downstream you’re still going to have testers managing the regression sets or doing exploratory testing. They can be more of what people think of traditionally as a black-box or manual tester. Those roles still exist, but now you have both.”
A tester’s skepticism
In the shuffle of bringing testing up to speed with development, there is a danger of losing sight of what testing was originally intended to do. Magdy Hanna, CEO of Rommana Software and chairman of the International Institute for Software Testing, implored organizations to not so easily dismiss manual testing or discount the importance of practices such as regression testing in the rush to deliver software.
“With agile, Continuous Delivery and continuous integration, I get very concerned about overlooking the value of regression testing, which guarantees that things work,” he said. “Some projects and teams thought continuous integration would be a good way to eliminate or at least minimize their regression testing, which is always a stumbling block. Sometimes, in order to push the release to production faster, we overlook or undermine the value of regression testing. I’ve seen projects that actually deliver software faster by cutting down on how much final system, acceptance and regression testing they do.
“Let me make this clear: Continuous integration will never replace regression testing—regression testing by qualified testers, not by the developers, who understand the behavior of all the features supported in the previous iteration or sprint. As a developer, I only understand the feature I wrote and implemented. Don’t expect me to do a very good job in making sure that all the other features I don’t really understand are still working.”
Hanna is also wary of relying on automation tools to drive continuous testing efforts. While manual testing requires a human tester, automation is governed by scripts. In this push toward a faster life cycle, he is concerned about developers and testers losing sight of a project’s ultimate goals.
“In order for Continuous Delivery and integration to succeed, they rely heavily on individuals writing scripts for tools,” he said. “The scripts need to be written not only to test the feature being implemented, but to ensure the feature we delivered a year ago still works.
“There’s always a trade-off. Delivering high-quality systems fast means cutting corners, and cutting corners in Continuous Delivery has affected the most critical aspects of the projects: the requirements. I can get developers to write code very fast and push code into production, but what does the code do? Why are we forgetting that we’re only writing code to implement a feature, a requirement or a behavior that the customer wanted?”
The first inning
While the growth of agile and the rise of Continuous Delivery and integration are tangible, continuous testing is still in its infancy. Organizations are still figuring out what it is, and both developers and testers are still in the process of grappling with not only how accelerated testing affects them, but also how to automate it effectively.
“From the standpoint of implementation versus awareness, we’re in the first inning of a nine-inning game,” said SOASTA’s Lounibos. “Awareness is pretty strong. It feels a little bit like 2009 and 2010 in cloud computing. Everyone was talking about it, but there weren’t that many people implementing. Early adopters are out there, but people have to get familiar with what continuous testing even means: How do they implement it? What are the best practices?”
The early adopters are the ones who, according to Applause’s Johnston, are phasing out things like centers of excellence and large outsourcing contracts—the equivalent of a large standing army—for a nimble Special Forces unit, the integrated developer and tester teams implementing automated continuous testing.
“The companies that are trotting out the same playbook of mainframe to desktop and desktop to Web applications are in the tall grass, completely lost in the weeds,” he said. “That’s what it takes: Wiping the whiteboard clean and saying ‘Okay, all the muscles we’ve built in the past 15 years from Web, a lot of those don’t really apply. The big investment we made with this vendor or that longtime outsourcing relationship or that Center of Excellence we thought we’d be using for 30 years, that’s either not going to be a part of the solution as we go forward, or just a part.’ ”
As adoption climbs, testing in a continuously delivered environment is also moving away from a development and testing process partitioned into silos. Think of the developer cliché where someone slides a pizza under the door and out comes code. As developers and testers hop the fence, testing is moving toward a more integrated and virtualized process aligned with a continuous ALM solution.
“Instead of people talking about wanting to automate tests, about hooking virtualization capabilities into a development tool, you’ll see much more of a hub that can deploy and take advantage of what happens when you put automation and virtualization together,” said HP Software’s Emo. “It’ll enable automatic provisioning of virtual services you’ve discovered from your application architecture and make it available for your tester. Once the defect is found, you can automatically roll up that defect combined with a virtual service so your developer has a single environment to work with the next day.”
Automation. Virtualization. The amalgamation of developers and testers in a more fluid, concurrent software development life cycle. They’re all elements in the shift to continuous testing, which, if SOASTA’s Tom Lounibos’ vision comes to fruition, may resemble something like “The Matrix.”
“Picture that concept of living in a world that’s actually a computer program, and if we’re in a meeting of 10 people, only two are real and the rest are computer generated,” he said. “That’s how we see testing in the future: a test matrix. There’ll be real people on your website or application, but there will be a constant flow of fake users anticipating problems of the real ones. Imagine virtual users trying to get ahead of real users’ actual experiences. That’s where continuous testing is going.”
Best practices for continuous testing
As organizations and testing providers transition from manual to continuous testing, a new set of best practices is vital in keeping testing teams on track, optimizing resources and delivering a working application at the speed of agile.
• Daily, targeted testing: Gigantic, exhaustive tests are ineffective. Daily load tests with low volumes of concurrent users can help uncover smaller scaling issues, and targeted sample testing of software on various OSes, devices, carriers and applications is more effective and cheaper than running through thousands of test cases in every single environment.
• Test in production: Rather than testing in a controlled lab setting, testing in production (while real users browse a website or application) gives the most accurate indication of how a piece of software will perform.
• Scale test volume: Break a test suite into smaller chunks of tasks running in parallel to the automated deployment. This makes the tests easier to execute and debug without human intervention.
• Diagnose the root cause: A test passing, failing or producing a critical bug report is less important than finding the root cause of the failure in the code. Testers who diagnose the root cause keep engineers and testers from wasting time and resources chasing symptoms.
• Don’t lose sight of SLAs: Putting service-level agreements on a task board or list of constraints (so that every time a test or build is run, testers know what SLAs the new application, features or functionality have to pass) will keep application quality up while maintaining development speed.
• Nightly and end-of-sprint testing: Continuously integrated builds undergo automated testing whenever a developer pushes code to a repository, but running larger tests at specific times is still valuable. During a nightly build, run a full site or application load test for whatever you expect the user base to be at any given time. Then, toward the end of an iteration or sprint, stress the application to its breaking point to set a new bar for how many concurrent users it can handle.
• Hybrid Tester/Architect: A test architect aligned with the application architect can help determine, based on the application footprint, what the next automated components and test assets should be, to better manage the overlying test framework and promote use of reusable automated assets whenever possible.
• Don’t sleep on metrics: Metrics ingrained within the automation process can create quality gates to maintain a well-defined quality state. Without measuring how automated tests are performing to make actionable improvements, testers run the risk of promoting defects faster through the testing pipeline.
• Practice, practice, practice: Virtualization is the testing equivalent of a flight simulator, allowing simulation of every possible user experience. The better understanding developers and testers have of where problems may occur, the more prepared they’ll be.
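Two of the practices above, scaling test volume into parallel chunks and metrics-driven quality gates, can be sketched together with the standard library. The callables standing in for test cases are illustrative; a real suite would dispatch actual test jobs to build agents.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(tests, size):
    """Split a test suite into chunks that can run in parallel."""
    return [tests[i:i + size] for i in range(0, len(tests), size)]

def run_chunk(tests):
    """Run one chunk; each 'test' here is a callable returning
    True/False (a stand-in for a real test case)."""
    return [t() for t in tests]

def run_suite(tests, chunk_size=4, workers=4):
    """Run chunks concurrently and return (passed, total)."""
    chunks = chunk(tests, chunk_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_chunk, chunks))
    flat = [outcome for chunk_result in results for outcome in chunk_result]
    return sum(flat), len(flat)

def quality_gate(passed, total, threshold=1.0):
    """Metrics-driven gate: fail the build when the pass rate
    drops below the threshold (default: all tests must pass)."""
    return (passed / total) >= threshold
```

The gate threshold is where the measured metrics become actionable: a build that slips below it never propagates its defects further down the pipeline.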
Subtle benefits and hidden obstacles
Everyone knows Continuous Delivery and testing speed up the development cycle. Everyone knows you need to automate. Neotys’ Hinds and HP Software’s Emo laid out a few of the advantages people wouldn’t immediately associate with continuous testing, and some of the more subtle challenges to doing it right.
Benefit: Avoiding late performance problems: “It’s always cheaper to make changes earlier in development than to have something deployed to production and going back to add a hotfix. It also allows people to make sure that whenever you’re releasing new features into production, you’re not allowing any sort of performance regression; not allowing old bugs to creep their way back in.” —Hinds
Benefit: Mitigating technical debt: “If you’re seeing load, performance or security issues early on, you’re likely not to let them propagate or let them get consumed in other composite applications. It also creates an interesting conversation between your tester, your developer and your product manager really pushing on those user story functions, really pushing on the requirements.” —Emo
Challenge: Shorter development cycles: “When moving performance testing to continuous environments, testers need to adapt. You’re getting a new build way more often than you are in a more traditional waterfall environment, though you’ve got to do basically the same number of tests you were doing before, except now you’ve got them every two weeks or less.” —Hinds
Challenge: Skill set: “Making sure you have the folks with the level of understanding needed to be able to do this kind of testing, but also to engineer the process, the infrastructure. There is a special skill in being a really good tester. They need to put in place the continuous integration process connected to your test automation suite and connect it back into your ALM system so you know the results the next morning and you’re able to act on it.” —Emo