I recently read a short book called “Continuous Testing.” The authors present the case that we should focus on assessing the business risk in our software, and make continuous real-time decisions about the tradeoffs involved in improving software quality.
More about that in a moment, but first, let’s talk about the whole “Continuous” meme.
Remember when we did software builds? We ran the compiler and linker manually. Then we automated our build process. When we first started talking about agile methodologies like Extreme Programming (XP) and Scrum, we accelerated the timelines, so nightly builds became the norm. Soon even that was too slow, and Continuous Integration appeared.
In the words of Martin Fowler’s seminal essay, “Continuous Integration,” written in September 2000:
We are using the term Continuous Integration, a term used as one of the practices of XP (Extreme Programming). However we recognize that the practice has been around for a long time and is used by plenty of folks that would never consider XP for their work. We’ve been using XP as a touchstone in our software development process and that influences a lot of our terminology and practices. However you can use continuous integration without using any other parts of XP—indeed we think it’s an essential part of any competent software development activity.
There are several parts to making an automated daily build work.
• Keep a single place where all the source code lives and where anyone can obtain the current sources from (and previous versions)
• Automate the build process so that anyone can use a single command to build the system from the sources
• Automate the testing so that you can run a good suite of tests on the system at any time with a single command
• Make sure anyone can get a current executable which you are confident is the best executable so far.
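Fowler’s points boil down to “one command builds it, one command tests it, and everyone can get the result.” As a minimal sketch of that idea, assuming a hypothetical Python project laid out with a requirements.txt, a src/ directory and a tests/ directory, a single driver script might chain the steps together:

    #!/usr/bin/env python3
    """Hypothetical one-command build-and-test driver for a Python project.
    The project layout (requirements.txt, src/, tests/) is an assumption."""
    import subprocess
    import sys

    STEPS = [
        ["python", "-m", "pip", "install", "-r", "requirements.txt"],  # fetch dependencies
        ["python", "-m", "compileall", "src"],                         # "build": byte-compile the sources
        ["python", "-m", "unittest", "discover", "-s", "tests"],       # run the whole test suite
    ]

    def main() -> int:
        for step in STEPS:
            print("Running:", " ".join(step))
            if subprocess.run(step).returncode != 0:
                print("FAILED:", " ".join(step))
                return 1
        print("Build and tests passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Everyone on the team runs the same script, which is the point: the build is reproducible, and the full test suite is always one command away.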
The principles of Continuous Integration haven’t changed much in the past 14 years, though of course methodologies have evolved, build and CI tools have become incredibly sophisticated, and test automation has become a core practice. Indeed, some agile practices, like TDD, place testing at the center of the universe. As Scott Ambler writes in “An Introduction to Test Driven Development (TDD)”:
Instead of writing functional code first and then your testing code as an afterthought, if you write it at all, you instead write your test code before your functional code. Furthermore, you do so in very small steps—one test and a small bit of corresponding functional code at a time. A programmer taking a TDD approach refuses to write a new function until there is first a test that fails because that function isn’t present. In fact, they refuse to add even a single line of code until a test exists for it. Once the test is in place they then do the work required to ensure that the test suite now passes (your new code may break several existing tests as well as the new one). This sounds simple in principle, but when you are first learning to take a TDD approach, it proves to require great discipline because it is easy to “slip” and write functional code without first writing a new test.
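To make that red-green rhythm concrete, here is a toy example of the cycle Ambler describes, written in Python around a hypothetical apply_discount() function (the names and rules are mine, not Ambler’s): the test comes first and fails, then just enough functional code is written to make it pass.

    import unittest

    # Step 1 (red): write a failing test for a function that does not exist yet.
    class TestApplyDiscount(unittest.TestCase):
        def test_ten_percent_discount(self):
            self.assertAlmostEqual(apply_discount(100.0, 10), 90.0)

        def test_rejects_negative_percentage(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, -5)

    # Step 2 (green): write just enough functional code to make the tests pass,
    # then refactor and repeat with the next small test.
    def apply_discount(price: float, percent: float) -> float:
        if percent < 0:
            raise ValueError("discount percentage cannot be negative")
        return price * (1 - percent / 100)

    if __name__ == "__main__":
        unittest.main()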
That leads us to Continuous Testing. It’s not a new phrase; if anything, it’s somewhat overloaded. A short blog post by Scott Johnson at Puppet Labs, “No Nasty Surprises: Continuous Testing & Continuous Integration for Successful Release Management,” focuses on Continuous Delivery, and states that:
Whether you use simple testing tools or complex ones, the message is clear: Test early and test often, and make sure you incorporate load and performance tests into your continuous integration process.
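Folding a performance check into the same suite can be as simple as asserting a latency budget. The following is only a sketch with made-up numbers and a stand-in function; serious load testing belongs in dedicated tools, but the principle of failing the build on a performance regression looks like this:

    import time
    import unittest

    LATENCY_BUDGET_SECONDS = 0.05   # assumed service-level target, not a real benchmark

    def lookup_order(order_id: int) -> dict:
        # Stand-in for the real code path being measured.
        return {"id": order_id, "status": "shipped"}

    class TestOrderLookupPerformance(unittest.TestCase):
        def test_bulk_lookup_stays_within_budget(self):
            start = time.perf_counter()
            for order_id in range(1000):          # a small, repeatable load
                lookup_order(order_id)
            elapsed = time.perf_counter() - start
            self.assertLess(elapsed, LATENCY_BUDGET_SECONDS,
                            f"1,000 lookups took {elapsed:.3f}s, over budget")

    if __name__ == "__main__":
        unittest.main()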
That’s not what Parasoft’s Wayne Ariola and Cynthia Dunlop mean in their 40-page book, “Continuous Testing.” They say:
Continuous Testing provides a real-time, objective assessment of the business risks associated with an application under development. Applied uniformly, Continuous Testing allows both business and technical managers to make better trade-off decisions between release scope, time, and quality.
Generally speaking, Continuous Testing is NOT simply more test automation. Rather, it is the reassessment of software quality practices—driven by an organization’s cost of quality and balanced for speed and agility. Ultimately, Continuous Testing can provide a quantitative assessment of risk and produce actionable tasks that will help mitigate these risks before progressing to the next stage of the SDLC.
Continuous Testing can help your organization answer the following questions at the time of the critical “go/no-go” decision for a software release candidate:
• Are we done testing?
• Does the release candidate achieve expected quality standards?
• What are the quantifiable risks associated with the release candidate?
• How confident are we that we won’t end up in the news for software failures?
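How might such an assessment be quantified? The following toy calculation is my illustration, not Parasoft’s method: weight each failing test by the business criticality of the feature it covers, then compare the total against a risk threshold the business has agreed to.

    # Toy release-risk calculation (illustrative only; not Parasoft's method).
    CRITICALITY = {"payments": 10, "checkout": 8, "search": 4, "profile": 2}   # assumed weights

    failed_tests = [                      # hypothetical results from the latest test run
        ("test_refund_rounding", "payments"),
        ("test_avatar_upload", "profile"),
    ]

    RISK_THRESHOLD = 12                   # assumed maximum acceptable score for a release

    risk_score = sum(CRITICALITY[area] for _, area in failed_tests)
    decision = "no-go" if risk_score > RISK_THRESHOLD else "go, with known risk"
    print(f"Release risk score: {risk_score} -> {decision}")

A real implementation would pull test results, coverage and static analysis findings automatically; the point is that the go/no-go question becomes a number the business can argue about rather than a gut feeling.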
Ariola and Dunlop nail the target: It’s all about risk. Risk is what insurance is all about, it’s what attorneys are all about, and it’s the sort of decision that every business and technology manager makes all day, every day. We have to live with risk and make tradeoffs. More testing? At some point, we have to cut it off.
It’s difficult, if not impossible, to precisely assess the business risk of software quality. Yes, software quality is expensive: the higher the quality, the more time it takes to deliver the software, and the more resources you must spend achieving it. And yes, software failures are expensive—you might lose money, lose customers, suffer lawsuits, damage your brand, or end up on the front page of The Wall Street Journal. Not good.
Parasoft’s objective here is to build awareness of the company’s Service Virtualization API for test environment provisioning and test execution. As the company wrote in September 2013:
Parasoft announced the general availability of the new Parasoft Service Virtualization, an open, automated infrastructure for continuous testing. With the latest release, teams can provision simulated test environments and launch associated test scenarios via a readily accessible API. This enables fully automated continuous testing, which is a critical component of continuous delivery and continuous release. Even if the team needs to launch a myriad of different test environments, each of which runs an extensive set of test scenarios, all the necessary provisioning and test execution can be fully automated.
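The underlying idea of service virtualization, independent of any vendor’s product, is that a simulated stand-in for an unavailable or expensive dependency can be provisioned on demand so the tests can run at any time. Here is a generic sketch of the pattern in Python; it is not Parasoft’s API, just an illustration of the concept.

    import json
    import threading
    import unittest
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    # Simulated downstream dependency; in a real system this would be the
    # service that is too costly or unavailable to hit during testing.
    class SimulatedInventoryService(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"sku": self.path.strip("/"), "in_stock": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):      # keep test output quiet
            pass

    class TestAgainstVirtualizedDependency(unittest.TestCase):
        def setUp(self):
            # "Provision" the simulated environment for this test run.
            self.server = HTTPServer(("127.0.0.1", 0), SimulatedInventoryService)
            self.port = self.server.server_address[1]
            threading.Thread(target=self.server.serve_forever, daemon=True).start()

        def tearDown(self):
            self.server.shutdown()

        def test_inventory_lookup_against_simulated_service(self):
            with urlopen(f"http://127.0.0.1:{self.port}/SKU-123") as resp:
                data = json.loads(resp.read())
            self.assertTrue(data["in_stock"])

    if __name__ == "__main__":
        unittest.main()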
Beyond promoting tools and the new “Continuous Testing” buzzword, Ariola and Dunlop make a good point in their short book: We mustn’t assume that the trend toward accelerating the development process will magically improve software quality; indeed, we should expect the opposite. And if we are going to mitigate risk in today’s environment, we need to reengineer the software development process so that business risk becomes one of the metrics, alongside the traditional outputs of our automated testing and Continuous Integration systems.
What do you think of Continuous Testing? Write me at alan@camdenassociates.com.
Alan Zeichick, founding editor of SD Times, is principal analyst of Camden Associates.