Continuous

The long-distance runner rounds the track and starts down the final 200-meter straightaway, but stumbles dramatically. The back of the pack has nothing left to give. The leader lopes ahead effortlessly. Meanwhile, a surprise surge propels an athlete past four frontrunners. She slides across the line in time to win the silver medal.

The Olympics are an apt metaphor for peak performance in software deployment and release. Runners spend years circling the track, becoming ever more efficient at their sport. They work their way up through regional, national and world competitions, practicing delivery under pressure. But only the best make it to the Olympics, and once there, some fail ignominiously while others show that strategy is crucial in the final heat.

Avoiding dramatic blowouts in the “last mile” of the software life cycle has gained attention thanks to Jez Humble and David Farley’s book “Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation.” Several new tools focus on elements of the life cycle after development and testing. And in 2012, the focus is increasingly on deployment, as open-source continuous integration tools such as Jenkins gain popularity.

Even the distinction between continuous deployment and continuous delivery is being actively debated, along with the utility of canary releases and feature toggles. These are all signs that the DevOps movement, which brings operational concerns into the early parts of the requirements and development cycle, is gaining mainstream awareness.
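To make the feature-toggle idea concrete, here is a minimal sketch in Python. It is an illustration only: the flag name, configuration file and checkout functions are hypothetical, and real toggle frameworks add per-user targeting and gradual rollout. The point is that unfinished work can live on trunk and even in production, staying dark until someone flips the flag, with no branch and no redeploy.

    # Minimal feature-toggle sketch; all names here are hypothetical.
    import json

    def load_toggles(path="toggles.json"):
        """Read feature flags from a config file the release team controls."""
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}  # missing file means every new feature stays off

    TOGGLES = load_toggles()

    def is_enabled(feature):
        return bool(TOGGLES.get(feature, False))

    def legacy_checkout(cart):
        return "charged %.2f via the proven flow" % sum(cart)

    def new_checkout(cart):
        return "charged %.2f via the in-progress flow" % sum(cart)

    def checkout(cart):
        # Unfinished work ships on trunk but stays dark until the flag flips.
        if is_enabled("new_checkout_flow"):
            return new_checkout(cart)
        return legacy_checkout(cart)

    print(checkout([19.99, 5.00]))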

But like the weekend warrior whose training plan falls far short of competitive standards, is the average software team even ready to dream of continuous delivery? Has its prerequisite—continuous integration and build management—been met?

What continuous integration isn’t
A recent report on agile development by analysis firm Voke found that only 49% of enterprises were using continuous integration. What’s worse, far too many teams are doing it wrong:
• It’s not just running a tool. “When people say they do continuous integration, what they actually do is they’re running a continuous integration tool, and that’s not the same as doing continuous integration. Crucially, continuous integration is a practice, not a tool,” said Humble. And what is that practice? “The essence of it is, you’re always pushing into mainline, into trunk, and working off trunk.”
• It’s not nightly builds. Development teams shouldn’t deceive themselves, Humble said: “A nightly build is not continuous integration. If you are not checking into trunk at least once a day, if you don’t have a suite of tests that runs in a few minutes, if you don’t pay attention when the build goes red and make it the highest priority of the team to fix it now, you’re not doing continuous integration.”
• It’s not hours of tests. Even the most extreme continuous deployment (see Timothy Fitz’s report, “Doing the Impossible 50 Times a Day”) depends on a test suite that takes just nine minutes (albeit distributed across 40 machines). At the other end of the spectrum, too many companies have no comprehensive automated testing in place.
• It’s not sifting through failure reports. “Friends don’t let friends grep log files,” said Anders Wallgren, CTO of Electric Cloud. “If you’re running a two-hour test suite and having to carefully watch the output, you don’t have to. We’re watching those builds in real time and can pinpoint what the problem is for you.”
• It’s not just running build and unit tests on a feature branch. According to Humble, continuous integration counts only when changes are regularly merged into the centralized code repository, or trunk, a point of some controversy among developers who have fervently embraced distributed version control systems such as Git. A rough sketch of that trunk-based loop follows this list.
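What does the daily practice Humble describes look like in the small? The following is a rough sketch, not anyone’s official tooling: it assumes a Git repository whose mainline is named master and a fast test suite runnable with pytest, and it simply rebases onto trunk, runs the suite, treats a red result as a full stop, and pushes the work back.

    # Sketch of a daily trunk-based integration loop (assumed repo layout and commands).
    import subprocess
    import sys
    import time

    TRUNK = "master"                                # the shared mainline everyone works off
    FAST_SUITE = ["python", "-m", "pytest", "-q"]   # must finish in minutes, not hours
    TIME_BUDGET = 10 * 60                           # seconds; slower than this and people stop running it

    def run(cmd):
        """Run a command, returning True if it exited cleanly."""
        return subprocess.run(cmd).returncode == 0

    def integrate():
        # 1. Pull the latest trunk so integration happens at least once a day.
        if not run(["git", "pull", "--rebase", "origin", TRUNK]):
            sys.exit("rebase onto trunk failed; resolve this before doing anything else")

        # 2. Run the fast suite and time it.
        start = time.time()
        green = run(FAST_SUITE)
        elapsed = time.time() - start
        if elapsed > TIME_BUDGET:
            print("warning: suite took %.0f seconds, too slow to run on every check-in" % elapsed)

        # 3. A red build is the team's top priority, so stop here if it failed.
        if not green:
            sys.exit("build is red; fixing it comes before any new work")

        # 4. Push the integrated work back to trunk for everyone else.
        run(["git", "push", "origin", "HEAD:" + TRUNK])

    if __name__ == "__main__":
        integrate()

The details matter less than the rhythm: merge with trunk, run a suite fast enough to run every time, and never let a red build stand.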
