In days past, there was one sure-fire, always-working plan for building software: start the build, then go get some coffee, or even lunch. Sometimes, building even meant it was time to go home for the day. But in a world now dominated by continuous integration, cloud-based hosting and real-time application monitoring, building and deploying software has become its own discipline.

Credit DevOps. Thanks to the newfound combination of developers and operations teams, building and deploying a critical application can now be a group affair. Or even better, an automated affair. No matter how it has evolved in your organization, build- and deploy-time software and tools are key enablers of velocity, and it is velocity that has driven both the agile and DevOps movements.

Anders Wallgren, CTO of Electric Cloud, said that new technologies like cloud hosting and virtualization have allowed the full promise of continuous integration to be realized in ways that weren’t possible just five years ago. “A lot of that has to do with virtualized environments. It is, in some ways, harking back to what CI originally wanted to be about, which was just to be continuously ready to ship your product, and always have it ready to go,” he said.

In fact, this newfound focus on the same overall goal could be seen as being the whole reason for the DevOps movement. In days past, operations and development had entirely different goals: Developers were always racing to meet business application needs and repair bugs, while operations was in charge of making sure no one from development (or anywhere else) harmed the integrity of the core systems being hosted. This is why operations, traditionally, acted as the gatekeeper to new servers and application deployment.

Today, however, continuous integration practices dictate that developers focus on adding features as fast as possible, and operations’ main job is to maintain the infrastructure that allows developers to create self-service, fast-moving deployments. Everyone is worried about the speed of change, and everyone wants to go faster.

“IT is getting more involved,” said Wallgren. “That depends on how you define IT, but if you think of it as operations, those barriers are getting broken down quite a bit. I think the DevOps movement is doing for development and operations what agile did for development and QA: making them work much more closely together. CI takes down the barrier between development and operations. You don’t just chuck the code over the fence, and off they go working on their problem: Everyone is looking at it as one single problem, and you have to work on it together.”

Lothar Schubert, senior director of product marketing at CollabNet, said, “The underpinning of DevOps is continuous integration. CI is not driving the need for DevOps, but it is the underpinning of DevOps. DevOps without continuous integration or continuous deployment is hard to do.”

Laurence Sweeney, vice president of enterprise transformation at CollabNet, said the agile movement pushed developers to go faster, but it’s the technology that has enabled them to deploy every day. “You’ve got agile, which arguably began in 2000 with the Agile Manifesto formalizing things we did in the late 1990s. As software developers, we went from a point where we were going to be building and releasing once a quarter, to building and releasing once a month. There were teams that thought they were doing well by building once a week, or once a month in some cases.”

But even once a week isn’t fast enough for most businesses these days. With business needs changing almost daily, and new products launching all the time, almost any company could benefit from continuous integration and continuous builds. In fact, velocity is so powerful that many companies that implement CI would never be able to return to a waterfall process.

“Continuous integration is one of those things that, when you experience it, you don’t want to stop doing it,” said Sweeney. “It really is the only way to build software these days. Most people have internalized the fact that you have to do CI.”
The push from tools
All this kerfuffle over going faster and deploying daily didn’t arrive overnight. It was enabled by the numerous new tools that have taken over software development and IT operations over the last decade.

“The value really became affordable only relatively recently, thanks to things like virtualization,” said Sweeney. “IT departments really need to get their heads around this. If they think they can constrain this thing, even if you can somehow prevent them from going out and buying cycles on a public cloud, you struggle to stop them from using a virtualization technology.”

Perhaps the movement toward continuous integration and deployment owes as much to developer impatience as it does to operations’ adoption of new technology. Indeed, with Amazon Web Services available as a quick-and-dirty alternative to the corporate IT provisioning policy, operations had to get faster and easier, for fear of losing internal hosting duties to the numerous external cloud providers that never say, “You can’t have that server until next week.”

The newfound ability to quickly and easily provision and deploy servers has opened up many more possibilities in the build-and-deploy chain. “I was talking to a customer who was explaining that their CI process will send private e-mails as part of static code analysis, with style checkers, and instead of blasting it out to a board or something, they get a private e-mail that says, ‘Please go fix it.’ If you ignore it for a week, your name starts getting published,” said Sweeney.
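That kind of notification is easy to picture as a small post-build step. Purely as a hedged sketch of the idea, here is what such a step might look like in Python; the checker, mail relay and e-mail addresses are hypothetical placeholders, not any vendor’s actual pipeline.

```python
import smtplib
import subprocess
from email.mime.text import MIMEText

def nag_committer(changed_file, author_email):
    """Run a style checker over one file and e-mail the author privately."""
    # pylint exits non-zero when it finds something to complain about.
    result = subprocess.run(["pylint", changed_file],
                            capture_output=True, text=True)
    if result.returncode == 0:
        return  # clean report; nothing to send

    msg = MIMEText("Please go fix it:\n\n" + result.stdout)
    msg["Subject"] = "Static-analysis findings in " + changed_file
    msg["From"] = "ci@example.com"                   # placeholder addresses
    msg["To"] = author_email

    with smtplib.SMTP("mail.example.com") as smtp:   # placeholder mail relay
        smtp.send_message(msg)
```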

Keeping code quality high is always a priority for development managers, but when applications are being built and deployed daily, maintaining that control requires a lot more processes and automated tests in the pipeline. Brad Hart, vice president of product management at AccuRev, said that “when the code leaves the developer’s fingertips, it’s the least mature. On the way to production there’s a lot of steps along the way. Deploy runs the gamut from deploying to a DevOps environment to deploying to a test environment.

“When the developers are working on a feature, they submit that work, we see people take those changes that are being promoted, automatically sending off a CI build so they can build it and run some quick unit tests, and deploy it to a staging server. Those changes are tied directly to the SCM system, and are associated with a story. Those changes move to the next level of hierarchy. When that is done, it’s automated. There’s a lot of automation we’re starting to see around this.”

And this all ties back into automating the build and test processes, said Hart. “The idea is that you want to automate the whole process, and in order to automate it you have to have control over the different levels of maturity of the code. If you have an eight-hour build, you don’t want some random developer code checked in to pollute that,” he said.

Keeping code separate is a key to unlocking velocity, said Hart. Having multiple stages in the build-and-deploy process means that each day’s worth of code can bake on its own, keeping bugs spaced across separate developer commits rather than having them all bunched up in one big build at the end of the week, he said.

“Days will go by without QA having a good environment to test against,” said Hart. When code written this afternoon is constantly mixing with untested code written last week, it can seriously damage the process, he said. In such cases, it is important to divide the pipeline to production into numerous steps to avoid mixing untested code with tested code. “We work with people to build that pipeline and keep the raw code away from production. At NASA, that’s 10 stages.”
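To make the shape of such a staged pipeline concrete, here is a rough Python sketch of the promotion idea: every stage must pass before a change moves on, so raw code never mixes with code that has already been vetted. The stage names, make targets and deploy.sh script are hypothetical placeholders, not AccuRev’s or NASA’s actual tooling, and a real pipeline would be triggered by the SCM system rather than run as a flat script.

```python
import subprocess

# Hypothetical stage names and commands; a real pipeline may have many more
# stages (NASA's, per Hart, has 10).
STAGES = [
    ("ci-build",   ["make", "build"]),
    ("unit-tests", ["make", "test"]),
    ("staging",    ["./deploy.sh", "staging"]),      # placeholder deploy script
    ("production", ["./deploy.sh", "production"]),
]

def promote(change_id):
    """Walk a change through each stage, stopping at the first failure so
    untested code never reaches the later, more trusted stages."""
    for name, command in STAGES:
        print("promoting change %s to stage %s" % (change_id, name))
        if subprocess.run(command).returncode != 0:
            print("stage %s failed; %s stays where it is" % (name, change_id))
            return False
    return True
```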

Not just playing games
And that’s not just something that applies to teams working on continually evolving projects. Even traditional packaged software can benefit from CI and increased velocity: When you only have to ship once, as in the videogame industry, continuous integration can still keep schedules intact and applications stable.

Eyal Maor, CEO of Xoreax, has brought such benefits to developers working on Nintendo, PlayStation and Xbox games. For these developers, the company’s IncrediBuild build-acceleration tool now supports the Xbox One and PlayStation 4, even before those consoles have shipped.

“We started with the Nintendo Wii, but even before we had compilation with PlayStation, which was part of the Visual Studio package we had back then,” said Maor. “Eventually, we added support to other platforms from Nintendo, Sony and Microsoft. With this new release for PlayStation 4 and Xbox One coming out, we have worked with those guys to offer developers the ability to accelerate the development of games even before the consoles themselves are out there to support it.”

Maor said that development has changed a great deal in the past two decades, and as a result, building software isn’t just about the compilers anymore. “Fifteen years ago, we didn’t have so many tools and tasks taking time from the development,” he said.

“Today, because they’re adding more and more requirements, and because there is much more QA—code analysis and unit testing and a lot of other tools people are using for their software—we see more and more demand for many more tools coming out within ALM, and compilers and compilation is taking a smaller part, if you take a look at the overall life-cycle management.”

Maor said this shift in focus toward process and away from pure compilation hasn’t been exclusive to the games industry. “I used to work in the enterprise space,” he said. “When I looked at those applications when coming to the development side, it reminded me of the exact evolution we have seen in the endpoint space, where people need to show they have processes in place. You have so many services in place, like CRM and BPM, and it’s not just ERP anymore. In R&D, it’s no longer just development; it could be rendering, it could be CAD, it could be test. It’s all part of one long process to deliver to your customers.”
Configuration conflagration
For developer and operator alike, there is one area of ALM that has completely changed in the last five years: configuration management. While once confined to CFEngine and a few other commercial tools, the modern world of configuration management is dominated by Chef and Puppet. New tools are also cropping up, like Ansible and Salt, both of which approach the configuration-management problem from specific angles to address shortcomings of existing solutions.

That’s exactly how Puppet got started: Founder and creator Luke Kanies was fed up with CFEngine and decided to write his own configuration-management system from scratch. Five years later, Puppet’s grown large enough to host its annual developer conference at San Francisco’s super luxurious Fairmont Hotel.

One area where Chef and Puppet are both trying to grow is around service management and orchestration. As applications increasingly encompass many servers running in parallel, the need to configure larger swaths of machines instead of individual servers becomes more important.

“Orchestration needs vary considerably from one customer to another,” said Kanies. “A stateless wide-scale Web application might not need any orchestration at all. If you look at a three-tier application, that probably needs more ordered bits on disc. It’s something we’re spending a lot of time on.

“If you look at how we’re ordering the rest of Puppet, we’re spending a lot of time figuring out how to get the orders we need out of people so that they get the application they want, and they get to specify what they want and what relationships they need and they do it on their own.”

Thus, the ability to configure numerous servers on the fly would let developers rapidly provision entire test labs and deploy entire services at once, all without having to log into each machine and set up users and connections by hand.
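Puppet and Chef express that in their own declarative languages, but the underlying idea is simple: describe the state you want, and only act when a machine doesn’t already match it. Purely as an illustration of that idempotent pattern (this is not Puppet or Chef syntax, the account names are hypothetical, and it would need root to run), a sketch in Python might look like this:

```python
import pwd
import subprocess

# Hypothetical desired state: these accounts should exist on every host.
DESIRED_USERS = ["deploy", "ci"]

def ensure_user(name):
    """Create the account only if it is missing, so repeat runs are harmless."""
    try:
        pwd.getpwnam(name)                 # already present: nothing to do
    except KeyError:
        subprocess.check_call(["useradd", "--system", name])

for user in DESIRED_USERS:
    ensure_user(user)
```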

All this configuration management does add another layer of assets to control, however. Both Chef and Puppet require coding to set up configuration scripts. Kevin Smith, vice president of engineering at Opscode (the company behind Chef), said that managing Chef recipes shouldn’t require a mindset change for developers.

“Obviously we use Chef, and we treat our cookbooks and recipes as being source code, the same as any of our application component sources,” he said. “It’s stored right alongside the component it’s going to deploy. As that component goes through its life cycle, the cookbook can be maintained in parallel. That’s a fairly common pattern we see out in the wild too.”

Meanwhile, a new generation of configuration-management tools is beginning to make it to market. Ansible was created by Michael DeHaan, who was in charge of Red Hat’s provisioning system known as Cobbler. He recently founded AnsibleWorks, a startup focused on building Ansible into an enterprise-grade configuration-management tool.

Ansible’s primary benefit over Chef and Puppet is that it was designed to be easy to learn and simple to use. While this is primarily made apparent when writing configuration scripts (DeHaan claimed it can take much less time to learn how to write Ansible scripts than to write Chef cookbooks or Puppet configuration files), he said another major focus of Ansible is on service orchestration.

“I may want to talk to the Web servers or the database servers only,” said DeHaan. “One of the things Ansible is good at is rolling updates. If I want to update 15 machines at a time, Ansible can conduct these higher-level IT processes. A lot of people focus on just the configuration management. When people look at ‘How do I deploy my applications in production?’ they have to write a lot of glue on top.”

That’s glue they don’t need to write with Ansible, said DeHaan. Additionally, unlike Chef and Puppet, which require an agent to be installed on the target machine, Ansible doesn’t require any software to be installed on the target at all, because it works over SSH.

“It can rsync over SSH,” he said. “We’re not actually just using SSH as a way to run shell commands; we’re pushing modules and running commands, but for us OpenSSH is the most peer-reviewed component of the infrastructure. If there’s a problem, it gets fixed instantaneously.”
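Ansible itself describes this kind of work in YAML playbooks, but the agentless, SSH-driven rolling update DeHaan describes is easy to sketch. The Python below is purely an illustration of the idea, not Ansible’s implementation; the hostnames, package name and update command are hypothetical, and a real tool would run each batch in parallel and read its inventory from a file.

```python
import subprocess

# Hypothetical inventory and update command.
WEB_SERVERS = ["web%02d.example.com" % i for i in range(1, 31)]
UPDATE = "sudo yum -y update myapp && sudo service myapp restart"
BATCH_SIZE = 15   # touch 15 machines at a time, as in DeHaan's example

def rolling_update(hosts, batch_size=BATCH_SIZE):
    for start in range(0, len(hosts), batch_size):
        for host in hosts[start:start + batch_size]:
            # No agent on the target: just push the command over OpenSSH.
            subprocess.check_call(["ssh", host, UPDATE])

rolling_update(WEB_SERVERS)
```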

Salt, meanwhile, is another newcomer to the configuration-management space, one more focused on instantaneous results and real-time server provisioning than its competitors.

But no matter which configuration-management, build or deploy tools you’re using, if your shop is anything like the rest of the software development world, you can bet all of those tools will be getting used more and more frequently with each passing week. Continuous integration, continuous deployment and application life-cycle management are all combining to allow developers to achieve previously unheard-of velocity.

But velocity can also lead to mistakes and confusion, which is why it’s so important to use the available tools to coordinate your team. When you’re shipping code every day, you’re never more than one day away from a show-stopping bug being introduced. But with the proper tools and processes in place, removing that show-stopper shouldn’t be like trying to solve a mystery.

Containing containers
While the rest of the enterprise world continues to shift its view from servers to virtual machines, a new shift in the way applications are contained is coming down the pike. Linux containers are beginning to gain traction with developers thanks to a new tool, Docker, that makes them as easy to handle as a Git repository.

Linux containers are nothing new; they’ve been around as a concept for some time now. But thanks to the combination of years of work on the Linux kernel and the sudden popularity of Docker, the concept has changed.

To begin with, the Linux kernel had to go through some serious overhauls in order to support Linux containers. The whole point of Linux containers is that you can package an application into a secure, isolated slice of the system where it cannot interfere with other applications. And where a single server might accommodate maybe 25 virtual machines, it could fit as many as 100 application containers, even running them all inside a single virtual machine instance. Behind all this, Linux is able to schedule CPU time fairly across all the containers on a system, preventing any one application from hogging all the processing time.

This is thanks to the Completely Fair Scheduler, which was merged into Linux in kernel 2.6.23. The CFS ensures all processes on a system get CPU time, without any tricks or traps to suck up much-needed processor attention from the applications that desperately need it.
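Container runtimes lean on the kernel’s control groups to hand the CFS those fairness knobs. Purely as a rough illustration of the mechanism (not Docker’s own code), on a cgroup-v1 system the relative CPU weight of a group of processes is just a file; the group name below is hypothetical, the mount point varies by distribution, and writing to it requires root.

```python
import os

# cgroup v1 layout; "webapp" is a hypothetical group name.
CPU_ROOT = "/sys/fs/cgroup/cpu"

def cap_cpu_weight(group, pid, shares=512):
    path = os.path.join(CPU_ROOT, group)
    os.makedirs(path, exist_ok=True)        # creating the directory creates the cgroup
    with open(os.path.join(path, "cpu.shares"), "w") as f:
        f.write(str(shares))                # default weight is 1024; 512 = half a share
    with open(os.path.join(path, "tasks"), "w") as f:
        f.write(str(pid))                   # move the process into the group
```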

But the Linux kernel itself is not the only software at work here. Docker is largely responsible for the sudden enthusiasm over Linux containers. Developers simply tell Docker to package up their application as a container, and it does so. Some developers call it GitHub for deploy.
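In practice, that packaging step is little more than a couple of commands run against a Dockerfile. The sketch below simply shells out to the Docker client with a hypothetical image name, assuming a Dockerfile already sits in the working directory.

```python
import subprocess

# "myapp" is a hypothetical image name; a Dockerfile describing the
# application is assumed to exist in the current directory.
subprocess.check_call(["docker", "build", "-t", "myapp", "."])  # package the app as an image
subprocess.check_call(["docker", "run", "-d", "myapp"])         # start it as a detached container
```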

While it’s still early days for Linux containers going mainstream, the sudden enthusiasm for Docker is exactly the same sort of fervor seen around Git, Hadoop and Vagrant. With that kind of excitement in the community, you can bet Docker is a project to watch.
