Linux containers have existed for more than 10 years, but their usefulness and safety never really made inroads with enterprise developers. Monday, however, marked a clear turning point in their evolution: the release of Docker 1.0, the first version of the container software to be declared enterprise-ready.
At DockerCon, Docker (the company) announced its first fleet of commercial services alongside the 1.0 release of the Linux container software, also known as Docker. The services include typical enterprise support and training options, as well as cloud-based deployment support via Docker’s hosted container service.
Docker Hub also launched at the conference. Ben Golub, CEO of Docker, said that Docker Hub is “a hosted service geared toward developers. It gives them tools around content, collaboration and workflow, including access to 14,000 Docker-ized applications, Web hooks, and private registries. It’s open and free to anyone to use; we charge for private registries, which also comes with the opening of a set of curated content.”
(Related: Our prediction of Docker’s impact)
Docker Hub aims to give developers an easier on-ramp to building Docker containers, which can require a shift in mindset for engineers used to deploying with more traditional tools. But while Docker the company was throwing itself an enterprise coming-out party in San Francisco, the week’s ancillary announcements were more indicative of the broad support Docker is gaining on all fronts.
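That on-ramp amounts to a short build-and-push loop against the hosted registry. A rough sketch of the workflow (the repository name `example/myapp` is a placeholder, and the commands assume a local Docker daemon and a Docker Hub account):

```shell
# Build an image from the Dockerfile in the current directory;
# "example/myapp" is a hypothetical Docker Hub repository name.
docker build -t example/myapp:1.0 .

# Log in and push the image to Docker Hub, where it can be shared
# publicly for free -- private registries are the part Docker charges for.
docker login
docker push example/myapp:1.0
```

Anyone can then pull and run the same image, which is the collaboration workflow Golub describes above.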
Chief among them: Red Hat, also at DockerCon, announced the general availability of Red Hat Enterprise Linux 7, which now includes Docker. This edition of the standard enterprise Linux distribution makes XFS the default file system and shifts to systemd to orchestrate system functions, but it was the Docker news that showed just how closely the two companies are now working together.
Other big names in enterprise software at the show included IBM Fellow Jerry Cuomo, Rackspace CTO John Engates, Red Hat executive vice president and CTO Brian Stevens, and Google’s vice president of infrastructure Eric Brewer.
Brewer even announced the release of Kubernetes, a new open-source project that manages pods of Docker containers, allowing multiple instances of an application or service to be managed as a single unit with automated data-center migration and failover.
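Kubernetes was brand new at the time of this announcement, but the core idea — declare a group of containers once and let the system keep the desired number of copies running, restarting or relocating them on failure — survives in the project’s manifests today. A hypothetical sketch in the current YAML form (names and image are placeholders, and this API is later than the version Google announced at the show):

```yaml
# Hypothetical sketch: ask Kubernetes to keep three replicas of a
# single-container pod running, rescheduling them if a node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: example/myapp:1.0
```

The “single unit” the announcement describes is the pod template; the controller treats every copy as interchangeable, which is what enables the automated migration and failover.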
Docker’s Golub said that the Docker ecosystem is growing at a phenomenal rate. “We’ve been amazed at how this ecosystem has taken off. People are using us at a speed we never anticipated. You can get a sense through the list of speakers. It’s a nice healthy mix of customers using Docker, as well as folks like Chef, Puppet, Salt and Ansible, Red Hat, and academics,” he said.
Indeed, many attendees were already experimenting with Docker, even before the 1.0 release. Josiah Kiehl, software engineer at Riot Games (the maker of League of Legends), said that his team has been testing its Docker deployments at larger and larger scale to prepare for full-scale use.
“Docker is a shift in paradigm from the way a lot of people do configuration management these days,” he said. “There’s been an oscillation from immutable to mutable infrastructure. A long time ago, the only way people did configuration management was to build an entire machine, then snapshot it, then ship that 5GB image out around the world. Then Chef, Puppet and CFEngine said that shipping images isn’t viable, so they introduced this concept of mutable infrastructure that can make small changes in the box without having to redeploy the whole image. That was the answer to the problem.
“Now, we’ve got brittle infrastructure, and any change in deployment can change things elsewhere. Docker says, ‘Let’s go back to the immutable deployment, and we don’t have to ship huge images.’ Everybody sort of accepted their fate by using Puppet and Chef, and then dealt with the complexity of mutable infrastructure. With Docker, you don’t have to do that.”
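The immutable workflow Kiehl describes looks roughly like this in practice: a small recipe builds a layered image once, and that identical image is shipped everywhere instead of machines being mutated in place. (The base image, packages, and paths below are illustrative, not from Riot’s setup.)

```dockerfile
# Illustrative Dockerfile: each instruction produces a cached layer,
# and the finished image is deployed as-is rather than modified on the host.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
COPY site/ /usr/share/nginx/html/
CMD ["nginx", "-g", "daemon off;"]
```

Because layers are shared and cached, only the layers that changed move over the wire on redeploy — which is how Docker sidesteps the 5GB-snapshot problem Kiehl mentions while keeping deployments immutable.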
Matt Ray, who works on Chef at Opscode, said that Docker is not a replacement for Chef, however. Instead, it allows for both mutable and immutable infrastructure policies to exist at the same time.
“Our user base tends to be very aggressive on adopting bleeding-edge stuff, so obviously we needed to have good Docker support,” he said. “Every third or fourth talk here, people have said, ‘I kicked this off with Docker and then configured it with Chef.’
“We have a project called Chef Container. It bundles up your Chef managed infrastructure and pushes it into a Docker container. One of the keys to Chef Container is it has a Chef resource called Container Service that intercepts all of the calls to manage services that a virtual machine would do for itself. The Container Service hands it up to [the system]. As a result, the things you write for regular virtual machines, you can port to Docker with three lines of code.
“We can talk about moving stuff in and out of Docker, into the cloud, or onto your desktop. We’re trying to give people complete flexibility with how they treat this stuff, without treating Docker as virtualization. People are going to treat it as either lightweight virtual machines, or as service virtualization.”
Chef aims to support both use cases, Ray added.