The year 2013 was a wild one for Solomon Hykes, founder and CTO of Docker. His startup, formerly known as DotCloud, wasn’t exactly the darling of Silicon Valley, and yet a simple tool his staff had built to help drive adoption of the company’s Linux container-based Platform-as-a-Service offering was gaining momentum. Docker.io, as it was called, was a way of turning an application into a Linux container, complete with all of its dependencies.

A few months into 2013, it was apparent to Hykes and his team that Docker.io was the most important thing they were working on. DotCloud investor Dan Scholnick of Trinity Ventures remembered a board meeting in the first quarter of 2013 where Hykes asked the board if he could open-source Docker.io. The board—Scholnick included—was hesitant.

But Hykes went ahead and made the project’s code open, and the move expanded its community to the point where the DotCloud moniker and PaaS game plan were thrown in the trash by October. Today, his company is called Docker, and it’s changing the way developers and IT deploy applications at scale.

(Related: What’s in Docker 0.7)

John Rymer, vice president and principal analyst at Forrester Research, explained why Docker is appealing. “One of the benefits of Docker is that it’s almost like a generic version of Heroku. You have this container, and it can use virtual machines very intelligently. You can actually expand virtual machines under the covers. This is a more generic approach that could potentially be very, very useful. Portability is useful. A lot of what runs in clouds, and a lot of what people bring to clouds you could just stick into a container. You don’t have a dependency on Amazon or Azure or whatever. It’s very easy to move your code back and forth.”
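As a rough illustration of that portability (the image name, port and Dockerfile here are hypothetical), an image built once from an application and its dependencies can be run unchanged on a laptop, an Amazon instance or an Azure VM, so long as Docker is installed:

    # Build an image from the app and its dependencies, described in a Dockerfile
    # in the current directory (the image name "myapp" is made up for this sketch).
    docker build -t myapp .

    # Run it on this machine, mapping the container's port 8080 to the host.
    docker run -d -p 8080:8080 myapp

    # The same image and the same command work on any other Docker host;
    # nothing in the image depends on a particular cloud provider.
    docker run -d -p 8080:8080 myapp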

Rymer also said, however, that Docker as a technology still has headway to make. “First, you have to prove you have something that works,” he said. “I think they’re just coming out of that stage. And hardly anybody is asking me about Docker. They’ve got a lot of buzz now among the vendors, but people don’t ask me about Linux containers.”

Hykes is hoping to change that in 2014, and so is Scholnick. They said Docker has big plans for monetization this year, including a GitHub-like service for sharing containers for more general use and a for-pay repository for building containers from existing artifacts. Docker will also continue to offer its Linux container-based PaaS.

Not the first

When Hykes created DotCloud in 2007, the focus was on PaaS, but first they needed the containers. “The sound bite you’ll hear a lot is that DotCloud was a PaaS and pivoted to open-source container technology, and that’s Docker,” he said.

“This is all true, but actually before we built the PaaS, we spent two years building a first version of Docker. What was known as DotCloud in 2008 to 2010—those two years prior to launching any PaaS—all we did was develop open-source container technology.”

But that first attempt to build Docker didn’t work out, primarily, said Hykes, because Linux containers weren’t quite ready at the time. “The original intent of DotCloud was to develop a standardized deployment platform that could allow you to package an application into a standard container and deploy that container in any machine and move it seamlessly around that portable foundation. We failed in that we never convinced enough people to use it. We pivoted to applying that technology to one particular product: the PaaS we launched in 2010 and still operate today.”

But over the next three years, the Linux kernel caught up to Hykes’ vision of a container-based cloud. “The low-level technology has existed for a while, but it’s only recently that all the pieces really came together in a form that could be leveraged in the Linux kernel,” he said.

“You could previously modify the kernel to do sandboxing and namespacing, but it required patching. That was fixed gradually, and starting in 2011 and 2012, it became possible to do proper sandboxing and namespacing in the Linux kernel without patching. Sandboxing is one key low-level capability required for a container engine. You could do something like Docker before that, but it came with the caveat of only working on patched versions of the kernel. That is one of the reasons our initial try at this never took off.”

Now that containers are fully supported in the standard kernel, instead of being something only available to users who’ve independently patched their kernels with experimental code, Docker can flourish, said Hykes.
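What “without patching” means in practice can be sketched with the util-linux unshare tool rather than Docker itself (flag support varies by kernel and util-linux version, and root privileges are assumed): a stock kernel can drop a process into its own set of namespaces.

    # Start a shell in fresh PID, mount, UTS, IPC and network namespaces
    # on an unmodified kernel (run as root).
    sudo unshare --pid --fork --mount-proc --uts --ipc --net /bin/bash

    # Inside, the shell sees only its own process tree...
    ps aux

    # ...and hostname changes stay inside the namespace.
    hostname sandboxed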

And though Docker has taken off, the focus for Hykes is on improving the user experience through better services, and on making sure the Docker community is not controlled exclusively by Docker itself.

“We want Docker to be more than that blob of code we throw over the wall every month. We are giving up a lot of control and visibility. We have core contributors who do not work at the company. We’re removing all those roadblocks. Part of that is through hiring more people and developing more tools. A part of it is improving the hosted service infrastructure that runs 24×7 and improves Docker.

“For Docker, the open-source project, to be better, the hosted service that supports it needs to get better as well. You could summarize that as GitHub for Docker, but we want a place for people to store images and to track their images across many machines.”
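In practice, that hosted, GitHub-like workflow looks roughly like this (the account and image names are hypothetical, and the exact registry commands have shifted between Docker versions):

    # Tag a locally built image under an account name and push it to the hosted registry.
    docker tag myapp exampleuser/myapp
    docker push exampleuser/myapp

    # On any other machine, pull the shared image and run it;
    # the registry is the central place images are stored and tracked.
    docker pull exampleuser/myapp
    docker run -d exampleuser/myapp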

Hykes added that Docker’s future is lined with partners. He said Docker doesn’t want to replace best-of-breed management and monitoring tools, but rather to integrate with those tools in all categories.

He also said that the emergence of SOA 2.0 creates a perfect environment for Linux containers. When you’re developing a number of loosely coupled services that work together but live independently in their own development cycles, Linux containers make perfect sense, he said.

“The best way to deploy service orientation is containers,” said Hykes.
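A minimal sketch of that idea, with hypothetical image names and version tags: each service ships as its own container on its own release cycle, and services talk to one another only over the network.

    # Each loosely coupled service runs in its own container with its own release cycle.
    docker run -d --name orders-db postgres        # backing data store (image name assumed)
    docker run -d --name orders-api --link orders-db:db orders-api:1.4
    docker run -d --name billing -p 9090:9090 billing:2.1

    # Upgrading one service replaces only its container; the others keep running untouched.
    docker stop billing && docker rm billing
    docker run -d --name billing -p 9090:9090 billing:2.2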