Containers are changing the way developers architect, build, test and deploy cloud-based applications. At the Container Summit held yesterday in New York, the convergence of Docker, microservices and OS-level virtualization was front and center.

“Containers are eating the world,” said the event’s emcee, Joyent CEO Scott Hammond, borrowing from Netscape founder Marc Andreessen’s famous declaration about software. “Everyone is using containers, learning about them, experimenting with them, and you have to wonder why.”

Hammond introduced many of the day’s themes—Docker, microservices, DevOps and the cloud—as a collection of new development models, organizations, skill sets and technologies converging to reinforce the trend of containerization, bound together by the need for speed.

“As developers, we have this love affair with containers because they allow us to simplify and accelerate the development process, and get from dev to test to runtime very fast,” he said.

As Gartner cloud infrastructure analyst Dennis Smith put it at the conference, containers are on the rise in application development, OS architecture and infrastructure management because of their relationship with the future of cloud computing. He cited a Gartner projection that by 2020, at least 70% of new application development projects would be deployed on cloud architectures, up from less than 10% today.

Tying this back to containers, Smith said automation is the “secret sauce” of any successful cloud implementation, whether public or private.

“DevOps is microservices using bite-sized infrastructure components, or containers, in a highly automated fashion in terms of provisioning, managing and operating, all linked into a Continuous Integration deployment system,” he said.

Docker deployment and microservices
As pronounced an impact as containers have had in such a short time (faster development cycles without the lengthy reboots associated with VMs, and performance on par with bare metal), they still face challenges. Smith pointed to the increased complexity of container implementations, the difficulty of changing enterprise culture around monolithic applications, and the security issues still apparent in Docker and its peers.

For developers, his guidance is to act now: invest in changing development and test configurations, and work toward deploying containers in production. That advice led directly into a keynote from Joyent CTO Bryan Cantrill.

In deploying Docker, the biggest riddle Cantrill illustrated for both Dev and Ops is choosing the layer at which to virtualize. Hardware virtualization can be applied more broadly than ever before, and platform-level virtualization is also an option, but Cantrill explained why Joyent’s container philosophy, embodied in its SmartOS and newly released Triton platform, skews toward OS-level virtualization, and how that choice ties into microservices.

“Microservices are an embodiment of the Unix philosophy applied to distributed systems,” said Cantrill. “They do one thing, and do it well.”

(Related: Digging into microservices)

Docker is designed to be cross-platform, but as Cantrill explained, in practice it is Linux-centric. Joyent has therefore worked to leverage Docker’s Remote API to execute binaries natively with Triton, so that the data center simply acts as one very large Docker host.
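The client-side view makes this concrete: any tool that speaks the Remote API can treat Triton’s endpoint as if it were a single Docker host. Here is a minimal sketch in Go, assuming a hypothetical endpoint address; the GET /containers/json call it makes is the same one behind “docker ps,” and it works against any Docker host with the Remote API exposed.

package main

// Minimal sketch: listing containers through the Docker Remote API over HTTP.
// The endpoint address below is hypothetical; substitute any Docker host
// (or a Triton endpoint) with the Remote API enabled, commonly on port 2375.

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// container captures a few fields from the /containers/json response.
type container struct {
	ID    string   `json:"Id"`
	Image string   `json:"Image"`
	Names []string `json:"Names"`
}

func main() {
	// GET /containers/json is the Remote API call behind `docker ps`.
	resp, err := http.Get("http://docker-host.example.com:2375/containers/json")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var containers []container
	if err := json.NewDecoder(resp.Body).Decode(&containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID[:12], c.Image, c.Names)
	}
}

To the client, nothing reveals whether that endpoint is a single machine or, as with Triton, an entire data center.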

“With a VM model, Docker is purely a complexity additive,” he said. “What we’re doing is moving VMs aside to run Docker hosts directly on the metal. The Docker host becomes purely virtualization, and you don’t manage VMs.”

Thinking about Docker deployment in this way makes the relationship between containers and microservices even more important. John Willis, a Docker evangelist who joined the company through its acquisition of software-defined networking startup SocketPlane, went further, describing how the two are inextricably linked.

His keynote, titled “Guns, Germs & Microservices,” painted microservices as small, autonomous services that are loosely coupled with containers in service-oriented architectures. The true power of containers and microservices working together, he said, is in enabling Continuous Integration and Continuous Deployment for cloud-based application development.

“Continuous Integration was the killer app, and Continuous Deployment will be the [next] killer app,” said Willis.

Containers still have work to do
The rest of the Container Summit keynotes each drove home the specifics of Docker deployment across various use cases and industries. Jacob Loveless, CEO of IaaS financial infrastructure company Lucera, explained how containerized infrastructure gives stock traders and Wall Street firms reliable performance analysis and a rollback safety net for trading data, a safeguard against another stock market crash.

Jason Pincin, a mobile analytics engineer at Walmart Labs, described how containers and microservices reduced deployment complexity at Walmart through an asynchronous delivery pipeline. Pincin’s talk was followed by a “fireside chat” in which Gartner’s Smith interviewed Shopify Web infrastructure engineer Simon Eskildsen about containers in production.

Eskildsen highlighted many of the issues still plaguing containers (logging, and accessing and securing private files, or “secrets,” in a containerized environment), as well as the complexity that distributed orchestration adds to Docker deployment.

“Don’t try to skip ahead,” he said. “Everyone is promising you this silver platter of distributed orchestration. First make your applications run in a containerized environment and solve logging.”
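In practice, “solve logging” usually starts with the convention Docker itself assumes: the application writes to stdout and stderr and the runtime captures the streams, rather than each container managing its own log files. A minimal sketch in Go, with a hypothetical service name:

package main

// Minimal sketch of container-friendly logging: write to stdout and let the
// container runtime capture it (retrievable with `docker logs <container>`),
// instead of writing log files inside the container.

import (
	"log"
	"os"
	"time"
)

func main() {
	// Log to stdout, not to a file: collection belongs to the runtime/host.
	logger := log.New(os.Stdout, "checkout-service ", log.LstdFlags|log.LUTC)

	for {
		logger.Println("heartbeat: service healthy")
		time.Sleep(30 * time.Second)
	}
}

From there, a logging driver or host-side collector can forward the captured streams, keeping the application itself unaware of the logging infrastructure.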

For Shopify, one of the world’s largest Rails applications with more than half a million lines of Ruby code, Eskildsen said he sees value in microservices. But when shifting to containerized deployment, he preached a careful focus on maintaining an application’s existing feature set.

“When you’re transitioning into immutable infrastructure, everyone expects to still see all the great things your application already has,” said Eskildsen. “Focus on retaining the current feature set and then setting up for success in the future.”

(From left) Tutum CEO Borja Burgos, Docker evangelist John Willis, InfoSiftr’s Tianon Gravi, and moderator and Joyent CTO Bryan Cantrill

The Container Summit wrapped up with a panel discussion moderated by Joyent’s Cantrill, featuring Docker’s Willis; Borja Burgos, CEO of Docker hosting company Tutum; and Tianon Gravi, senior vice president of operations at software consulting company InfoSiftr.

In discussing where Docker and containers need to go next, one of the biggest concerns mentioned was security.

Cantrill called current Docker security a poor imitation of what it will ultimately become: nowhere near production-grade, multi-tenant, Internet-ready security. As he put it, Docker is still dealing with traditional software and systems hacking, or “white hat” issues.

(Related: How Red Hat and the open-source community are fortifying Docker security)

Willis said the only way forward for containers is extensibility. He briefly described a framework Docker is currently working on that focuses on API extensibility, allowing any network to plug into a northbound/southbound API structure and building on the momentum already behind the container movement.

“The biggest thing Docker did was to commoditize the use of containers,” said Willis. “You can’t underestimate what they’ve done with the file system. That’s created this amazing opportunity to use containers and images in this beautiful workflow.”

Cantrill said Docker belongs in production, but the software development industry as a whole still needs to define what production means in this new containerized context.

“The Docker revolution is real,” said Cantrill. “It allows developers to express their entire ecosystem on their laptops.”