Businesses want to move faster, develop more software, and deploy software and updates more often, but doing so within a traditional software architecture puts a heavy burden on developers. To ease the pain, more businesses and developers are turning to containers.
A software container packages software so that it can run anywhere, regardless of the environment. “Everything comes back to being faster and being cheaper than the competition from a core business standpoint. How can you deliver software faster, and how can you make sure you can deliver it in a way that is more cost effective than other competitors in your market,” said Mackenzie Burnett, product lead for Tectonic at CoreOS, a container orchestration platform provider. “What containers have enabled is both an organizational speed in terms of how you deliver software and how you develop software. On the other hand, it allows for significant cost savings.”
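That packaging step is concrete enough to sketch. The following is a minimal example, assuming a hypothetical Python web app; the file contents, names and tags are illustrative, not drawn from any vendor quoted here.

```bash
# Describe the app and all of its dependencies in a Dockerfile, then build a
# portable image from it (app, base image and tag are all hypothetical).
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # dependencies ship inside the image
COPY . .
CMD ["python", "app.py"]
EOF
docker build -t webapp:1.0 .
docker run -d -p 8080:8080 webapp:1.0   # runs the same on a laptop or a server
```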
Containers are not a new phenomenon, but it wasn’t until recently that they were made easily accessible to developers. In 2013, the software container platform provider Docker announced a framework that made container technology portable, flexible and easy to deploy. “When Docker started, the focus for Solomon Hykes, founder and head of all technology and product for Docker, was on two areas: the democratization of [containers] and the democratization of the container technology for developers,” said David Messina, SVP of marketing and community at Docker. What Hykes was able to do was separate the application concerns from the infrastructure concerns and make container technology accessible to developers, Messina explained.
Before Docker, containers were not accessible to developers. “It was actually an obscure Linux stack technology used by operations folks for isolation,” said Messina. The first generation of containers, also known as system containers, were primarily focused on virtualizing the operating system, according to Arun Chandrasekaran, research vice president of storage, cloud and big data at Gartner. “What Docker really did was ride on the coattails of past innovations and past work and provide a very simple application interface to system containers,” he said.
Today, containers have become widely popular. According to Gartner, client inquiries about containers increased 300% in 2016.
Another reason for this surge in containers is what Gartner calls the digital business. According to Chandrasekaran, more and more businesses are becoming software companies, and they are under growing pressure to deliver software continuously.
“People want to go faster. The whole idea of ‘software is eating the world’ is that businesses outside of Silicon Valley need to realize they can be disrupted by teams that adopt new technologies and build applications that can be changed as quickly as customers require changes to be made,” said Alexis Richardson, CEO of Weaveworks, a container and microservices networking solution provider.
Docker donates core components of its technology to the industry
In order to help the industry benefit from its technology, and to foster innovative container solutions, Docker has donated components and ingredients of its platform to open-source foundations. In 2014, Docker introduced libcontainer, now known as runC, a built-in execution driver for accessing container APIs without any dependencies. The specification and runtime code were donated to the Open Container Initiative in 2015 to help create open industry standards for container formats and runtimes.
In 2016, the company’s containerd runtime was released as a standalone, open-source project. Just last month the company announced its intent to donate the runtime to the Cloud Native Computing Foundation (CNCF). According to the company, the runtime’s and the organization’s goals align in terms of advancing cloud-native technology and providing a common set of container technologies. Docker will continue to invest in and contribute to the project. The company is currently working on implementing the containerd 1.0 roadmap, with a target date of June 2017.
“Containerd is at the heart of Docker. We need the project, and we need it to be successful,” said Patrick Chanezon, member of Docker’s technical staff. “Giving it to the CNCF will just expand the community that can collaborate on it.”
How to take advantage of a container architecture
Containers are often associated with microservices, a software approach where developers break applications down into small, independent components instead of dealing with one large monolithic application.
“Container technology is an excellent way to implement microservices because what microservices does in a nutshell is allow you to break up your monolithic applications into a set of independent services. Each service can then be deployed, upgraded, maintained and bug-fixed on its own without having to impact the whole application,” said Sheng Liang, CEO of Rancher Labs, a container management platform provider. “Without containers, businesses have to worry about the different environments software has to be deployed in, and packaging the application then becomes a very labor-intensive and time-consuming process.”
According to Liang, because microservices need to be individually packaged, deployed, scaled and upgraded, containers are a nice fit because of their lightweight architecture. This enables continuous integration and continuous deployment, and can cut build and development time down to minutes.
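As a rough sketch of that independence (the service names are hypothetical), one microservice can be rebuilt and redeployed on its own while its neighbors keep serving traffic:

```bash
# Upgrade only the "cart" service; the "catalog" service is never touched.
docker build -t shop/cart:1.1 ./cart        # assumes ./cart holds that service's Dockerfile
docker stop cart && docker rm cart
docker run -d --name cart -p 8081:8080 shop/cart:1.1
docker ps --filter name=catalog             # still running, unaffected by the upgrade
```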
“More than a revolutionizing approach to software development, containers and microservices enable greater app agility, reliability, portability, efficiency and innovation. Moving away from monolithic app architecture to a distributed microservices-based architecture, often leveraging containers, means that developers can quickly introduce new features without impacting application functionality, while maintaining availability at scale,” said Corey Sanders, director of compute for Azure at Microsoft.
Containers enable agility by letting developers build an application once, run it on any infrastructure, and package everything together in an easily shareable way. In deployment, containers shorten the testing cycle by packaging all the dependencies and enabling consistent deployments, according to CoreOS’s Burnett.
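In practice, that build-once, run-anywhere loop is three commands; the registry address below is a placeholder, and the sketch assumes a Dockerfile like the one shown earlier.

```bash
# Build once, publish to a shared registry, then run the same image anywhere.
docker build -t registry.example.com/team/webapp:1.0 .
docker push registry.example.com/team/webapp:1.0
# On any other host -- a teammate's laptop, a test rig, a cloud VM:
docker run -d -p 80:8080 registry.example.com/team/webapp:1.0
```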
If you are going to do microservices, there is really no reason not to use containers, according to Rancher Labs’ Liang. However, not all applications are going to be ready for a microservices architecture. “Even if you have a monolithic application, there are still a lot of benefits to using a container because the fundamental benefits are universal packaging and a deployment format that provides consistent runtime,” he said.
Burnett explained that the difference between the architectures is that if you have one giant monolithic application, with myriad dependencies, you typically will have a giant team working on it. With a microservices architecture, you have smaller teams working on separate services that don’t have the same tightly coupled dependencies on one another, she said.
“In either case, the container is the package for the thing you’re replicating. If your architecture is monolithic, you’re going to have a few big clunky boxes to replicate. If your architecture is made of microservices, you’ll have a lot of small boxes that you can replicate independently of each other. Most enterprises have architectures that are a mix of the two, monolithic and microservices,” she said.
However, Microsoft’s Sanders explained that scaling with monolithic applications can be problematic because developers need to deploy more application instances, create new virtual machines or provision new servers. “When combined with testing to ensure that the system works as expected after the changes, scaling monolithic applications can be time-consuming and expensive. This complexity can be exacerbated further when there are resiliency requirements, which is often the case with enterprise applications,” Sanders said.
A microservices architecture is designed to scale independently, providing agile scaling at any point in time, Sanders explained.
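With an orchestrator such as Kubernetes, that independent scaling is a single command; the deployment names here are hypothetical.

```bash
# Scale one microservice without touching the rest of the application.
kubectl scale deployment/checkout --replicas=10
kubectl get deployments   # only checkout's replica count changes
```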
And then there are the situations where applications may not be suitable for containers or microservices at all. “You can’t just take something that is built one way, and change it. Not everything needs to be containerized. It is just a part of what architecture decision is best for your business,” said Burnett.
How do containers differ from virtualization?
If the idea of taking things from an application and isolating them sounds an awful lot like virtualization, that is because it is, according to Betty Junod, director of products at Docker. Junod said that conceptually, virtual machines (VMs) and containers are similar, but architecturally they are different.
“If you think about VMs, those are effectively machine instances that were set up by operations to effectively allocate memory resources whereas the packaging that we are talking about here with Docker and containers is in the hands of the developers, and it can run on any infrastructure,” Docker’s Messina added.
In a sense, containers are a lighter-weight VM. They are an application packaging format that doesn’t require developers to package in the operating system, as a VM would, according to CoreOS’s Burnett. “What this means is, coupled with container orchestration platforms such as Kubernetes, you can pack servers in a much better way,” she explained. “A way to think about it is in terms of Tetris. If you aren’t paying attention to what you are doing, once you get to the top, you run out of space. If you pay attention, you have to pack the pieces much more efficiently [to] effectively use the space,” she said.
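Burnett’s Tetris analogy maps to how an orchestrator bin-packs containers onto servers. Here is a minimal sketch in Kubernetes terms, with all names and numbers hypothetical: declaring each container’s resource requests gives the scheduler the “piece sizes” it needs to pack nodes tightly.

```bash
# Give the scheduler explicit piece sizes so it can pack nodes efficiently
# (inline manifest via heredoc; image and numbers are illustrative).
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 20
  selector:
    matchLabels: { app: webapp }
  template:
    metadata:
      labels: { app: webapp }
    spec:
      containers:
      - name: webapp
        image: registry.example.com/team/webapp:1.0
        resources:
          requests: { cpu: 100m, memory: 128Mi }  # what the scheduler packs against
          limits:   { cpu: 250m, memory: 256Mi }  # hard ceiling per container
EOF
```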
However, the real key differences are that virtualization typically has been bound to an infrastructure provider, and that until recently virtualization was expensive and made it too difficult to build real applications out of components, according to Weaveworks’ Richardson. “Containers are very quick to start, and very lightweight in terms of their capacity consumption requirements,” he said. “There is a possibility that you could build much more realistic applications using containers and get some of the benefits of VMs at the same time.”
The three key benefits that make containers more appealing than VMs are their ability to run on bare-metal infrastructure, their smaller resource footprint, and their ability to bundle application dependencies, according to a recent study by Chandrasekaran and Raj Bala, a research director at Gartner.
Approaching containerization
There are three entry points to adopting containers, according to Docker’s Junod. They include:
- Taking an existing application, containerizing the whole thing, and slowly starting to carve pieces off for modernization
- Taking commercial off-the-shelf applications that are already in-house and containerizing them to be more portable
- Starting with a new application
However an organization decides to approach containers, there are some best practices that can help them along the way.
Traditionally, a lot of technology adoption requires big top-down initiatives, but containers follow a very different adoption process, according to Rancher Labs’ Liang. Container adoption tends to start organically with developers because the benefits are tangible and the technology is simple to use. In addition, businesses don’t have to turn every single application into a container on day one. They can start with one application and eventually migrate everything over. Some applications may be working just fine and not updated very often, so a company can keep them on legacy infrastructure and not implement a container deployment model, Liang explained. “In general, there is a lot of flexibility and freedom in how an organization can adopt container technology,” he said.
According to Microsoft’s Sanders, container-based and microservices architectures take a lot of planning. The first thing business leaders need to do is prioritize their applications and services, and figure out which ones are most important to their daily operations. “Applications requiring high availability with fast, agile development can benefit most from these new models. Depending on the business goals and time horizons, enterprises can choose from many ways in which to transition to these modern architectures,” Sanders said.
CoreOS’s Burnett recommends having a small team within the organization lead the transition. The team starts playing around with the technology, evaluating the technology platform, and acts as a prototype for the rest of the company. “The prototyping does not just include the technology. The team is also prototyping how to build a team, the best practices for training people on the new technology, and how to communicate between teams,” she said.
In order to start using containers right away, Sanders believes a lift-and-shift approach to existing apps may be the best solution.
A lift-and-shift approach allows developers to port applications without having to refactor or deal with extensive code modifications, according to Weaveworks’ Richardson. For example, a lift-and-shift of a small legacy app allows developers to move it to the cloud, make it redundant, and create a sleeping copy so that it has a backup in case the primary app is overloaded, he explained.
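A minimal lift-and-shift sketch, assuming a legacy app whose binaries can simply be copied in unchanged (all paths and names are hypothetical):

```bash
# Wrap the legacy app as-is in an image -- no refactoring -- then publish it
# so a standby copy can be started in the cloud if the primary is overloaded.
cat > Dockerfile <<'EOF'
FROM ubuntu:24.04
COPY legacy-app/ /opt/legacy-app/    # the app, copied unchanged
CMD ["/opt/legacy-app/start.sh"]
EOF
docker build -t registry.example.com/team/legacy-app:lifted .
docker push registry.example.com/team/legacy-app:lifted
```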
For teams trying to take full advantage of a microservices architecture, the best way to go about it is to fully re-architect their applications, according to Sanders. “This development mechanism lends itself well to the distributed, resilient, and agile nature of a microservices-based application,” said Sanders. To successfully re-architect an application, Sanders suggests developers take a gradual approach: identify the components that benefit most from cloud scale and agility in deployment, and rebuild those components first.
“Whether you choose to adopt containers and microservices through a legacy migration, lift-and-shift, a re-architecture, or greenfield, it is always going to come down to the question of how do you make this easy for application developers,” said Richardson.
Gartner’s Chandrasekaran explained that sometimes the effort required to retool or refactor a legacy application can outweigh the benefits a company could potentially get from containers and microservices. “Organizations have to have a very clear idea of their portfolio and figure out which applications can benefit from the transition. Secondly, they have to identify what the metrics are and how they are going to measure the status of these projects to figure out if it has been a successful initiative.”
One of the biggest challenges organizations will run across is the cultural transition. Containers are relatively new to developers, and the skill sets aren’t all there, according to Chandrasekaran.
“If you really want this to be successful, you have to have a more fluid organization where people are collaborating increasingly with each other, trying to do new things, trying to in some sense break things, and willing to learn from those things,” he said. “A lot of this movement is going to really come from the willingness of organizations to relook at their skills, relook at their processes and, more importantly, relook at the culture and leadership, and how they reward and hire people.”
The container toolbelt
Containers are a way to easily package your software, but there is still the matter of existing server, storage, networking and security infrastructure a business needs to consider.
As leaders look to create a container strategy, they need to address operations management, application software installation, infrastructure software installation and management, and physical and virtual server installation. Each of those pieces requires different tools and approaches for a successful container transition, according to Gartner’s Chandrasekaran.
Operations management includes scheduling, resource management and monitoring. “Scheduling and resource management are key, as containers allow denser packing of hardware resources than virtualization,” said Chandrasekaran. He recommends looking at tools such as Google’s Kubernetes, Docker Swarm, Apache Mesos and Mesosphere’s DC/OS.
Since containerization is such a new technology and skill for everyone, CoreOS’s Burnett says it is best to look toward a solution, like Kubernetes, that has an established, operationalized knowledge base on how to run containers in production.
According to Weaveworks’ Richardson, orchestration provides an easy way to discover and maintain containers that you wouldn’t be able to do manually. “You don’t want to be looking at hundreds of machines, or even tens of machines and have to worry about what software is deployed on which one,” he said.
In addition, Chandrasekaran says granular monitoring tools that handle container-level monitoring will help developers identify bottlenecks and failures, as well as pinpoint problems. Scheduling and orchestration will allow users to scale containers and have them interoperate with other parts of the infrastructure.
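At a basic level, that container-level visibility is already built into the standard tooling; a few illustrative commands follow (kubectl top assumes a metrics add-on such as metrics-server is installed, and the service name is hypothetical):

```bash
# Per-container and per-pod resource usage, plus recent logs for debugging.
docker stats --no-stream                  # CPU/memory per container on one host
kubectl top pods                          # usage per pod across a cluster
kubectl logs deployment/webapp --tail=50  # recent output to pinpoint failures
```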
Application software installation includes activities associated with installing the app software within the containers. According to Chandrasekaran, it is important to maintain the registries that store the software and ensure developers are using the right software. “Without this governance, developers are free to use any application or application infrastructure. Among the enterprise-hosted offerings in this area are solutions from Docker and CoreOS,” he said.
Service management includes activities involving the development and operations of the service, such as container runtime and container discovery. Here, traditional operating systems and container formats are used with an operations management process, Chandrasekaran explained. According to Richardson, an operational platform complements other solutions because it provides the ability to troubleshoot and diagnose issues, and to correlate them with results.
Infrastructure software installation and management includes infrastructure provisioning, configuration and patching functions. “This includes the installation of the underlying operating system that is virtualized to make containers. After installation, the configuring and ongoing patching of the operating system must be performed,” Chandrasekaran said. Chandrasekaran believes users need a continuous configuration automation process to work with containers.
Physical and virtual server installation is provisioning the infrastructure where containers reside. According to Chandrasekaran, enterprises are deploying containers within VMs because of their ability to separate individual containers, and the mature tooling found in the VM world. Over time, however, Chandrasekaran sees more companies taking an interest in developing new container-related tools that are in line with VM management. Serverless technology is an area Microsoft’s Sanders believes is growing. According to him, it allows developers to focus on developing applications, not managing machines or worrying about virtual machines, and in turn boosts productivity.
“The world of microservices and containers is evolving rapidly. There are multiple popular offerings for container orchestration and management. We see this diversity continuing as customer needs continue to diversify,” Sanders said.
Other necessities for containerization include having proper governance and security policies in place to prevent things like malicious code from coming in. Chandrasekaran recommends trusted registries to help monitor container traffic. In addition, Rancher Labs’ Liang says businesses need to implement internal processes to prevent the operations team from looking at things they aren’t supposed to, such as customer data. “Breaches don’t just come from the outside, they come from within the organization too. You want to make sure your security and privacy concerns are solved,” he said.
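One concrete form of that governance is image signing: with Docker Content Trust enabled, a host refuses to pull unsigned images from a registry. A minimal sketch, with a hypothetical registry address:

```bash
# Enforce signed images only; unsigned pulls fail instead of running untrusted code.
export DOCKER_CONTENT_TRUST=1
docker pull registry.example.com/approved/webapp:1.0
```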
For networking and storage, businesses need a back-end infrastructure that is built for agility and allows for a more automated process. Gartner’s Chandrasekaran is seeing more people interested in cloud infrastructure because it lessens the pain that comes with hardware management, and allows users to quickly provision and scale infrastructure.
Liang believes cloud infrastructure is important because if you have a system running on a couple of servers in your own data center and you have a bad network connection, it is not going to scale. The cloud can help ensure teams store data reliably, move data from one host to another, handle load balancing, and solve networking and storage problems.
Additionally, Docker’s Messina believes teams need an overall management platform that covers the container lifecycle from developers to operations, and allows Dev and Ops to collaborate.
“Container technology is no longer playing around. It is for real,” said Weaveworks’ Richardson. “It is becoming easier and easier for application developers to use this with their favorite tool. 2017 is the year they should start doing it, if they haven’t already.”