As companies progress on their digital adoption journeys, they continue to invest in the next wave of modern application development and deployment platforms, with containers by far the most high-profile of these technologies. Containers have become instrumental in driving digital transformation within the enterprise, as they offer the kind of flexibility and portability needed to maintain an edge in today’s fast-moving competitive environment.

In fact, recent joint research from Red Hat and Bain & Company uncovered that enterprises using containers are beginning to realize material architectural benefits. According to the report, initial container adopters could realize:

  • A 15% to 30% reduction in development times, and additional infrastructure flexibility gains driven by the portability benefits of containers.
  • Cost savings of 5% to 15% from gains in hardware productivity.

Red Hat and Bain expect container adoption to grow across all app life-cycle phases, especially the production phase. But as IT and business pros continue to evaluate container technology, many are feeling overwhelmed by the amount of often-confusing information out there.

In the end, it’s actually simple: Containers are just fancy files and fancy processes. So, how exactly do they work?

First, let’s go back to something that all of us understand from using our computers and smartphones: the program. But what is a program, anyway? Well, that depends on what it’s doing. When a program is first installed on your computer or smartphone, it’s really just a file. When you start the program, it’s loaded into the memory of the device and the operating system allocates CPU to execute it. The operating system also serves as a traffic cop, deciding whether a program can access a file or connect to the network. The operating system, and more specifically the operating system kernel, is key to running programs.
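That file-to-process transition can be seen with a few lines of Python. This is a general illustration, not anything container-specific: the interpreter binary sits on disk as an ordinary file until we ask the operating system to load and run it as a new process.

```python
import subprocess
import sys

# A program at rest is just a file on disk; here we point at the
# Python interpreter binary itself.
program = sys.executable

# Asking the OS to run it: the kernel loads the file into memory,
# schedules it on a CPU, and mediates its access to files and the network.
result = subprocess.run(
    [program, "-c", "print('hello from a new process')"],
    capture_output=True,
    text=True,
)

print(result.stdout.strip())  # what the child process printed
print(result.returncode)      # 0 means the kernel reported a clean exit
```

The same file can be started many times, producing many independent processes, which is exactly the property containers build on.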

The same is true with containers: They can be started, stopped and moved around just like normal Linux programs, but much more quickly and easily.

Like normal programs, containers really have two different states: running and not running. When a container isn’t running, it’s really just a set of files grouped together in a bundle called a container image. This container image is really just a “fancy file” that has other files in it. When a container is started, the container runtime unpacks the files in the container image and hands them to the operating system, which runs the container as a process connected to a copy of those files. The operating system (more specifically, the kernel) also limits how much CPU and memory the container can use. So, containers are just fancy files and fancy processes handled by the operating system in a slightly different way than regular programs.
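As a sketch of the “fancy file” idea (the file names here are made up, and this is not a real image format), the snippet below bundles a couple of files into a single archive and then unpacks them again. That is essentially what a container runtime does with an image’s layers, which really are tar archives:

```python
import io
import os
import tarfile
import tempfile

# Build a tiny "image": one archive holding other files, much like a
# container image layer does.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in [("app/run.sh", b"echo hi\n"),
                       ("app/config.txt", b"port=8080\n")]:
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# At rest, the "image" is just bytes: a fancy file that has other files in it.
buf.seek(0)

# "Starting" it: the runtime unpacks the bundled files so the OS can run them.
with tempfile.TemporaryDirectory() as root, tarfile.open(fileobj=buf) as tar:
    tar.extractall(root)
    unpacked = sorted(
        os.path.relpath(os.path.join(dirpath, f), root)
        for dirpath, _, files in os.walk(root)
        for f in files
    )
    print(unpacked)  # the same files, now laid out on disk
```

Real runtimes add content-addressed layers, metadata and kernel resource limits on top, but the bundle-then-unpack core is the same.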

This leads to the final piece of technology we’ll discuss: the registry server. The registry server is really just a fancy file server that knows how to store these container images so that users can share and collaborate when building them.

Now, let’s talk about the format of these fancy files, because it’s important to your technology adoption. The Docker project image format has become very popular—so popular that the industry has created a standard called the Open Container Initiative (OCI). So, when we talk about these fancy files, we are really talking about standard OCI container images.
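For a sense of what that standard format looks like under the hood, here is a minimal sketch of an OCI image manifest, the small JSON document that ties an image together. The blob contents below are stand-ins for illustration; real manifests describe actual config and layer blobs, and may carry annotations as well:

```python
import hashlib
import json

def descriptor(media_type: str, blob: bytes) -> dict:
    # OCI content addressing: every blob is referenced by its
    # sha256 digest and size, so images are verifiable and cacheable.
    return {
        "mediaType": media_type,
        "digest": "sha256:" + hashlib.sha256(blob).hexdigest(),
        "size": len(blob),
    }

# Stand-in blob contents for illustration only.
config_blob = b'{"architecture": "amd64", "os": "linux"}'
layer_blob = b"pretend this is a gzipped tar of files"

manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": descriptor("application/vnd.oci.image.config.v1+json",
                         config_blob),
    "layers": [descriptor("application/vnd.oci.image.layer.v1.tar+gzip",
                          layer_blob)],
}

print(json.dumps(manifest, indent=2))
```

Because any OCI-compatible tool can read this manifest, fetch the blobs it names, and unpack the layers, the same image works across different runtimes and registry servers.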

Having a standard image format guarantees portability between registry servers. This allows end users to focus on building and sharing their work rather than worrying about compatibility problems. The OCI standard is becoming very popular because it provides an easy, standards-based way for users and vendors alike to build and share container images. OCI-compatible registry servers can be deployed on premises, in development environments and in the cloud, which makes it really easy to move applications between environments.

So, if it is really this simple, why all the confusion?

Well, it’s partially because of how fast the industry is moving with containers. There is so much great work happening in the open-source community, which makes it hard to keep up. It’s also because many companies are exploiting the rising interest in containers by applying the term too widely, and therefore creating confusion.

While these fancy files and fancy processes are a simplified way of looking at containers, it’s a view that helps business and IT managers understand how containers are (and aren’t) similar to technologies already in use.

At the end of the day, containerization allows you to focus on your application. Containers give you a simplified way of packaging everything your application needs in a standardized container image, including the language runtime and all of the dependent libraries. Beyond this, a running container is an operating system process that runs in a specific environment with some resource management around it. Containerization is purely an operating system technology built on files and processes; it isn’t recreating the operating system, nor is it replacing it. Containers instead extend the operating system.
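As an illustration of that packaging, a minimal Containerfile (assuming a hypothetical Python app; the file names and base image tag are examples, not a prescription) lists exactly what ends up in the image, from runtime to libraries to application code:

```dockerfile
# Base image supplies the OS userspace files and the language runtime.
FROM python:3.12-slim

# Dependent libraries are installed into the image, not onto the host.
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

# The application itself is just more files in the bundle.
COPY app.py /app/app.py

# When the container starts, the kernel runs this as an ordinary process.
CMD ["python", "/app/app.py"]
```

Everything the process needs travels inside the image, which is why the result is portable across any host with an OCI-compatible runtime.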


So, how does this actually benefit my organization?

Having an ecosystem of standards-based container images, registry servers, and container hosts to run these fancy processes is completely changing the way customers build and deploy applications. Whether those applications are homegrown or off-the-shelf, the end user gains a lot of flexibility and efficiency. From architects researching potentially useful software for their organizations to developers trying to get code into production, this container infrastructure speeds up overall productivity and lowers frustration. This in turn leads to faster deployment and more consistent delivery, both of which are key components of having more satisfied customers.