We don’t often see technologies arise that are clearly the future of software development. Git was one such piece of software. Hadoop was another. But this year, we’re betting on Docker. It just makes sense from every angle. Virtual machines require constant care and maintenance, and they can hog system resources in unforeseen ways when applications become disk-intensive or require frequent reboots.
Those problems vanish with Docker simply because it leverages capabilities that already exist in the Linux kernel: namespaces and control groups (cgroups). These are the types of capabilities that once resided only in Solaris Zones or FreeBSD jails, but which have finally come home to roost in the Linux kernel.
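To see just how thin that layer is, here’s a minimal sketch in Go (Docker’s implementation language, though this is not Docker’s actual code): it asks the kernel for fresh UTS, PID, and mount namespaces and drops you into a shell inside them. It’s Linux-only, needs root, and is purely for illustration.

```go
// A minimal sketch, assuming Linux and root privileges: spawn a shell
// inside new namespaces using the same kernel primitives Docker builds on.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	// Ask the kernel for fresh namespaces; no hypervisor involved.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | // private hostname
			syscall.CLONE_NEWPID | // private PID tree; the shell sees itself as PID 1
			syscall.CLONE_NEWNS, // private mount table
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Run it, and a `hostname foo` inside that shell never touches the host. The isolation comes from the kernel itself, not from a hypervisor.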
This all means that Linux containers are now safe for daily use in production environments. And with that comes the possibility of packing thousands of small apps onto a single server, under a single kernel, with no VMware or Citrix licenses needed. In fact, it’s only Red Hat that really profits from the move to Linux containers. Well, Red Hat and Docker.
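The other half of that density story is resource accounting. As a hedged sketch (assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the memory controller enabled, run as root; the group name demo-app is invented for illustration), capping one app’s memory so that thousands can safely share a box is just a matter of writing a few files:

```go
// A hedged sketch, assuming cgroup v2 at /sys/fs/cgroup and root:
// cap the current process (and its children) at 64 MiB of RAM.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	cg := "/sys/fs/cgroup/demo-app" // hypothetical group name
	if err := os.MkdirAll(cg, 0755); err != nil {
		panic(err)
	}
	// The kernel enforces this limit for every PID in the group.
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("67108864"), 0644); err != nil {
		panic(err)
	}
	// Enroll ourselves (and any future children) in the group.
	me := []byte(fmt.Sprintf("%d\n", os.Getpid()))
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), me, 0644); err != nil {
		panic(err)
	}
	fmt.Println("capped at 64 MiB; fork away")
}
```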
And developers! The deployment path for Linux containers is simpler and easier to maintain at scale (in theory) than the current model of deploying an entire operating system tied to each application. Docker does away with the day-to-day drudgery of keeping virtual machine OS images up to date, and good riddance.
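To make the contrast concrete, here’s what that deployment path can look like for a hypothetical app (myapp is an invented binary; this isn’t anyone’s real build). The entire recipe fits in a few lines, and when the upstream base image ships a security fix, you rebuild and redeploy instead of patching a fleet of VM images:

```dockerfile
# Hypothetical example: ship one small app and its runtime, nothing else.
FROM debian:stable-slim
# The app binary is assumed to be built beforehand.
COPY myapp /usr/local/bin/myapp
EXPOSE 8080
CMD ["/usr/local/bin/myapp"]
```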
In a time when applications must be reworked and redeployed almost daily, it’s refreshing to see light at the end of this deployment-nightmare tunnel, and it doesn’t require everyone to write to some crazy framework or to push everything into some form of PaaS. Instead, it requires developers to build on Linux and to uncouple their applications from one another.