Software containers are indisputably on the rise. Developers looking to build more efficient applications and bring them to market quickly love the flexibility containers provide when building cloud-native applications. Enterprises also benefit from productivity gains and cost reductions, thanks to the improved resource utilization containers enable. Some criticize containers as less secure than deployments on virtual machines (VMs), but with the proper implementation, containers can deliver a more secure environment. Security on the Internet is a complex problem, but we’re developing the tools and processes needed to solve it.

Additionally, containers and VMs aren’t an either-or proposition. You can deploy containers onto VMs, or use technologies like Intel’s Clear Containers or the open-source Hyper to achieve the best of both worlds: the isolation of a VM with the flexibility of a container.

Containers and distributed systems provide a level of development flexibility and speed that outpaces traditional processes, making late adoption a handicap to competitiveness. Once you decide to migrate to container deployments, make sure you take the appropriate steps to protect your infrastructure.

Here are five best practices to secure your distributed systems:

  1. Use a lightweight Linux operating system
    Among its other benefits, a lightweight OS reduces the attack surface exposed to potential exploits. It also makes updates easier to apply: OS updates are decoupled from application dependencies, and a minimal system takes less time to reboot after an update.
  2. Keep all images up to date
    Keeping all images up to date ensures they’re patched against known exploits. The best way to achieve this is to use a centralized repository to help with versioning. Tagging each image with a version number makes updates easier to manage. Remember that containers also bundle their own dependencies, which need to be maintained.
  3. Automate security updates
    Automated updates ensure that patches reach your infrastructure quickly, minimizing the window between a patch being published and it being applied in production. Decoupled containers can be updated independently of each other, and can be migrated to another host if the host OS needs updating. This reduces the risk that infrastructure security updates will disrupt other parts of your stack.
  4. Scan container images for potential defects
    Many tools are available to help with this, such as the open-source Clair scanner. These tools compare container manifests against lists of known vulnerabilities and alert you in two cases: when a container starts with a known vulnerability already present, and when a newly disclosed vulnerability affects a container that is already running.
  5. Don’t run extraneous network-facing services in containers
    It’s considered best practice not to run Secure Shell (SSH) in containers; orchestration APIs typically provide better access controls for reaching a container. A good rule of thumb: if you don’t expect to perform routine maintenance on individual containers, don’t allow any log-in access at all. It’s also a good idea to design your containers for shorter lifespans than you would plan for VMs, so that each replacement container starts from a freshly updated, patched image.
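
The version-tagging discipline behind practices 2 and 3 can be sketched in a few lines. This is a minimal illustration, not a real registry client: the image names, tags, and the comparison of deployed tags against the latest patched releases are all assumptions made up for the example.

```python
def parse_version(tag: str) -> tuple:
    """Turn a 'v1.4.1'-style tag into a comparable tuple of integers."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def stale_images(deployed: dict, latest: dict) -> list:
    """Return names of images whose deployed tag is behind the latest patch."""
    return [name for name, tag in deployed.items()
            if parse_version(tag) < parse_version(latest.get(name, tag))]

# Hypothetical state: what is running vs. the newest patched tag per image.
deployed = {"api": "v1.4.1", "worker": "v2.0.0"}
latest = {"api": "v1.4.2", "worker": "v2.0.0"}

print(stale_images(deployed, latest))  # → ['api']
```

In a real pipeline, `latest` would come from your registry’s API and the stale list would trigger automated rebuilds and redeploys rather than a print statement.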
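
At its core, the scanning described in practice 4 is a comparison of an image’s package manifest against a vulnerability database. The sketch below uses a tiny hard-coded advisory table purely for illustration; real scanners pull full vulnerability feeds, and the manifest contents here are invented.

```python
# Toy advisory table: (package, vulnerable version) -> advisory.
# Real scanners consume complete vulnerability feeds, not a hard-coded dict.
KNOWN_VULNERABILITIES = {
    ("openssl", "1.0.1f"): "CVE-2014-0160 (Heartbleed)",
    ("bash", "4.3"): "CVE-2014-6271 (Shellshock)",
}

def scan_manifest(manifest: dict) -> list:
    """Return the advisories matching packages installed in the image."""
    return [advisory
            for (package, version), advisory in KNOWN_VULNERABILITIES.items()
            if manifest.get(package) == version]

# Hypothetical manifest extracted from a container image.
image_manifest = {"openssl": "1.0.1f", "bash": "5.1", "musl": "1.2.4"}

print(scan_manifest(image_manifest))  # → ['CVE-2014-0160 (Heartbleed)']
```

The same check run periodically against already-deployed images is what lets a scanner flag a running container when a new advisory lands.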

Container security will continue to evolve. By following the five best practices outlined in this article, I hope to help dispel the myth that containers are not secure and help enterprises take advantage of the productivity gains they provide while ensuring they are as secure as they can be today.