The rapid rise of virtualization containers in the application world cements their growing relevance to production IT managers. Although containers are similar in approach to virtual machines, a few core differences should prompt IT departments to review their security practices.
Increasingly, IT is being pushed to move away from constraining full-stack VM models. An organization runs on processes, and those processes need to be flexible. With a VM, if a process changes, the whole VM has to change.
Whereas VMs typically underpin historical monolithic application models, containers can provide better support for composite applications.
A VM contains absolutely everything required for a working application: the application itself, its database and analytics, as well as a virtualized version of the hardware, the BIOS and every service the application needs to function. Each VM is therefore a complete system, and the approach to security is essentially the same as if the VM were a physical system. The downside is that, because it is a complete stack, starting a VM means booting the BIOS, then the OS, then the rest of the software stack. Running against a virtualized set of resources is fast, but it is not instantaneous -- unless hot VM images, preprovisioned and already running, are used, which wastes resources. Virtualization containers strip many of these issues away.
A container depends far more on the outside world than on anything within the container itself. Although it still provides abstraction from the physical level, virtualization containers share most of their resources dynamically, and a container accesses the majority of its devices via the base platform. Unlike a VM, containerization is an application-level approach that presumes the BIOS and OS are already running. It creates a loose sandbox over the shared resources, on which each container then layers its function, service or application.
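Because containers virtualize at the OS level rather than the hardware level, every container sees the host's kernel. A quick sketch makes the point; the `docker run` line is commented out because it assumes a Docker host and an `alpine` image, neither of which is given in the original:

```shell
# Containers share the host kernel rather than booting their own.
uname -r   # prints the kernel version on the host

# Run from inside a container, the same command would print the same
# version (assumes Docker and an 'alpine' image are available):
# docker run --rm alpine uname -r
```

A VM, by contrast, would report whatever kernel its own guest OS booted.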
In virtualization containers, an application can be broken down into sets of containerized functions that are pulled together practically on the fly, creating a system that supports a process in a far more flexible manner. A composite application can pull together a container that is set up and optimized just to run analytics, a specific database or application logic. Containers are better at managing this service chaining than VMs, since they are designed to work against a shared platform and use shared functions and devices far more effectively than VMs.
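A composite application of this kind is often described declaratively. The following Docker Compose fragment is a minimal, illustrative sketch only -- the service names and the `example/*` image names are invented for the purpose:

```yaml
# docker-compose.yml -- illustrative only; service and image names are invented.
services:
  app:
    image: example/app-logic:latest   # container holding the application logic
    depends_on: [db, analytics]
  db:
    image: postgres:16                # a container optimized just to run the database
  analytics:
    image: example/analytics:latest   # a container set up just to run analytics
```

Each function runs as its own container against the shared platform, and the pieces can be scaled or replaced independently.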
More flexibility means more security considerations
Although this use of containers increases overall flexibility, it introduces extra management requirements -- and extra security issues. With a VM, everything is in one place, so the VM can be managed and secured as a single entity. With a composite, container-based application, there is a series of more loosely coupled functions, each of which needs its own management and security.
If the security of the base platform is compromised, so too is the security of all the containers on it. This can work in both directions. Let's assume that a container image has full super-user privileges. As the container has to talk back to the underlying platform on a continuous basis, an intruder who has accessed the container could, theoretically, break through from the containerized environment to the underlying platform and retain those privileges.
Although this sounds like a pretty bad design flaw, it is the only way that containers can work, and is how they provide the density improvements over VMs that have garnered a great deal of interest. The key for production IT's adoption of containers is to ensure that not only are containers secure within themselves, but also that the approach to creating the containerized environments is secure.
Securing virtualization containers must rank high on the priority list for any group using them. Where possible, isolate containers in their own namespaces. Give each container its own network stack, and avoid granting any container privileged access to physical ports shared with other containers. Use control groups to manage resource allocation and usage; this also enhances external security, because resource limits help in containing distributed denial-of-service attacks.
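These precautions map onto concrete Docker flags. The sketch below only assembles and prints the command rather than invoking Docker, since it assumes a Docker host; the network name `mynet`, the limits and the image name `myapp` are placeholder values:

```shell
# Assemble a hardened 'docker run' invocation (not executed here).
# --network mynet      : a dedicated network stack instead of host networking
# --pids-limit 100     : cgroup cap on processes, blunting fork bombs
# --memory / --cpus    : cgroup caps that help contain denial-of-service attacks
cmd="docker run --rm --network mynet --pids-limit 100 --memory 512m --cpus 1 myapp"
echo "$cmd"
```

On a real host, `docker network create mynet` would first create the dedicated network, and the limits would be tuned to the service's actual footprint.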
Tend to the contents of virtualization containers
Only use enhanced privileges where absolutely necessary with virtualized containers, and drop those privileges back to standard as soon as possible. Never leave privileges set as enhanced, just in case; the slight increase in internal latency required to set and reset enhanced privileges is worth it for the sake of greater overall security on the IT platform.
Try to run services as non-root. If root must be used, be as careful as you would be in the physical IT infrastructure. The container is not a sandbox -- it has holes all over it. Developers are not coding in an airlocked space; badly written code in one container can have major security effects on every other container running on the same physical platform. The vast majority of containers do not need root privileges, and most services that do require them should already be running outside the container as part of the underlying platform. Running with reduced privileges lets the container deny mount requests, deny file creation and attribute changes, prevent module loading and otherwise protect the system.
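A common way to run a service as non-root is to create an unprivileged user inside the image. This Dockerfile is an illustrative sketch -- the base image, user name and service path are arbitrary examples, not anything prescribed by the source:

```dockerfile
# Illustrative only: base image, user name and paths are arbitrary examples.
FROM alpine:3.19
# Create an unprivileged user and group for the service.
RUN addgroup -S app && adduser -S -G app app
COPY --chown=app:app ./service /srv/service
# Everything from here on runs without root privileges.
USER app
ENTRYPOINT ["/srv/service"]
```

At run time, privileges can be trimmed further still -- for example, `docker run --cap-drop ALL myimage` drops every Linux capability the service does not explicitly need.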
When it comes to security, the one major area of the underlying platform to treat differently from conventional virtualization is the container engine itself. With Docker, the Docker daemon runs with full privileges in the physical root environment in order to create the virtualized container environment, so anyone with access to the daemon has complete freedom to do whatever they want. Therefore, control access to the daemon through dedicated sysadmin controls.
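Because membership in the `docker` group is effectively root access on the host, auditing who can reach the daemon socket is a simple starting point. This sketch assumes a standard Linux Docker install; the socket check is commented out since it only applies on a Docker host:

```shell
# Anyone who can write to the Docker daemon socket controls the daemon --
# and, by extension, the host. List members of the 'docker' group
# (prints <none> if the group does not exist on this machine).
members=$(getent group docker | cut -d: -f4)
echo "docker group members: ${members:-<none>}"

# The socket should be owned root:docker with mode 660; on a Docker host,
# verify with:
# ls -l /var/run/docker.sock
```

Any account that appears in that list should be treated with the same scrutiny as an account holding root on the physical server.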
Virtualized container security also means watching how the containers use application programming interfaces (APIs). It only takes small errors in how an API call is made to allow a malicious attack to load a new container or change the contents of an existing one to access the root environment with high privileges. When running containers on enterprise systems, invest in API monitoring and management tools, such as those from Akana, Apigee or CA Technologies.