
The economics of going all-in on container virtualization

Theoretically, containers can improve consolidation levels, but is there really a tangible benefit to replacing VMs with containers?

For IT staff just getting used to virtualization and hypervisor-based clouds, the concept of container virtualization is a major sea change in the way they do business. Acceptance of a new technology has to be predicated on a strong value proposition, and containers may be up to the task.

The container concept is really quite subtle and, in hindsight, pretty obvious. A fault in traditional virtualization stems from its drive to be ultra-flexible. Each virtual machine runs a complete system image, including the operating system and utilities, so perhaps 60% of its memory is consumed before the application and data are even considered. The payoff, though, is that a VM can run any OS, from Windows to Linux.

Enter containers

With hundreds of VMs running the same operating system, terabytes of memory are devoted to duplicate copies of the OS. Why not limit a server to a single OS? Then, one image of that OS can service all of the workloads, or containers, on that server. We lose the flexibility to mix and match OS versions, but frankly, in many cases, it doesn't matter.
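To make the duplicate-OS arithmetic concrete, here is a minimal back-of-the-envelope sketch. The VM count and per-OS memory footprint are illustrative assumptions, not figures from the article or any vendor:

```python
# Rough model of memory reclaimed by sharing one OS image across
# all workloads on a cluster (all figures are assumptions).
vm_count = 2000          # VMs across a cluster, each with its own OS copy
os_footprint_gb = 2      # memory consumed by one OS image, in GB

# Copies beyond the single shared image a container host would need.
duplicated_gb = (vm_count - 1) * os_footprint_gb
print(f"Memory tied up in duplicate OS copies: {duplicated_gb} GB "
      f"(~{duplicated_gb / 1024:.1f} TB)")
```

Even with these modest assumptions, the duplicated OS memory lands in the terabyte range, which is the scale the paragraph above describes.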

The economic impact of this is tremendous. First, each server can run more workloads and do so more efficiently. How much more efficient is somewhat configuration-dependent, of course, but a good rule of thumb is that the same hardware should be able to support two to three times as many containers as VMs.

Each container gets perhaps half the time allotment of the VM, but that equation has to be adjusted for more efficient memory operation, reductions in state-swapping time and other factors. The fact that containers get a smaller time slice may not matter when the task load is light.
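The trade-off described in the last two paragraphs can be sketched as a toy model. The two-to-three-times density ratio is the article's rule of thumb; the overhead fractions are illustrative assumptions:

```python
# Toy model: per-instance CPU share when a host runs containers
# instead of VMs (density midpoint from the article; overheads assumed).
host_cpu = 1.0                       # total useful CPU, normalized
vm_count = 20
container_count = int(vm_count * 2.5)  # 2x-3x rule of thumb, midpoint

vm_overhead = 0.15          # assumed loss to hypervisor state-swapping
container_overhead = 0.05   # assumed loss with a shared OS, lighter switches

per_vm = host_cpu * (1 - vm_overhead) / vm_count
per_container = host_cpu * (1 - container_overhead) / container_count

print(f"CPU share per VM:        {per_vm:.4f}")
print(f"CPU share per container: {per_container:.4f}")
# Each container gets a smaller slice, but the host delivers more total work.
```

Under these assumed numbers, each container gets roughly half a VM's time allotment, which may be acceptable when the task load is light.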

Reality sets in when we look at I/O. Let's face it, most VMs are I/O starved. Each VM gets the equivalent performance of a floppy disk. A large component of the I/O is actually OS traffic. This is especially true during boot operations.

With container virtualization, having one OS image should reduce traffic. This offsets the dilution impact of distributing the available I/O over more instances. However, in use cases that generate a lot of I/O traffic, each container will be even more starved.
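The two offsetting I/O effects can be sketched the same way. The traffic split and instance counts are assumptions for illustration only:

```python
# Toy I/O model (all figures are assumptions): a shared OS image removes
# duplicate OS traffic, but the same bandwidth is split over more instances.
host_iops = 100_000
os_traffic_fraction = 0.30     # assumed share of each VM's I/O that is OS overhead

vm_count = 20
container_count = 50           # 2.5x consolidation, as before

useful_per_vm = host_iops / vm_count * (1 - os_traffic_fraction)
useful_per_container = host_iops / container_count  # OS traffic mostly eliminated

print(f"Useful IOPS per VM:        {useful_per_vm:.0f}")
print(f"Useful IOPS per container: {useful_per_container:.0f}")
```

With these assumed numbers, the saved OS traffic only partially offsets the dilution, so each container still ends up with less useful I/O than a VM had, which is why I/O-heavy use cases feel the squeeze first.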

At this point, the rubber meets the road. We can stay with hypervisor-based VMs within a server, or we can more than double the instance count with containers -- running each instance slower, but still halving the size of our server farm. One alternative is to keep the container count the same as the VM count and make the containers larger, so they can use more memory.

This isn't ideal, since we'd really like to scale the I/O rate, as well as the memory. This might force us to up our game on networking, for instance, with two 10 Gigabit Ethernet links instead of one. This is still more economically sound than buying larger servers under the hypervisor model.

Put a configuration together based on faster networking, with some fast solid-state drives (SSDs) to improve IOPS even further, and the economics of container virtualization become irresistible. Essentially, we can double the size -- measured in delivered workloads -- of our cluster without buying any additional servers. We may need to invest more in faster networks -- though not always -- and perhaps a few SSDs, but the overall savings in capital expenses are tremendous. There are also operating expense savings to consider. Less gear means less power, less cooling and a smaller support staff.
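A back-of-the-envelope capex comparison makes the point. Every price below is an assumption chosen for illustration, not a quote:

```python
# Assumed costs of doubling delivered workloads two ways:
# buy more servers for VMs, or upgrade I/O on existing servers for containers.
server_count = 100
server_price = 8_000     # assumed price per additional server
nic_upgrade = 500        # assumed cost of a second 10 GbE link per server
ssd_upgrade = 700        # assumed cost of an SSD per server

buy_more_servers = server_count * server_price
upgrade_for_containers = server_count * (nic_upgrade + ssd_upgrade)

print(f"Double capacity with new servers:      ${buy_more_servers:,}")
print(f"Double capacity with container I/O upgrades: ${upgrade_for_containers:,}")
```

Even if the assumed prices are off by a wide margin, upgrading network links and storage on an existing farm costs a fraction of buying and powering a second farm, which is the capex argument made above.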

Are there technical showstoppers?

Containers are still evolving, which means some needed features are immature. That's a short-term problem. System support to aid multi-tenancy is also evolving and is an issue you should be comfortable with before jumping in. Longer term -- say three years -- we'll see compressed DRAM and near-line flash replacements that may well work better with the container virtualization model.

Proponents of containers also point to increased agility, faster app deployment and the ability to deploy containers without the large license fees of traditional hypervisor vendors. If these benefits pan out, they add to an already strong economic story.

Any way you look at it, containers win. The dust, uncertainty and hype are settling, and while both server and hypervisor vendors cringe a bit at the revenue implications, the economics make for a strong case.

