Virtualization containers force server architecture to grow up

Much as hypervisor virtualization once did, container-based virtualization requires updates to server architectures to deliver strong security alongside high performance.

Recent architectural and cloud computing advances are helping put virtualization containers into production data centers.

The containers approach to virtualization evokes déjà vu for the data center industry, raising the same cross-tenant snooping vulnerabilities that hypervisors first did. When hypervisors exposed enterprise information to unauthorized or accidental access, Intel added hardware features to fence tenant memory and prevent data leakage. When solid-state drive (SSD) instance overprovisioning came around, the industry again discovered and overcame data vulnerabilities.

Now, along comes the rapidly evolving containers approach. Containers share a single OS copy, so those hardware memory controls can't fence multiple containers from one another.

For data security, data centers could run containers within a VM under a hypervisor, but this potentially slows down container creation, eliminating one of virtualization containers' main selling points. IT architects can build hundreds of containers inside a single VM, thereby fencing them off from another user's hundreds of containers in another VM, but the result is cumbersome to operate, as the sketch below suggests. IT teams need fencing mechanisms for containers equivalent to those that evolved for VMs.
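
To make the scheme concrete, here is a minimal sketch of the per-tenant layout, with hypothetical names throughout; a real deployment would drive a hypervisor API such as libvirt and a container engine rather than Python dictionaries.

    from collections import defaultdict

    # Conceptual sketch: fence each tenant's containers inside that tenant's VM.
    class TenantFencing:
        def __init__(self):
            self.vm_for_tenant = {}               # tenant -> VM id
            self.containers = defaultdict(list)   # VM id -> container ids

        def launch(self, tenant, container_id):
            # The VM is the hardware-enforced boundary: containers are fenced
            # from other tenants' containers, but not from their neighbors.
            vm = self.vm_for_tenant.setdefault(tenant, "vm-" + tenant)
            self.containers[vm].append(container_id)
            return vm

    fence = TenantFencing()
    fence.launch("acme", "web-1")    # lands in vm-acme
    fence.launch("acme", "web-2")    # same VM, same tenant
    fence.launch("globex", "db-1")   # separate VM fences this tenant off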

Intel has architectural developments that target virtualization containers. Changing hardware to add DMA remapping boundary controls, which limit what system memory each device can access, is a long-term task, so in the interim Intel introduced Clear Containers, which pairs a lightweight hypervisor, such as kvmtool, with the containers approach and incorporates extensive memory sharing between containers.
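
This is not Intel's Clear Containers toolchain itself, only a rough sketch of the underlying idea using kvmtool's lkvm command; the kernel path, root filesystem path, sizing and name below are placeholders.

    import subprocess

    # Rough sketch: boot one minimal kvmtool VM per container workload.
    # A stripped-down guest kernel and a small memory allocation are what
    # keep the per-container overhead down in the tens of megabytes.
    def launch_clear_style_container(name, kernel="./bzImage", rootfs="./rootfs.img"):
        return subprocess.Popen([
            "lkvm", "run",
            "-k", kernel,     # minimal guest kernel
            "-d", rootfs,     # container root filesystem as the guest disk
            "-m", "128",      # guest memory in MB, deliberately small
            "-c", "1",        # one virtual CPU per container
            "--name", name,
        ])

    proc = launch_clear_style_container("clear-demo-1")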

The extra overhead from adding a lightweight hypervisor is around 20 MB, which is mostly inconsequential in production. More importantly, the startup time for a new container is around 150 milliseconds. That is a little slower than a bare Docker container, but fast enough for most uses.

Intel estimates that an administrator could run 3,500 hypervisor-lite containers on a server with 128 GB of DRAM. These would be a bit I/O-starved, but the number is surprisingly high, and it highlights the value of the containers approach.
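
A quick back-of-envelope calculation, using only the figures above, shows why the estimate is plausible but tight:

    # Density check using the numbers in the text: 128 GB of DRAM, 3,500
    # containers, roughly 20 MB of hypervisor-lite overhead per container.
    dram_mb = 128 * 1024
    containers = 3500
    overhead_mb = 20

    per_container_mb = dram_mb / containers
    print(round(per_container_mb))                 # ~37 MB per container
    print(round(per_container_mb - overhead_mb))   # ~17 MB left for the workload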

Intel also developed features targeting increased security with virtualization containers. Kernel Guard Technology places a small monitor between the hardware and the kernel to protect the kernel and registers from modification. Cloud Integrity Technology creates a hash of key modules and invokes the Trusted Platform Module (TPM) to sign it, so the modules can be verified as valid. Software Guard Extensions allows a secure enclave to be defined in memory, which helps keep encryption keys safe. Intel plans further evolutions to improve containers' performance, including incorporating the QEMU machine emulator and optimizing memory usage.
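
Conceptually, the measure-and-verify step behind Cloud Integrity Technology looks like the sketch below, which hashes a module and checks it against a known-good value; the real technology has the TPM sign the measurement rather than comparing it locally, and the known-good hash here is a placeholder.

    import hashlib

    # Conceptual sketch of measured launch: hash a module before loading it
    # and verify the hash against a known-good value. Intel's Cloud Integrity
    # Technology goes further and has the TPM sign the measurement.
    KNOWN_GOOD = {"kernel_module.ko": "expected-sha256-hex..."}  # placeholder

    def measure(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def verify(path):
        expected = KNOWN_GOOD.get(path)
        return expected is not None and measure(path) == expected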

What is still missing is file access fencing to protect against unintended data sharing. This is as much a hypervisor and operating system issue as it is a containers issue. Intel seeks to address it with non-volatile memory express (NVMe), a queue-based storage interface that avoids the multilayer SCSI stack. With support for up to 64,000 queues, NVMe allows I/O to tie back directly to individual VMs or containers, making it a possible way to mechanize storage I/O fencing.
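
As a sketch of how that fencing might be mechanized, the hypothetical allocator below dedicates NVMe queue pairs to containers so that every I/O can be attributed to its owner; real enforcement would live in the NVMe driver and the device, not in application code.

    # Hypothetical sketch: dedicate NVMe I/O queue pairs to containers so
    # storage traffic can be attributed, and therefore fenced, per container.
    MAX_QUEUES = 64000  # NVMe supports up to 64,000 queues

    class QueueFence:
        def __init__(self):
            self.next_q = 1    # queue 0 is the admin queue
            self.owner = {}    # queue id -> container id

        def attach(self, container_id):
            if self.next_q >= MAX_QUEUES:
                raise RuntimeError("out of queue pairs")
            q, self.next_q = self.next_q, self.next_q + 1
            self.owner[q] = container_id  # I/O on q ties back to this container
            return q

    fence = QueueFence()
    fence.attach("web-1")   # queue 1 belongs to web-1
    fence.attach("db-1")    # queue 2 belongs to db-1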

CoreOS announced support for Intel Clear Containers in its Rocket (rkt) container runtime. Clear Containers could open up the deployment of containers -- if I were a hypervisor vendor, I would be very concerned at the pace at which containers are converging on my feature set. One might make a case for coexistence, but Clear Containers moves the boundary much closer to containers territory.

Data center implications of virtualization containers

The implications of small-sized, credibly secure containers are that we will need fewer servers to achieve the same workload. Since containers deploy easily on existing hardware, there's little need to purchase new equipment for perhaps two years. Data center budgets will be spent elsewhere as a result.

Boosting I/O performance to match the number of containers is a bit of a challenge, but local SSDs can provide instance storage to the containers at terrific IOPS levels. The extra performance means the containers won't starve for I/O, while tiering allows a smaller amount of primary SSD storage backed by cheaper secondary SATA storage.
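
A simple placement policy captures the tiering idea; the threshold and access rates below are illustrative only.

    # Illustrative two-tier placement: hot data stays on the small SSD tier,
    # cold data is demoted to cheaper SATA capacity.
    HOT_ACCESSES_PER_DAY = 10

    def choose_tier(accesses_per_day):
        return "ssd" if accesses_per_day >= HOT_ACCESSES_PER_DAY else "sata"

    for volume, rate in [("db-index", 500), ("log-archive", 1)]:
        print(volume, "->", choose_tier(rate))  # db-index -> ssd, log-archive -> sata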

Enhancements to the virtualization container approach should be viewed in the context of Intel's 3D XPoint near-in memory. Adding it to a server would boost overall performance drastically, since one version of 3D XPoint runs as a memory extender on the DIMM buses.

Good I/O performance will boost container acceptance, but all this leaves some questions about network performance. The single shared operating system allows quite a bit of traffic reduction compared with the multiple OSes of the hypervisor approach, but a dense container solution will still increase network load.

Data centers have 10 Gigabit Ethernet available today, with 25 GbE coming in 2016. The faster bandwidth gives some relief from network congestion, though at added cost. One debate we will have is the use of RDMA over Ethernet to reduce server overhead; the savings are large. Chelsio's iWARP appears to be the commercial winner for Ethernet RDMA. This will allow more containers to run, and networks will be more efficient.

Taken together, the density increases and performance gains in servers, storage and networks will reduce the IT spend and allow data centers to remain at more or less the same footprint for a while.

Next Steps

SDN systems designed for containers and developers

Multiple container technologies debut in Windows Server 2016

Should you adopt containers?

This was first published in January 2016
