
Docker infrastructure on VMs holds promise

Containers outperform virtual machines, but IT teams combine the virtualization technologies instead of choosing one over the other. With VM-hosted containers, disruptions decrease while capacity utilization increases.

With respect to capacity planning, containers boast two advantages over virtual machines: lower overhead and free movement across infrastructure types.

Because containers share the operating system as well as the hardware, Docker infrastructure consumes fewer resources, particularly storage and memory, per container than per VM. Dozens of VMs fit on one average physical host, while the same host could run hundreds of containers. In actual implementations, however, most IT teams run containers on VMs rather than on bare metal.
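As a rough illustration of that packing density, resource caps can be set per container at launch so capacity planners know each instance's worst-case footprint. The image name and limits below are hypothetical, not figures from the teams quoted here:

    # Hypothetical sketch: cap each container's footprint so many containers
    # can be packed onto one host with a predictable worst-case consumption.
    docker run -d --name web-01 --memory 256m --cpu-shares 512 nginx:stable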

"[Containers] need little extra resources to run over the bare application," said Gustavo Muslera, system administrator at LACNIC, the Regional Internet Registry for Latin American and Caribbean regions. But, he added, "you still need a VM to run them."

Muslera sees simplicity, portability and app-oriented operations in today's container ecosystem. He uses Docker containers for preproduction app evaluation and to test apps in a continuous integration scenario.

While containers aren't right for all applications, they appeal to him as a Linux administrator. He likes their portability from development to test, staging and production; the streamlined backup; and replication from a shared base image.

Containerization changes how Muslera approaches storage, network and compute capacity planning problems, even when he's not actually using containers for a deployment. To him, the essence of the problem is now different.

When building physical infrastructure to host virtual servers, Muslera implements Docker-like concepts, including mounted storage volumes and forwarded ports, so the architecture can accommodate future migration or scaling into containers. With this Docker-style design, he also tries to keep a fast path open to move services away from the most heavily used servers, storage and network connections.
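A minimal sketch of those two concepts as they appear on the Docker command line; the host path, port and image name are placeholders, not LACNIC's actual layout:

    # Bind-mount a host directory and forward a host port, so the service's data
    # and network entry point stay decoupled from the container itself.
    docker run -d --name app \
      -v /srv/app/data:/var/lib/app \
      -p 8080:80 \
      myorg/app:latest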

"[Docker containers] are part of us improving the infrastructure footprint that we have," said Ajay Dankar, senior director of product management at PayPal.

The online payment company's efforts to modernize include growing from three data centers in the U.S. to a global presence and rearchitecting applications to take advantage of a horizontally scalable, repeatable model. Containers alleviate the concerns around updates and patching -- concerns that only grow as infrastructure becomes more distributed, Dankar said.

Dankar's team creates platform as a service capabilities on the company's OpenStack private cloud infrastructure for PayPal's developers, who are its clients. The team deploys most applications on pools of VMs and, to avoid disruption, containerizes apps and maps them to the same or equivalent VMs. While it isn't as efficient as bare metal, more than one container deploys and runs in a single VM, and sidecar containers -- essential services for the application -- share it, increasing resource utilization.
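One way to picture that arrangement: two containers started on the same VM, the application plus a sidecar handling a supporting service such as log shipping, sharing a named volume. The image and container names below are placeholders, not PayPal's stack:

    # Hypothetical sketch: an application container and a "sidecar" container
    # co-located on one VM, sharing a named volume for log data.
    docker volume create app-logs
    docker run -d --name payments-api -v app-logs:/var/log/app example/payments-api:latest
    docker run -d --name log-shipper -v app-logs:/var/log/app:ro example/log-shipper:latest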

As PayPal moves from a container infrastructure supporting a few hundred instances of containerized apps to eventually converting 10 million lines of code in various stacks, Dankar expects the lower-overhead containers to reduce the company's spending on IT hardware. His focus for now is on operational concerns, such as how to change the patch process for this container infrastructure, and how to monitor it.

Containers improve workflow

Resource overhead isn't a concern for every container adopter.

"Capacity wasn't important. Stability was the primary driver," said Stephen Eaton, infrastructure technical lead at Dealertrack Technologies, a holding of Atlanta-based Cox Enterprises.

Encapsulating applications in containers that float over infrastructure made the workflow easier for the entire IT group. However, as he ramps up containerization -- the goal is 80% of the group's apps on Docker containers within a year -- Eaton will be closely watching network-attached storage performance. With five times as many apps using the storage resources, will there be latency with logs or scaling that necessitates changes to the underlying Docker infrastructure?

Jay Lyman, cloud management and containers research manager at 451 Research, bemoans the lack of best practices and standards in containerization. A maximum size for containers or strategies to prevent sprawl, he said, would be especially useful. These missing guidelines are among the reasons ops teams choose to put containers in VMs, he said.

Containers also change the equations for dynamic and static load balancing. While container-monitoring capabilities are not yet close to those available for virtualization, monitoring and log analysis tools such as Sysdig and Splunk are tackling the visibility issue with admin-friendly dashboards. Eaton uses Sysdig Cloud to monitor each node in every app and says it's always open on his screen.
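For a quick, node-local snapshot while those dashboards mature, the standard Docker CLI can report per-container CPU and memory use; this is a generic illustration, not how Eaton's Sysdig Cloud deployment is configured:

    # Snapshot per-container CPU and memory use on this node.
    docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"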

These operational problems are just beginning to be sorted out, Lyman said. That means container infrastructure will rely on VMs for at least the next two to three years, he predicts, until technological maturation and capacity utilization combine to pressure organizations to cut out the middle man. Administrators and DevOps engineers foresee a future where orchestration from Apache Mesos, Docker Swarm or Google Kubernetes replaces VM-based container management.

Infrastructure vendors talk containers

While the new players -- CoreOS, Docker, Google Kubernetes, Mesosphere and others -- attract a lot of interest, established IT infrastructure vendors are also responsive to containers.

"Most of the big vendors have a container strategy," Lyman said, and your networking, storage, OS and management software vendors already know your organization's infrastructure setup. "It can make change less arduous if you're sticking with some existing stuff," he said.

Muslera suggests evaluating the OS carefully, as older-generation kernel distributions don't always work well for Docker infrastructure. Applications that need special kernel modules may be incompatible with containers for security reasons.
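A simple preflight check along those lines on a Linux host; the Docker engine has generally required a relatively recent kernel (3.10 or later on most distributions), and docker info surfaces warnings about missing kernel support:

    uname -r       # confirm the running kernel is recent enough for the Docker engine
    docker info    # flags missing kernel features, such as cgroup swap limit support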

Lyman also advises against running containers on non-cloud infrastructure.

"You won't get the advantages of containers, and you might incur the penalties," Lyman said. But he also noted that this is unlikely to stall containers, as organizations are already aggressively moving workloads into private and public cloud, as well as hybrid deployments.

