

Explore the benefits of containers on bare metal vs. on VMs

Advances in container technology and cloud deployments have reshaped the debate over whether to deploy containers on bare-metal servers or on VMs, and each approach has strong pros and cons.

You know why you should consider containers. But do you know which type of infrastructure to deploy them on? Are containers on bare-metal servers a better choice than on VMs?

The answer, of course, depends on many variables. There are pros and cons to running containers on bare-metal hosts, as well as on VMs.

Bare metal vs. VMs

The respective advantages and disadvantages of bare-metal servers vs. virtualized hosting environments are not a new debate. The question has been on CTOs' minds since virtualization became widespread in data centers in the 2000s, long before anyone had heard of Docker containers, which debuted in 2013.

The main benefits of bare-metal servers include:

  • higher performance, because no system resources are wasted on hardware emulation;
  • full use of all machine resources, as none of them sit idle during high-demand periods; and
  • easier administration, because there are fewer hosts, network connections and disks to manage in the infrastructure.

VMs, on the other hand, offer the following advantages:

  • Applications can be moved between hosts easily by transferring VM images from one server to another.
  • There is isolation among applications that run on different VMs. This structure provides some security benefits and can reduce management complexity.
  • A consistent software environment across infrastructure can be created when all apps are on the same type of VM, even if the underlying host servers are not homogenous.

But VMs also come with some drawbacks:

  • Server resources can go underutilized. For example, if you allocate storage space on a server host to create a VM disk image, that portion becomes unavailable for other purposes -- even if the VM to which the disk is attached does not use all of it.
  • VMs can't access physical hardware directly in most cases. For instance, the VM can't offload compute operations to a GPU on its host -- at least not easily -- because the VM is abstracted from an underlying host environment.
  • VMs generally don't perform as well as physical servers, due to the layer of abstraction added between the application and the hardware.

Modern virtualization platforms can help admins work around these limitations. For example, an admin can create a dynamic disk image that expands as VM usage increases to avoid locking up storage space on a host before a guest actually uses it. Pass-through features also provide VMs with direct access to physical hardware on a host. However, these hacks don't always work well. They are not supported on all types of hosts and guest OSes, and they create additional administrative burdens. If the apps you want to run require bare-metal access, then it's best to run those apps on a bare-metal server.

Or you could run your apps inside containers on bare metal to get the best of both worlds.

[Figure: Containers vs. VMs. Containers are an OS-level isolation technology, while each VM isolates its applications with a separate guest OS.]

Square the circle: Run containers on bare metal

Containers on bare-metal hosts get many of the advantages VMs offer but without the drawbacks of virtualization:

  • Give apps access to bare-metal hardware without relying on pass-through techniques, because the app processes run on the same OS as the host server.
  • Make optimal use of system resources. Although you can set limits on how much compute, storage and networking a container can use, those resources generally don't have to be dedicated to a single container, so the host can distribute shared resources as needed (see the example after this list).
  • Deliver bare-metal performance to apps, because there is no hardware emulation layer separating them from the host server.

In addition, by hosting containers on bare metal, you get the benefits that have traditionally been possible only with VMs:

  • Gain the ability to deploy apps inside portable environments that can move easily between host servers.
  • Get app isolation. Although containers arguably don't provide the same level of isolation as VMs, containers do enable admins to prevent apps from interacting with one another and to set strict limits on the privileges and resource accessibility associated with each container.

In short, run containers on bare metal to square the circle -- do what seems impossible. Reap all the benefits of bare-metal servers' performance and hardware accessibility, and take advantage of the portability and isolation features seen with VMs.

The downsides to containers on bare metal

You are probably wondering why we don't run all containers on bare metal. Consider the following drawbacks of using bare-metal servers, rather than VMs, to host a container engine:

  • Physical server upgrades are difficult. To replace a bare-metal server, you must recreate the container environment from scratch on the new server. If the container environment were part of a VM image, you could simply move the image to the new host.
  • Most clouds require VMs. There are some bare-metal cloud hosts out there, such as Rackspace's OnMetal offering and Oracle's Bare Metal Cloud Services. But bare-metal servers in cloud computing environments usually cost significantly more, if the cloud vendor offers them at all. By and large, most public cloud providers only offer VMs. If you want to use their platforms to run containers, you'll have to deploy them into VMs.
  • Container platforms don't support all hardware and software configurations. These days, you can host almost any type of OS on a VM platform such as VMware or a kernel-based VM, and you can run that virtualization platform on almost any kind of OS or server. Docker is more limited: on bare metal, it runs only on Linux, certain versions of Windows Server and IBM mainframes. For example, Docker does not currently support Windows Server 2012, so a bare-metal server that runs that OS requires a VM on top of the Windows host to run containers.
  • Containers are OS-dependent. Linux containers run on Linux hosts; Windows containers run on Windows hosts. A bare-metal Windows server requires a Linux VM environment to host Docker containers for an app compiled for Linux. However, there are technological developments in this space (see sidebar).
  • Bare-metal servers don't offer rollback features. Most virtualization platforms enable admins to take VM snapshots and roll back to that captured configuration at a later time (as shown below). Containers are ephemeral by nature, so there is nothing to roll back to. You might be able to use rollback features built into the host OS or file system, but those often provide a less seamless experience. To take advantage of simple system rollback, host containers on a VM.

Container orchestrators on bare metal

Most container orchestrators, including Kubernetes and Docker swarm mode, support bare-metal deployments just as well as they do containers in VMs or in the cloud.

The only caveat is that certain Kubernetes distributions and deployment tools, such as Kubicorn, don't support bare metal. If your DevOps team relies on a particular orchestrator or Kubernetes distribution, ensure it supports bare metal before you go this route.

Linux containers on bare-metal Windows

Historically, there was no easy, efficient way to run Linux containers on a Windows host. However, Linux Containers on Windows (LCOW) is a tool from Docker that automatically runs Linux containers on a Windows host.

Technically, LCOW requires the Windows host to run a Hyper-V hypervisor, so the Linux containers don't actually run directly on bare metal inside the Windows environment. However, the hypervisor footprint is minuscule, and the hypervisor is fully managed. Consequently, the performance and management drawbacks associated with hosting containers inside a VM largely don't exist with LCOW, and the technology removes another historical downside of running containers on bare metal.
