
The OS reigns over container hosting decisions

Before you get too excited about the benefits of containers, get a handle on limitations: a lack of interoperability across OS platforms, networking constraints and security woes.

Hype gets ahead of reality, and containerization hasn't been immune to hyperbole.

Developers and DevOps teams must consider issues related to the image and the server runtime, because containers don't encapsulate everything and run just anywhere. Container hosting requires a definitive OS choice: container technology maps to specific surface areas of the OS kernel, so the two are intrinsically tied, and moving a container from one OS to another requires extra legwork. For example, both Windows and Red Hat Enterprise Linux (RHEL) support Docker-format containers, but a Windows container doesn't automatically run on RHEL. Similarly, a RHEL container might technically work on a different Linux host, but it is likely to hit problems with missing system calls and other unmet expectations of the OS.
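The OS dependency starts with the image's base layer. As an illustrative sketch -- image names and paths here are hypothetical -- the FROM line in each of these Dockerfiles binds the image to a kernel family, so the first runs only where a Linux kernel is available and the second only on a Windows Server 2016 or Windows 10 host:

    # Hypothetical Linux service -- requires a Linux kernel on the host
    FROM registry.access.redhat.com/rhel7
    COPY myservice /usr/local/bin/myservice
    CMD ["/usr/local/bin/myservice"]

    # Hypothetical Windows service -- requires a Windows kernel on the host
    FROM microsoft/windowsservercore
    COPY myservice.exe C:/myservice.exe
    CMD ["C:/myservice.exe"]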

Cross-platform constraints

Docker was developed on Linux as an extension of the native LXC container technology. Docker defines a standard format for bundling an application and its dependencies, such as libraries and path names, into a single executable image, and the Docker standard guarantees that the execution environment exposed to an application is identical across Docker container runtime engines. LXC establishes an isolated application sandbox but, unlike Docker, does not abstract system-specific details, such as network and storage configurations, or paths, such as library or log file locations. A Docker image can run unchanged on any compatible runtime engine, which was sufficient when Docker was only available on Linux systems.

Docker Engine has since been ported to Windows and Apple OSes, which highlights these OS-specific container hosting constraints. A Docker client running natively on Windows or macOS connects to a local Docker daemon that runs inside a Linux VM on a hosted Type 2 hypervisor, such as VirtualBox. The VM provides the features necessary to host Linux-based containers on a foreign OS. The containerized application therefore still runs on a Linux kernel and cannot access libraries or other features native to the host OS, such as Microsoft's .NET Framework. Applications that require a Windows kernel can be containerized on a Docker-compliant system using Windows containers, currently available on Windows Server 2016 or Windows 10. Those containers, in turn, can't run on a Linux or macOS system without the help of a hosted Windows VM.
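One quick way to see which kernel family the local daemon actually targets, regardless of the OS the client runs on, is to query the daemon's OS type:

    # Ask the Docker daemon which kernel family it serves
    docker info --format '{{.OSType}}'
    # Prints "linux" for Docker on macOS or for Windows in Linux-container
    # mode, and "windows" when the daemon is switched to Windows containers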

From one Linux box to another

Docker is based on features of the Linux kernel, not the overall package of any particular distribution. A core tenet of Linux kernel development is backward compatibility for new releases. As long as a containerized application is compatible with the minimum requirements of Docker Engine -- namely kernel 3.10 and iptables 1.4 or higher -- it's reasonable to choose any current Linux distribution as the container host.
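A quick sanity check on a prospective Linux host confirms those minimums:

    # Kernel version must be 3.10 or higher
    uname -r
    # iptables must be 1.4 or higher
    iptables --version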

Containers are immutable, with distro specifics, such as library versions, configurations and directories, bundled and isolated within the image. A container built and run on a development system behaves the same when hosted on an entirely different production system, even when that host is a cloud service, such as the Amazon Web Services (AWS) EC2 Container Service (ECS).
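A minimal sketch of that portability, using placeholder image and registry names:

    # Build the image on a development machine
    docker build -t myapp:1.0 .
    # Tag and push it to a registry reachable from production
    docker tag myapp:1.0 registry.example.com/myapp:1.0
    docker push registry.example.com/myapp:1.0
    # On any compatible host -- including an ECS container instance --
    # pull and run the identical image
    docker pull registry.example.com/myapp:1.0
    docker run -d registry.example.com/myapp:1.0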

Container isolation and the threats to it

Containers are highly isolated from one another; processes running in one container can't see, affect or interact with processes running in another on the same host. Containers do, however, share the host's kernel and platform resources -- the shared surface area -- so a host-level attack, such as a network distributed denial-of-service attack, puts every container on that host at risk.

Each container has its own network stack, so one container cannot access the network sockets, ports or interfaces of another. While this level of encapsulation is critical for security in multi-tenant IT operations, it complicates matters for modular applications built on a microservices architecture, which are structured as a set of services running as individual processes in separate containers and communicating over REST APIs.

Each container receives a unique, dynamic IP address by default, so simply sharing a container host does not automatically connect the workloads running on it. The Docker runtime engine provides a default virtual bridge network that forwards packets among every container attached to it; containers that share a host can bind to this bridge to communicate easily over the network.

The default bridge network is an all-or-nothing communication path. Developers can instead define their own networks to control access and allow only certain containers to communicate over certain ports. User-defined networks also provide persistent IP addresses and access to Docker's embedded domain name system (DNS) server, which resolves container names to facilitate intercontainer communication.
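A brief sketch of a user-defined network, with illustrative container and image names:

    # Create a user-defined bridge network
    docker network create app-net
    # Attach only the containers that should communicate
    docker run -d --name api --network app-net myorg/api:1.0
    docker run -d --name web --network app-net -p 80:80 myorg/web:1.0
    # Docker's embedded DNS resolves names on app-net, so the web
    # container can reach the API by name, with no hardcoded IP address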

For container hosting on internal systems, such as a Kubernetes cluster dedicated to a known set of containerized workloads, intercontainer communication can use the default bridge.

Operations professionals targeting containers at public cloud infrastructure, such as AWS, Azure or Google Cloud Platform, should investigate the specific networking features each one supports. Documentation is often unclear about which networking modes are supported for production container workloads. For example, AWS ECS supports the default bridge, host mode for direct mapping to the host network, and isolation with no networking; it doesn't mention user-defined networks, so assume they aren't supported. Cloud services have other ways to control container exposure via their particular identity and access management services, so perform due diligence before deploying applications built from multiple container-based microservices.
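On ECS, the networking mode is set per task definition. A minimal, hypothetical fragment -- the family, image and port values are placeholders -- selects bridge mode and lets ECS assign a dynamic host port:

    {
      "family": "myapp",
      "networkMode": "bridge",
      "containerDefinitions": [
        {
          "name": "api",
          "image": "registry.example.com/myapp:1.0",
          "memory": 256,
          "portMappings": [
            { "containerPort": 8080, "hostPort": 0 }
          ]
        }
      ]
    }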

Containers simplify and streamline application deployment across different infrastructure -- provided that users understand and work around these limitations and follow some simple rules. Linux developers should build container images on a modern distribution running kernel 3.10 or higher. Windows developers must use Windows Server 2016 or Windows 10 as the development and deployment platform -- or a compliant container service, such as Azure -- to host Windows containers. It's possible to host Windows containers in a VM on a Linux machine, but this introduces performance-robbing overhead and isn't advised.

