Definition

container (containerization or container-based virtualization)

Contributor(s): Alex Gillis, Jim O'Reilly, Sander van Vugt and Ryan Lanigan

Containers are packages that rely on virtual isolation to deploy and run applications that access a shared operating system (OS) kernel without the need for virtual machines (VMs).

Container technology has roots in partitioning, dating back to the 1960s, and in chroot process isolation, added to Unix in 1979. Its modern form is expressed in application containerization, such as Docker, and system containerization, such as LXC (Linux Containers). Both of these container styles enable an IT team to abstract application code from the underlying infrastructure, simplifying version management and enabling portability across various deployment environments.

Container images hold the information that a container engine executes at runtime on the OS. Containerized applications can be composed of several container images. For example, a 3-tier application can be composed of front-end web server, application server and database containers, which each execute independently. Containers are inherently stateless and do not retain session information, although they can be used for stateful applications. Multiple instances of a container image can run simultaneously, and new instances can replace failed ones without disruption to the application's operation. Developers use containers during development and test, and IT operations teams increasingly deploy them in live production environments, where they can run on bare-metal servers, on VMs and in the cloud.
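To make the 3-tier example concrete, here is a minimal sketch using the Docker SDK for Python that runs the three tiers as independent containers on a shared network. The image names, container names, network name and environment values are illustrative placeholders, not part of the article's example; a local Docker engine is assumed.

```python
import docker

client = docker.from_env()                      # connect to the local Docker engine
client.networks.create("demo-net", driver="bridge")

# Database tier: official Postgres image, configured through environment variables.
client.containers.run(
    "postgres:15", detach=True, name="demo-db", network="demo-net",
    environment={"POSTGRES_PASSWORD": "example"},
)

# Application tier: a hypothetical app image that reaches the database by name.
client.containers.run(
    "example/app-api:1.0", detach=True, name="demo-app", network="demo-net",
    environment={"DB_HOST": "demo-db"},
)

# Front-end tier: a web server published on the host.
client.containers.run(
    "nginx:alpine", detach=True, name="demo-web", network="demo-net",
    ports={"80/tcp": 8080},
)

print([c.name for c in client.containers.list()])   # each tier runs independently
```

Each tier can now be restarted, replaced or scaled on its own, which is the independence the paragraph above describes.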

How containers work

Containers hold the components necessary to run desired software. These components include files, environment variables, dependencies and libraries. The host OS constrains the container's access to physical resources, such as CPU, storage and memory, so a single container cannot consume all of a host's physical resources.
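As an illustration of the host constraining a container's resources, here is a minimal sketch using the Docker SDK for Python. The memory and CPU limits are arbitrary values chosen for the example, and exact parameter support depends on your Docker Engine and SDK versions.

```python
import docker

client = docker.from_env()

# Run a short-lived container with a memory ceiling and a CPU cap; the host
# kernel enforces these limits (via cgroups), so this container cannot consume
# all of the host's physical resources.
c = client.containers.run(
    "python:3.12-slim",
    command=["python", "-c", "print('hello from an isolated process')"],
    mem_limit="256m",          # cap memory at 256 MB
    nano_cpus=500_000_000,     # cap CPU at half a core (1e9 nano-CPUs = 1 CPU)
    detach=True,
)

print(c.wait())                # block until the container exits
print(c.logs().decode())       # read its output
c.remove()
```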

Container image files are complete, static and executable versions of an application or service, and they differ from one technology to another. Docker images are made up of multiple layers, starting with a base image that includes all of the dependencies needed to execute code in a container. Each container adds its own thin readable/writable layer on top of the static, unchanging image layers; because a container's changes are confined to that container layer, the underlying image layers can be saved and reused across many containers.

An Open Container Initiative (OCI) image is made up of a manifest, file system layers and a configuration. The OCI defines two specifications: a runtime specification and an image specification. The runtime specification describes how to run a filesystem bundle, the set of files that contains everything needed to execute the container. The image specification defines the information needed to launch the application or service in an OCI container.
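A rough way to see this layering in practice, assuming a local Docker engine and the Docker SDK for Python, is to inspect an image's read-only layers and build history. The writable layer only appears once a container is created from the image, so it is not part of what is shown here.

```python
import docker

client = docker.from_env()
img = client.images.pull("nginx:alpine")

# Digests of the read-only filesystem layers that make up the image.
for digest in img.attrs["RootFS"]["Layers"]:
    print(digest)

# Build history: each entry roughly corresponds to an instruction that produced a layer.
for entry in img.history():
    print(entry.get("Size", 0), (entry.get("CreatedBy") or "")[:60])
```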

The container engine executes images, and many organizations use a container scheduler and/or orchestration technology to manage deployments. Containers have high portability, because each image includes the dependencies needed to execute the code in a container. For example, container users can execute the same image on an Amazon Web Services (AWS) cloud instance during test, then on an on-premises Dell server for production, without changing the application code in the container.
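The sketch below illustrates that portability with the Docker SDK for Python: an image is tagged and pushed to a registry so that any host running a container engine, whether a cloud instance or an on-premises server, can pull and run it unchanged. The registry and repository names are placeholders.

```python
import docker

client = docker.from_env()
img = client.images.pull("nginx:alpine")

# Tag and push to a (placeholder) registry; client.login(...) may be needed
# first if the registry requires authentication.
img.tag("registry.example.com/myteam/web", tag="1.0")
for line in client.images.push("registry.example.com/myteam/web", tag="1.0",
                               stream=True, decode=True):
    print(line)

# On any other host with a container engine, the exact same run call works,
# because the image carries its own dependencies:
# docker.from_env().containers.run("registry.example.com/myteam/web:1.0", detach=True)
```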

Containers vs. VMs

Containers are different from server virtualization in that a virtualized architecture emulates a hardware system. Each VM can run an OS in an independent environment and, via abstraction, present the application with a substitute for a physical machine. The hypervisor emulates hardware from pooled CPU, memory, storage and network resources, which can be shared numerous times by multiple VM instances.

VMs take up more space because each needs a guest OS to run; containers consume less space because they share the host's OS.

VMs can require substantial resource overhead, such as memory, disk and network input/output (I/O), because each VM runs its own OS, which means VMs can be large and take longer to create than containers. Because containers share the OS kernel, a single OS instance can run many isolated containers. The OS supporting containers can also be smaller, with fewer features, than an OS for a VM or a physical application installation.

Application containers and system containers

Application containers, such as Docker, encapsulate the files, dependencies and libraries of an application to run on an OS. Application containers enable the user to create and run a separate container for multiple independent applications, or for the multiple services that constitute a single application. For example, an application container is well-suited to a microservices application, where each service that makes up the application runs independently of the others.
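Building on the earlier three-tier sketch, the following hypothetical example shows that independence in practice with the Docker SDK for Python: one service's container is stopped and replaced without touching the containers that run the other services. The container and image names are placeholders carried over from the earlier sketch.

```python
import docker

client = docker.from_env()

# Replace only the application-tier container; the web and database containers
# created earlier ("demo-web", "demo-db") keep running untouched.
old = client.containers.get("demo-app")
old.stop()
old.remove()

client.containers.run(
    "example/app-api:1.1",        # a new version of just this one service
    detach=True, name="demo-app", network="demo-net",
    environment={"DB_HOST": "demo-db"},
)
```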

System containers, such as LXC, are technologically similar to both application containers and VMs. A system container can run an OS, much as an OS runs encapsulated in a VM. However, system containers don't emulate the system's hardware. Instead, they operate similarly to application containers, and a user can install different libraries, languages and system databases in them.

Benefits of containers

Because containers share the same OS kernel as the host, containers can be more efficient than VMs, which require separate OS instances.

Containers have better portability than other application hosting technologies: They can move among any systems that share the host OS type without requiring code changes. Because the container encapsulates the application's code and its dependencies, there are no guest OS environment variables or library dependencies outside the container to manage.

Proponents of containerization point to gains in efficiency for memory, CPU and storage as key benefits of this approach compared with traditional virtualization. Because containers do not have the overhead required by VMs, such as separate OS instances, it is possible to support many more containers on the same infrastructure. Container industry experts state that an average physical host could support dozens of VMs or hundreds of containers, but in actual operations, the host, container and VM sizes are highly variable and subject to the demands of a specific application or applications.

A major factor in the interest in containers is that they remain consistent throughout the application lifecycle. This makes for an agile environment and facilitates new approaches, such as continuous integration (CI) and continuous delivery (CD). Containers also spin up faster than VMs, which is important for distributed applications.

Disadvantages of containers

A potential drawback of containerization is lack of isolation from the host OS. Because containers share a host OS, security threats have easier access to the entire system when compared with hypervisor-based virtualization. One approach to addressing this security concern has been to create containers from within an OS running on a VM. This approach ensures that, if a security breach occurs at the container level, the attacker can only gain access to that VM's OS, not other VMs or the physical host.

Another disadvantage of containerization is the lack of OS flexibility. In typical deployments, each container must use the same OS as the base OS, whereas hypervisor instances have more flexibility. For example, a container created on a Linux-based host could not run an instance of the Windows Server OS or applications designed to run on Windows Server.

Monitoring visibility can be another issue. With hundreds or more containers potentially running on a single server, it can be difficult to see what is happening inside each one.

Various technologies from container and other vendors, as well as open source projects, are available and under development to address the operational challenges of containers, including security tracking systems, monitoring systems based on log data, and orchestrators and schedulers that oversee operations.

Container uses

Containers are frequently paired with microservices and the cloud but offer benefits to monolithic applications and on-premises data centers as well.

Containers are well-adapted to work with microservices, as each service that makes up the application is packaged in an independently scalable container. For example, a microservices application can be composed of containerized services that generate alerts, log data, handle user identification and provide many other services. Each service operates on the same OS while staying individually isolated, and each can scale up and down to respond to demand. Cloud infrastructure is designed for this kind of elastic, on-demand scaling.
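As a sketch of that independent scaling, the function below runs or removes replicas of a single service image using the Docker SDK for Python. A production deployment would normally hand this job to an orchestrator such as Kubernetes; the image name and the index-based naming scheme are simplified placeholders.

```python
import docker

client = docker.from_env()

def scale(image, name_prefix, replicas):
    """Run or remove containers so roughly `replicas` instances of `image` exist.
    Naming by index is simplistic and only meant to illustrate the idea."""
    current = [c for c in client.containers.list() if c.name.startswith(name_prefix)]
    for i in range(len(current), replicas):                 # scale up
        client.containers.run(image, detach=True, name=f"{name_prefix}-{i}")
    for c in current[replicas:]:                            # scale down
        c.stop()
        c.remove()

scale("example/alert-service:1.0", "alerts", replicas=4)    # respond to rising demand
scale("example/alert-service:1.0", "alerts", replicas=2)    # scale back down later
```

Only the alert service changes here; the other containerized services are untouched, which is the per-service elasticity described above.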

Traditional monolithic application architectures are designed so all the code in a program is written in a single executable file. Monolithic applications don't readily scale in the way that distributed applications do, but they can be containerized. For example, Docker Modernize Traditional Applications (MTA) helps users to transition monolithic applications to Docker containers as is, with adaptations for better scaling, or via a full rebuild and rearchitecting.
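A hedged sketch of that "as is" path, using the Docker SDK for Python rather than Docker's MTA program itself: build an image from a directory that already contains the monolith and a Dockerfile, then run it unchanged. The path, tag and port are placeholders.

```python
import docker

client = docker.from_env()

# Build an image from ./legacy-app, which is assumed to contain the application
# and a Dockerfile describing how to copy and start it.
image, build_logs = client.images.build(path="./legacy-app", tag="legacy-app:1.0", rm=True)
for line in build_logs:
    print(line.get("stream", "").rstrip())

# Run the monolith in a container, publishing its existing port on the host.
client.containers.run("legacy-app:1.0", detach=True, ports={"8080/tcp": 8080})
```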

Container tool and platform providers

There are many vendors that offer container platforms and container management tools, such as cloud services and orchestrators. Docker and Kubernetes are well-known product names in the container technology space, and the technologies underpin many other products.

Docker is an open source application container platform designed for Linux and, more recently, Windows, macOS and mainframe OSes. Docker uses resource isolation features of the Linux kernel, such as cgroups and namespaces, to create isolated containers. Docker is also the name of the eponymous company, which sells enterprise-supported container hosting and management products.

Microsoft offers containerization technologies, including Hyper-V containers and Windows Server containers. Both types are created, maintained and operated similarly, as they use the same container images. However, the two differ in their level of isolation. Isolation in Windows Server containers is achieved through namespaces, resource control and other techniques. Hyper-V containers provide stronger isolation by running each container instance inside a lightweight VM, which makes them closer to system containers.

The open source container orchestrator Kubernetes, originally created by Google, organizes containers into pods that run on nodes, the hosting resources. Kubernetes can automate the deployment, scaling, maintenance and operation of application containers. Many products build on Kubernetes with added features and/or support, such as Rancher, Red Hat OpenShift and Platform9. Other orchestration tools are also available, such as Mesosphere DC/OS and Docker Swarm.
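A minimal sketch of that model with the official Kubernetes Python client, assuming a configured cluster and a local kubeconfig: a single-container pod is defined and submitted to the API server, which schedules it onto a node. The pod name, labels and image are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()                 # use local kubeconfig credentials
core = client.CoreV1Api()

# A pod wrapping one nginx container; in practice a Deployment would manage replicas.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="web", image="nginx:alpine",
                           ports=[client.V1ContainerPort(container_port=80)]),
    ]),
)
core.create_namespaced_pod(namespace="default", body=pod)

# Nodes are the hosting resources that pods get scheduled onto.
for node in core.list_node().items:
    print(node.metadata.name)
```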

The major cloud vendors all offer diverse containers as a service (CaaS) products as well, including Amazon Elastic Container Service (ECS), AWS Fargate, Google Kubernetes Engine (GKE), Microsoft Azure Container Instances (ACI), Azure Kubernetes Service (AKS) and IBM Cloud Kubernetes Service, among many more. Containers can also be deployed on public or private cloud infrastructure without the use of dedicated container products from the cloud vendor.

This was last updated in July 2018


Join the conversation

How has containerization affected your data center?

Hi, what does the term "overhead" mean in virtualization? I'm a newbie, so please explain. :)

Hi dave20, great question. In this case, overhead refers to the CPU, memory and disk resources associated with running multiple copies of an operating system. Because containers on the same physical server all share the same operating system kernel, you don't need to run multiple copies of the operating system on that server, which reduces the CPU, memory and disk resources those extra copies would otherwise consume.

If multiple containers run on top of a single host OS and share the kernel, and I run a container app that requires a significant amount of resources alongside other container apps that do not, will I exhaust the system or starve my other apps? I am learning about containerization right now, so please help me understand.

Good question, utsa2016. The quick answer is that it depends, and that it falls on the server/virtualization admin to make sure they only deploy apps that the underlying hardware can support. Let me expand. I'll use Linux containers (LXC) as an example, but the underlying theory applies to other container platforms. Within the Linux kernel, the cgroups feature allows an admin to isolate, limit and prioritize resources for certain processes. Linux containers rely on cgroups to isolate and limit the resource access of containers, so applications within containers only have access to the resources you allocate. If all containers on a host are properly sized and limited based on application needs, no application should be starved at the expense of another. So, in the case of your application that requires significant resources, you should calculate whether the physical host has enough resources to support the sum of the resources required by all containers on the host. Then, by limiting each container's access to only the resources it needs, you should be able to avoid performance problems associated with resource contention.

Hi, I am new to virtualization and containers. I want to know if there is any way to make containers fail-safe. What I mean is: if the server running a container fails, will I be able to transfer the container to a different server automatically, as is possible in a hypervisor-based system through its management software?
