Application containerization is an OS-level virtualization method used to deploy and run distributed applications without launching an entire virtual machine (VM) for each app. Multiple isolated applications or services run on a single host and access the same OS kernel. Containers work on bare-metal systems, cloud instances and virtual machines, across Linux and select Windows and macOS versions.
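This kernel-sharing isolation rests on Linux features such as namespaces and control groups. As a rough, Linux-only illustration (not tied to any particular container engine), every process can see the kernel namespaces it belongs to under /proc:

```python
import os

# On Linux, /proc/<pid>/ns lists the kernel namespaces a process belongs to.
# Container engines isolate applications by giving each container its own
# mount, PID, network and other namespaces instead of a full guest OS.
ns_dir = "/proc/self/ns"
namespaces = sorted(os.listdir(ns_dir))
print(namespaces)  # typically includes 'cgroup', 'ipc', 'mnt', 'net', 'pid', 'uts'
```

A containerized process sees the same kind of listing, but its namespaces are private copies, which is why it cannot observe other applications on the host.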
How application containerization works
Application containers include the runtime components -- such as files, environment variables and libraries -- necessary to run the desired software. Application containers consume fewer resources than a comparable deployment on virtual machines because containers share resources without a full operating system to underpin each app. The complete set of information needed to execute in a container is called the image. The container engine deploys these images on hosts.
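A Dockerfile illustrates how those runtime components are declared and packaged into an image. The file below is a minimal sketch for a hypothetical Python service; the file names and image contents are illustrative, not prescriptive:

```dockerfile
# Base layer supplies the shared libraries and tools the app needs
FROM python:3.11-slim
WORKDIR /app

# Copy declared dependencies and install them into the image
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code itself
COPY . .

# Environment variables and the startup command are baked into the image
ENV APP_ENV=production
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` produces the image, and `docker run myapp` has the container engine execute it on any host with a compatible kernel.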
The most common app containerization technology is Docker, specifically the open source Docker Engine and containerd, which are built on the universal runtime runC. The main competitive offering is CoreOS' rkt container engine, which relies on the App Container (appc) spec as its open, standard container format but can also execute Docker container images. Users and ecosystem partners have raised concerns about vendor lock-in in app containerization, but these are tempered, in part, by the large amount of open source technology underpinning container products.
Application containerization works well with microservices and distributed applications, as each container operates independently of others and uses minimal resources from the host. Each microservice communicates with others through application programming interfaces, and the container virtualization layer can scale up microservices to meet rising demand for an application component and distribute the load. This setup also encourages flexibility. For example, if a developer needs a variation from the standard image, they can create a container that holds only the new library.
To update an application, a developer modifies the code, builds a new container image, then redeploys that image to run on the host OS.
Application containerization vs. virtualization and system containers
Server virtualization abstracts the operating system and application from the underlying hardware or virtual resources. A hypervisor layer sits between the compute, memory and storage resources and the operating systems, applications and services that run on them. Each application runs on its own copy of an OS. This allows different applications on the same host to use different OS versions, but it also consumes more resources than containers and requires more OS licenses than a containerized setup.
Containers can run inside of virtual machines, which means a host machine could have multiple OSes supporting multiple containers all sharing the same physical resources. Application containers create a safe space for app code to consume host resources without acknowledging or depending upon the existence of other applications using the same OS.
System containers perform a role similar to virtual machines, but without hardware virtualization. System containers, also called infrastructure containers, include the host operating system, application libraries and execution code. System containers can host application containers.
Although system containers also rely on images, they are generally long-lived, not ephemeral like application containers. An administrator updates and changes system containers with configuration management tools, rather than destroying and rebuilding images when a change occurs.
Canonical Ltd., developer of the Ubuntu Linux operating system, leads the LXD system containers project. Another system container option is OpenVZ.
Application containerization benefits and drawbacks
Proponents of containerization point to gains in efficiency for memory, CPU and storage compared to traditional virtualization and physical application hosting. Without the overhead required by VMs, it is possible to support many more application containers on the same infrastructure.
Portability is another benefit. As long as the host OS kernel is compatible across systems, an application container can run on any system and in any cloud without requiring code changes. There are no guest OS environment variables or library dependencies to manage.
Reproducibility is another advantage to containerizing applications, which is one reason why container adoption often coincides with the use of a DevOps methodology. Throughout the application lifecycle from code build through test and production, the file systems, binaries and other information stay the same -- all the development artifacts become one image. Version control at the image level replaces configuration management at the system level.
One potential drawback of containerization is the lack of isolation from the core OS. Because application containers are not abstracted from the host OS the way a VM is, some experts warn that security threats have easier access to the entire system. Security scanners and monitoring tools can protect the hypervisor and OS, but not application containers. However, containerization also promises security benefits because of the increased isolation of application packages and the more specialized, smaller-footprint OSes that run them. Policies dictate privilege levels for containers to create secure deployments.
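Such privilege policies can be expressed directly in deployment configuration. The Docker Compose fragment below is a minimal sketch (the service and image names are hypothetical) that drops Linux capabilities, makes the container filesystem read-only and runs the process as an unprivileged user:

```yaml
services:
  web:
    image: myapp:latest          # hypothetical image name
    read_only: true              # container filesystem is immutable at runtime
    cap_drop:
      - ALL                      # drop all Linux capabilities the app does not need
    user: "1000:1000"            # run as an unprivileged UID/GID, not root
    security_opt:
      - no-new-privileges:true   # block privilege escalation via setuid binaries
```

Starting from zero capabilities and adding back only what the application requires keeps the container's attack surface small even though it shares the host kernel.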
Application containerization is a relatively new and rapidly developing enterprise-level IT methodology, which brings frequent change and some instability; ongoing technology improvements should address bugs and increase stability. Another con of containerization is the lack of education and skill among IT workers: Compared to the server virtualization field, there is a dearth of administrators who understand containers and how to work with them.
OS lock-in could also pose a problem, although developers already write applications to run on specific operating systems. If an enterprise needs to run a containerized Windows application on Linux servers, or vice versa, a compatibility layer or nested virtual machines could solve the problem, but at the cost of added complexity and resource consumption.