Application containerization is an OS-level virtualization method used to deploy and run distributed applications without launching an entire virtual machine (VM) for each app. Multiple isolated applications or services run on a single host and access the same OS kernel. Containers work on bare-metal systems, cloud instances and virtual machines, across Linux and select Windows and macOS versions.
Application containerization benefits and drawbacks
Supporters of containerization point to its efficiency in memory, CPU and storage compared with traditional virtualization and physical application hosting. Without the overhead required by VMs, it is possible to support many more application containers on the same infrastructure.
Portability is another benefit. If the OS is the same across systems, an application container can run on any system and in any cloud without requiring code changes. There are no guest OS environment variables or library dependencies to manage.
Reproducibility is another advantage of application containerization. This is one reason container adoption often fits within a DevOps methodology. Throughout the application lifecycle, from code build through test and production, the file systems, binaries and other information stay the same. All the development artifacts become one image. Version control at the image level replaces configuration management at the system level.
One potential drawback of containerization is a lack of isolation from the host OS. Because application containers are not abstracted from the host OS the way VMs are, some experts warn that security threats have easier access to the entire system. Security scanners and monitoring tools can protect the hypervisor and OS, but not application containers. However, containerization also offers some security improvements, thanks to the increased isolation of application packages and the more specialized, smaller-footprint OSes that run them. Policies dictate privilege levels for containers to create secure deployments.
Also, application containerization is a relatively new and rapidly developing enterprise IT methodology, so change and instability are unavoidable. This can cut both ways: technology improvements could address bugs and increase stability in container technology, but there is generally a lack of container education and skills among IT workers. Compared with the server virtualization field, there are far fewer administrators who understand containers.
OS lock-in could also pose a problem, but developers already write applications to run on specific operating systems. If an enterprise needs to run a containerized Windows application on Linux servers, or vice versa, a compatibility layer or nested virtual machines would solve the problem. However, this would increase complexity and resource consumption.
How application containerization works
Application containers include the runtime components -- such as files, environment variables and libraries -- necessary to run the desired software. Application containers consume fewer resources than a comparable deployment on virtual machines because containers share resources without a full operating system to underpin each app. The complete set of information to execute in a container is the image. The container engine deploys these images on hosts.
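To make the image concept concrete, here is a minimal Dockerfile sketch of this model; the base image, file names and command below are illustrative assumptions, not requirements of any particular product:

```dockerfile
# Hypothetical image definition for a small Python service.
# python:3.12-slim, requirements.txt and app.py are placeholder names.
FROM python:3.12-slim

# Environment variables baked into the image travel with it to every host.
ENV APP_ENV=production

WORKDIR /app

# Copy the app's library dependencies and code into the image layers.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# The command the container engine runs when it starts a container
# from this image.
CMD ["python", "app.py"]
```

Building this file produces an image; the container engine can then deploy that same image, unchanged, on any compatible host.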
The most common app containerization technology is Docker, specifically the open source Docker Engine and containers based on the universal runC runtime. Docker Swarm is a clustering and scheduling tool. Using Docker Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system. The main competitive offering is CoreOS' rkt container engine, which relies on the App Container (appc) spec as its open, standard container format but can also execute Docker container images. Users and ecosystem partners have voiced concern about vendor lock-in with app containerization, but this concern is tempered, in part, by the large amount of open source technology underpinning container products.
Application containerization works well with microservices and distributed applications, as each container operates independently of the others and uses minimal resources from the host. Each microservice communicates with the others through application programming interfaces (APIs). The container virtualization layer can scale up microservices to meet rising demand for an application component and distribute the load. With virtualization, the developer can present a set of physical resources as disposable virtual machines. This setup also encourages flexibility: for example, if a developer wants a variation from the standard image, they can create a container that holds only the new library.
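The API-style communication between microservices can be sketched without any container tooling at all. In this toy Python example (the "inventory" service, its endpoint and its payload are invented for illustration), one piece of code plays a tiny HTTP microservice and another consumes its API, just as two containerized services would:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Toy "inventory" microservice exposing a single JSON endpoint.
class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"sku": "widget", "in_stock": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo's output quiet
        pass

# Bind to port 0 so the OS picks any free port.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second "service" (here, the same process) consumes the API over HTTP,
# exactly as it would if each service ran in its own container.
url = f"http://127.0.0.1:{server.server_port}/stock"
stock = json.loads(urlopen(url).read())
print(stock["in_stock"])  # 42
server.shutdown()
```

In a real deployment, each side would run in its own container and reach the other through a service name or load balancer rather than 127.0.0.1, but the API contract between them would look the same.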
To update an application, a developer makes changes to the code in the container image. The developer then redeploys that image to run on the host OS.
Application containerization vs. virtualization and system containers
Server virtualization abstracts the operating system and application from the underlying hardware or virtual resources. A hypervisor layer sits between the memory, compute and storage resources and the operating systems, applications and services that run on them. Each application runs on its own copy of an OS, which allows different applications on the same host to use different OS versions. However, this setup also consumes more resources and requires more OS licenses than a containerized one.
Containers can run inside of virtual machines. This means a host machine could have multiple OSes supporting multiple containers, all sharing the same physical resources. Application containers create a safe space for app code to consume host resources without acknowledging or depending upon the existence of other applications using the same OS.
System containers perform a role similar to virtual machines, but without hardware virtualization. System containers, also called infrastructure containers, include the host operating system, application libraries and execution code. System containers can host application containers.
Although system containers also rely on images, they are generally long-running, not ephemeral like application containers. An administrator updates and changes system containers with configuration management tools, rather than destroying and rebuilding images when a change occurs.
Canonical Ltd., developer of the Ubuntu Linux operating system, leads the LXD system containers project. Another system container option is OpenVZ.
Types of app containerization technology
There are other application containerization technologies in addition to Docker, including:
- Apache Mesos -- an open source cluster manager. It handles workloads in a distributed environment through dynamic resource sharing and isolation. Mesos is suited for the deployment and management of applications in large-scale clustered environments.
- Google Kubernetes Engine -- a managed, production-ready environment for deploying containerized applications. It enables rapid app development and iteration by making it easy to deploy, update and manage applications and services.
- Amazon Elastic Container Registry (ECR) -- an Amazon Web Services product that stores, manages and deploys Docker container images. Amazon ECR hosts images in a highly available and scalable architecture, enabling developers to dependably deploy containers for their applications.
- Azure Kubernetes Service (AKS) -- a managed container orchestration service based on the open source Kubernetes system. AKS is available on the Microsoft Azure public cloud. Developers can use AKS to deploy, scale and manage Docker containers and container-based applications across a cluster of container hosts.
Considerations when choosing a platform for containerization
When selecting a platform for containerization, developers should consider the following:
- Application architecture -- focus on the architectural decisions they need to make, such as whether the applications are monolithic or microservices, and whether they are stateless or stateful.
- Workflow and collaboration -- consider the changes to the workflows and whether the platform will enable them to easily collaborate with other stakeholders.
- DevOps -- consider the requirements for using the self-service interface to deploy their apps using the DevOps pipeline.
- Packaging -- consider the formats and tools used to package the application code, its dependencies and the containers themselves.
- Monitoring and logging -- ensure that the available monitoring and logging options meet their requirements and work well with their development workflows.
IT operations should consider:
- Architectural needs of applications -- ensure that the platform meets the architectural needs of the application as well as the storage needs for stateful applications.
- Legacy application migration -- the platform and tooling around the platform must support any legacy applications that have to be migrated.
- Application updates and rollback strategies -- work with the developers to define application update and rollback strategies that meet the service-level agreement.
- Monitoring and logging -- put plans in place for the right infrastructure and application monitoring and logging tools to collect a variety of metrics.
- Storage and network -- ensure that the necessary storage clusters, network identities and automation to handle the needs of any stateful applications are in place.
The history of container technology
- Container technology was first introduced in 1979 with Unix version 7 and the chroot system. Chroot ushered in the beginning of container-style process isolation by restricting the file access of an application to a specific directory -- the root -- and its children. A key benefit of chroot separation was improved system security. An isolated environment couldn't compromise external systems if an internal vulnerability was exploited.
- FreeBSD introduced the jail command into its operating system in March 2000. The jail command was much like the chroot command, but it included additional process sandboxing features to isolate file systems, networks and users. FreeBSD jail made it possible to assign an IP address to each jail, configure custom software installations and make modifications to each jail. However, applications within the jail had limited capabilities.
- Solaris containers, which were released in 2004, created full application environments via Solaris Zones. Zones enabled a developer to give an application full user, process and file system space, as well as access to the system hardware. But the application was only able to see what was within its own zone.
- In 2006, Google launched process containers designed for isolating and limiting the resource use of a process. The process containers were renamed control groups (cgroups) in 2007 so as not to be confused with the word container.
- Then cgroups were merged into Linux kernel 2.6.24 in 2008. This led to the creation of what's now known as the LXC (Linux containers) project. LXC provided virtualization at the OS level by enabling multiple isolated Linux containers to run on a shared Linux kernel. Each of these containers had its own process and network space.
- Google changed containers again in 2013 when it open-sourced its container stack as a project called Let Me Contain That For You (LMCTFY). Using LMCTFY, developers could write container-aware applications, meaning applications could be programmed to create and manage their own subcontainers. In 2015, Google stopped work on LMCTFY, choosing instead to contribute the core concepts behind it to Docker's libcontainer project.
- Docker was released as an open source project in 2013. With Docker, containers could be packaged so that they could be moved from one environment to another. Initially, Docker relied on LXC technology. However, LXC was replaced with libcontainer in 2014. This allowed containers to work with Linux namespaces, libcontainer control groups, capabilities, AppArmor security profiles, network interfaces and firewall rules.
- In 2017, companies such as Pivotal, Rancher, AWS and even Docker changed gears to support the open source Kubernetes container scheduler and orchestration tool.