
Dive into the decades-long history of container technology

Container technology is almost as old as VMs, although IT wasn't talking about it widely until Docker, Kubernetes and other technologies set off a frenzy of activity.

The rapid development of containerization over the past decade has changed the dynamics of modern IT infrastructure. The number of container technology vendors vying for customers has more than tripled within the past five years. But the history of container technology doesn't begin with Docker's debut in 2013.

VM partitioning dates back to the 1960s, when it enabled multiple users to work on a computer concurrently, each through a single application that behaved as though it had the machine's full resources. The following decades were marked by widespread VM use and development. The modern VM serves a variety of purposes, such as installing multiple OSes on one machine so that it can host multiple applications, each with specific OS requirements that differ from the others.

The history of container technology leapt forward with the development of chroot in 1979, in Version 7 Unix. Chroot marked the beginning of container-style process isolation by restricting an application's file access to a specific directory -- the new root -- and its children. A key benefit of chroot separation was improved security: a vulnerability exploited inside the isolated environment could not be used to compromise the rest of the system.
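
For illustration, the minimal Go sketch below shows the same idea using the chroot(2) system call. The /srv/jail directory and the shell it launches are hypothetical, and the program has to run as root on a Linux host; it is a sketch of the mechanism, not production code.

package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Confine this process and its children to a hypothetical directory tree.
	// /srv/jail must already contain a minimal root filesystem (e.g., /bin/sh).
	if err := syscall.Chroot("/srv/jail"); err != nil {
		panic(err) // requires root privileges
	}
	if err := os.Chdir("/"); err != nil {
		panic(err)
	}
	// From here on, "/" refers to /srv/jail; files outside it are unreachable.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}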

The 2000s were alight with container technology development and refinement. Google introduced Borg, the organization's container cluster management system, in 2003; it relied on the isolation mechanisms that Linux already had in place, said Tim Hockin, principal software engineer at Google. In those early days in the history of container technology, security wasn't much of a concern: Anyone could see what was going on inside the machine, which enabled a system of accounting for who was using the most memory and how to make the system perform better. Nevertheless, this kind of container technology could only go so far, which led to the development of process containers, which became control groups (cgroups) as early as 2004. Cgroups tracked the relationships between processes and reined in how much CPU time, memory and other resources a group of processes could consume. The cgroup concept was absorbed into the Linux kernel in January 2008, after which the Linux container technology LXC emerged. Namespaces developed shortly thereafter to provide the basis for container network security by hiding one user's or group's activity from others.
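
As a rough illustration of what namespaces make possible, the Go sketch below starts a shell in new UTS, PID and mount namespaces, so its hostname and process tree are separate from the host's. It assumes a Linux host and root privileges; resource limits would come from attaching the process to a cgroup, which is not shown here.

package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch a shell in its own UTS, PID and mount namespaces (Linux only,
	// requires root or the appropriate capabilities).
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	// Inside that shell, changing the hostname affects only the new UTS
	// namespace, and the shell runs as PID 1 in its own PID namespace.
}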

Timeline of containerization history

Docker floated onto the scene in 2013 with an easy-to-use interface that put the container concepts in one manageable place, Hockin said. Because Docker enabled multiple applications with different OS requirements to run on the same OS kernel in containers, IT admins and organizations saw an opportunity for simplification and resource savings. Within a month of its first test release, Docker was the playground of 10,000 developers. By Docker's 1.0 release in 2014, the software had been downloaded 2.75 million times -- and, within a year thereafter, over 100 million times.

Containers have a significantly smaller resource footprint than VMs, are faster to spin up and down, and require less overhead to manage. Unlike VMs, which must each encapsulate a fully independent OS and other resources, containers share the same OS kernel and use the shared system to connect to the resources they need, wherever those resources are located. Concern and hesitation arose in the IT community over the security implications of a shared OS kernel: Without the right precautions baked into the container technology, a vulnerable container could mean a vulnerable ecosystem. Additional complaints early in the modern history of containers bemoaned the lack of data persistence, which the vast majority of enterprise applications depend on. Efficient networking also posed problems, as did the logistics of regulatory compliance and distributed application management.

Container technology adoption ramped up in 2017. Companies such as Pivotal, Rancher, AWS and even Docker changed gears to support the open source Kubernetes container scheduler and orchestration tool, cementing its position as the default container orchestration technology. In April of that year, Microsoft enabled organizations to run Linux containers on Windows Server, a major development for Microsoft shops that want to containerize applications and stay compatible with their existing systems.

Container vendors have, over time, addressed security and management issues with tool updates, additions, acquisitions and partnerships, although that doesn't mean containers are perfect in 2018.


Cloud container management, accompanied by the necessary monitoring, logging and alerting technology, is an active space for container-adopting organizations. Containers offer more benefits for distributed applications, particularly microservices, than for larger, monolithic ones: With the help of an orchestration tool such as Kubernetes, each independent service can be fully contained and scaled separately from the others, which cuts resource overhead for the parts of an application that see lighter use. To that end, various public and private cloud providers offer managed container services -- usually based on Docker or Kubernetes -- to make container deployment in the cloud more streamlined, scalable and accessible to administrators. AI and machine learning technologies are similarly attracting enterprise interest both on and off the cloud for improved metrics and data analysis, along with benefits such as error prediction, automated alerts and incident resolution.
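
To illustrate that independent scaling, the Go sketch below uses the Kubernetes client-go library to raise the replica count of a hypothetical "checkout" Deployment in the default namespace, leaving every other service untouched. The Deployment name, namespace and kubeconfig location are assumptions made for the example.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (assumed to be at the default path).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Scale one hypothetical microservice ("checkout") without touching the others.
	deployments := clientset.AppsV1().Deployments("default")
	d, err := deployments.Get(context.TODO(), "checkout", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	replicas := int32(5)
	d.Spec.Replicas = &replicas
	if _, err := deployments.Update(context.TODO(), d, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("checkout scaled to 5 replicas")
}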

And the history of container technology has not come to an end. "The Linux kernel isn't perfect [at isolation], and it never will be," Hockin said. But that doesn't mean people aren't trying to resolve the imperfections that remain. "We already see people pressing the boundaries of what they have and [who] want more [functionality]," he said. Security will continue to get tighter, the amount of trust placed in any single machine will shrink and the field of container providers will consolidate to a couple of established favorites, he predicted. The next generation of containers might look more like VMs, offering more or less the same concepts in a different way and with different tradeoffs.

No matter how container technology evolves, we'll see more of it. Analyst firm Gartner predicted that enterprise container adoption will climb to as high as 50% in 2020, more than double the 20% recorded in 2017.

