
The future of containerization goes more abstract

Containers have benefits over VMs in the data center, but the future of containerization is broader than that.

Virtual machines were once the ultimate means of easily creating and provisioning software packages into production. VMs seemed pretty clever, but in actual use, they require far more base resources than they really should -- and aren't always easy to manage. Although they still have a part to play in the tool set of an IT department, we're in need of a more optimized approach.

Docker containerization promises a more efficient means to wrap and provision systems. Much of the base-level resource set is shared, with one OS serving every instance. In theory, at least, virtual containers save resources over VMs and simplify OS maintenance. IT containerization needs to prove that theory in practice in the near future.

Containers fall short

The future of containerization is wide open -- at least if it can overcome its myriad hang-ups.

First-generation virtual containers, the majority of which are Docker containers, all share one host OS. A VM can run any OS on top of a hypervisor layer, but containers must be compatible with the host OS they share. This is fine if all your workloads are Linux- or Windows-based, but a hybrid environment requires two or more base platforms.
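
A quick way to see the constraint in practice is to compare an image's target OS with the host's OS type before trying to run it. The sketch below uses the Docker SDK for Python; the image name is only an example, and the field names are those returned by docker inspect.

    import docker  # pip install docker

    client = docker.from_env()
    host_os = client.info().get("OSType")    # e.g. 'linux' or 'windows'

    # The image must already be pulled locally for this lookup to succeed.
    image = client.images.get("mcr.microsoft.com/windows/nanoserver:ltsc2022")
    image_os = image.attrs.get("Os")         # target OS recorded in the image

    if image_os != host_os:
        print(f"image targets {image_os}, but this host runs {host_os}; "
              "a hybrid shop needs a second base platform for it")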


Certain applications require specific OS versions or specific patches, and something has to be done to ensure that the container can access them. The app container must carry extra payloads that deal with this problem through call redirection. A call that would normally go to the monolithic, single-instance underlying OS in a containerized environment must instead be rerouted to a specific capability managed within the container. Call rerouting causes incompatibilities, complicates management of the overall OS and adds latency, among other issues.
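
The snippet below is a conceptual sketch, not Docker's actual mechanism: a hypothetical container-local shim intercepts calls that would otherwise go to the shared host OS and serves them from versions pinned inside the container.

    # Conceptual sketch only; all names are hypothetical.
    class HostOS:
        """Stands in for the single, shared host OS."""
        def get_library(self, name):
            return f"host default build of {name}"

    class ContainerShim:
        """Extra payload inside the container that intercepts certain calls
        and serves them from container-local versions instead of the host."""
        def __init__(self, host, local_overrides):
            self.host = host
            self.local_overrides = local_overrides  # pinned versions/patches

        def get_library(self, name):
            if name in self.local_overrides:        # reroute the call
                return f"container-local {name} {self.local_overrides[name]}"
            return self.host.get_library(name)      # fall through to host OS

    shim = ContainerShim(HostOS(), {"libssl": "1.0.2-patched"})
    print(shim.get_library("libssl"))  # served from inside the container
    print(shim.get_library("libz"))    # served by the shared host OS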

IT containerization also raises security problems. A compromised VM has little capability to compromise its neighbors. Containers, despite the name, don't have that contained quality. A clever black hat hacker could use a badly configured container to gain admin privileges to the underlying OS and wreak havoc. This security gap quickly led to changes in how Docker and its surrounding ecosystem operate. For example, Canonical's LXD, while able to create its own containers based on Linux container (LXC) capabilities, can also harden Docker by adding greater granularity and depth of security.
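
One practical response is to audit running containers for the misconfigurations that make such escapes easier. Below is a minimal sketch using the Docker SDK for Python; the checks shown are examples, not an exhaustive policy.

    import docker  # pip install docker

    client = docker.from_env()
    for container in client.containers.list():
        host_config = container.attrs.get("HostConfig", {})
        if host_config.get("Privileged"):
            print(f"{container.name}: runs with --privileged")
        if host_config.get("PidMode") == "host":
            print(f"{container.name}: shares the host PID namespace")
        for cap in host_config.get("CapAdd") or []:
            print(f"{container.name}: added Linux capability {cap}")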

System containers, as provided by operating system virtualization software such as Virtuozzo, use lightweight container architectures but insert a proxy namespace between the container and the underlying OS. The abstracted physical aspects of the underlying platform now appear unique to each container; each container is able to operate against a different build level or version of the underlying OS, because the proxy namespace manages all the necessary redirects in real time. This approach resolves the security and call redirection issues, but system containerization is hardly the end point for containerization. As with all technologies, the future of containerization continues to evolve.
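
To make the idea concrete, here is an illustrative sketch of a proxy namespace as a per-container mapping layer; it is not Virtuozzo's implementation, and the build numbers are invented.

    # Illustrative only; build levels and components are invented.
    OS_BUILDS = {
        "7.9": {"kernel": "3.10", "glibc": "2.17"},
        "8.6": {"kernel": "4.18", "glibc": "2.28"},
    }

    class ProxyNamespace:
        """Per-container layer that redirects OS-level requests to the
        build or version that particular container expects to see."""
        def __init__(self, container_id, os_build):
            self.container_id = container_id
            self.view = OS_BUILDS[os_build]

        def resolve(self, component):
            return f"{self.container_id} -> {component} {self.view[component]}"

    legacy = ProxyNamespace("legacy-app", "7.9")
    modern = ProxyNamespace("new-app", "8.6")
    print(legacy.resolve("glibc"))  # legacy-app -> glibc 2.17
    print(modern.resolve("glibc"))  # new-app -> glibc 2.28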

Outside the box

CoreOS's rkt engine for Linux, a Docker alternative, is moving in arguably the right direction for the future of containerization. Even with Docker, LXC and Virtuozzo combined for container deployments, there is still a high level of redundancy in how systems are provisioned and run. Rkt is a standards-based system built around a single executable, rather than a system daemon. Running as more of an orchestration system, rkt is written to easily identify, verify, fetch, combine and run containers dynamically.

Consider a different approach -- the virtual containers become sets of metadata that simply describe the end system. The orchestration system reads this metadata and then identifies whether an already provisioned service can meet some of the required needs. Orchestration capabilities exist today in Google Kubernetes, CA Automic, HashiCorp Nomad, Apache Mesos or technology from another provider. Containerization users can pull together multiple available functions and then provision code only to fill any gaps and glue together the various services, resulting in a highly resource-efficient system.
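
A hedged sketch of that matching step: the container is reduced to a set of required capabilities, existing services are reused where they fit and the remainder becomes the gap that still needs provisioning. The capability names and registry here are invented for illustration.

    # Invented capability names; the matching logic is the point.
    required = {"queue": "amqp-0.9", "cache": "redis-6", "billing": "v2"}

    running_services = {
        "queue": "amqp-0.9",  # already provisioned, can be reused
        "cache": "redis-6",
    }

    reused, gap = {}, {}
    for capability, version in required.items():
        if running_services.get(capability) == version:
            reused[capability] = version
        else:
            gap[capability] = version

    print("reuse existing services:", reused)
    print("still needs provisioning:", gap)  # only 'billing' must be deployed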

This works well where idempotent systems have taken root: define the desired outcome and use automation to make it so. Metadata-based containers could help determine whether such systems succeed or not.
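
The toy reconciliation loop below shows the "declare the outcome, let automation make it so" pattern; real orchestrators such as Kubernetes run this kind of loop continuously against live cluster state, and the service names here are hypothetical.

    desired_state = {"web": 3, "worker": 2}             # what should exist
    actual_state = {"web": 1, "worker": 2, "batch": 1}  # what exists now

    def reconcile(desired, actual):
        for name, want in desired.items():
            have = actual.get(name, 0)
            if have < want:
                print(f"scale up {name}: {have} -> {want}")
            elif have > want:
                print(f"scale down {name}: {have} -> {want}")
        for name in set(actual) - set(desired):
            print(f"remove undeclared service {name}")

    reconcile(desired_state, actual_state)
    # Run it again after convergence and it does nothing; that repeatable,
    # no-op second pass is the idempotency described above.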

There are problems. Can idempotency really understand the desired end result? Can resources be managed dynamically enough that existing live functions and services can be tasked with servicing another set of calls without failing? How can such overall systems be priced and charged commercially?

Serverless computing, as exemplified by AWS Lambda or Microsoft Azure Functions, could point to a successful model: Back-end orchestration systems manage resource issues, while front-end orchestration systems manage functional idempotency.
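
A minimal AWS Lambda-style handler in Python illustrates the split: the platform's back end decides where and how the code runs, while the code itself only declares an idempotent function. The event field used here is a made-up example.

    def handler(event, context):
        """Entry point invoked by the platform; resources are its problem."""
        order_id = event.get("order_id")   # hypothetical input field
        if order_id is None:
            return {"statusCode": 400, "body": "order_id is required"}
        # Idempotent by design: the same input always yields the same result,
        # so front-end orchestration can safely retry or reuse the call.
        return {"statusCode": 200, "body": f"processed order {order_id}"}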

As containerization continues to evolve, choose a strategy carefully, and ensure that it accounts for possible changes in how virtual containers are operated.

