Nobody wants to waste money on underutilized servers. Server capacity has grown enormously over just the last five years, which can exacerbate the problem of idle resources when applications are tied to dedicated servers. The answers range from virtualization to the cloud to containers -- and they're all related.
Virtualization is the foundational technology of modern IT operations. A virtual entity, such as a virtual server or private network, is an abstraction of physical components: it behaves like a dedicated resource but actually occupies a portion of shared ones. Containers take virtualization beyond its origins, and enterprise IT shops should understand container basics -- isolation, portability and resource consumption -- before they add the technology to IT plans.
Virtualization and cloud computing
Before container basics comes container history. Physical hardware systems carry an inherent risk of inefficient use and an inherent benefit of application isolation. To address the inefficiency, the IT industry adopted multitasking systems, which run several applications at once, but that simple form of resource sharing doesn't separate the applications enough. One badly behaved app can degrade the performance of the others, and attackers may even be able to breach security from one app to another.
This tradeoff between isolation and efficiency is inherent in virtualization because of shared resources. Perfect security and performance management require physical isolation on bare metal; the highest efficiency calls for a multitasking OS. Virtualization options fall between these extremes.
Virtual machines (VMs) replicate a physical server, complete with a full OS and middleware. Hypervisor software manages and runs these VMs on physical resources. Because VMs are strongly isolated from one another, multiple users or applications can share a server, even when those workloads come from different organizations or even different companies.
Most cloud computing services are based on VMs because of these isolation traits. Applications that run on VMs are largely unaffected by other workloads sharing the physical server or cluster of servers, and a VM can move from one server to another easily because the machine image that runs in the VM carries everything necessary to run the application. This standardization means an on-premises data center operator can mimic the setup of public clouds, such as AWS or Microsoft Azure, and run the same machine images on premises with private cloud software.
VMs set a standard for isolation, but they only go so far to improve efficiency over physical hardware. In many implementations, all the VMs on a server run the same OS and middleware, which is a lot of duplication. VMs also require the same configuration steps as real servers, which means that they don't always reduce IT operations costs or tasks.
Containers are more efficient, but less independent, than VMs; they enable portable multitasking in IT hosting. The OS partitions its resources for each container, and each container runs an application or service. The OS is shared across all the containers, although middleware is still packaged with the application. Still, one server usually can host twice as many containers as VMs.
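Those per-container resource partitions can be declared explicitly at deployment time. As a minimal sketch, assuming Docker Compose as the tooling (the service names and images are illustrative):

```yaml
# docker-compose.yml -- two services share the host OS kernel,
# but each gets its own partition of memory and CPU
services:
  web:
    image: nginx:alpine        # illustrative image
    mem_limit: 256m            # cap this container's memory
    cpus: 0.5                  # cap it at half a CPU core
  worker:
    image: alpine              # illustrative image
    command: ["sleep", "infinity"]
    mem_limit: 128m
    cpus: 0.25
```

Unlike VMs, neither service carries its own OS; the limits simply fence off shares of the one kernel both containers use.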
The downside of all this efficiency is weaker isolation between containerized applications and components; containers are not as secure as VMs, although container security is improving as the hosting technique evolves. There's also a greater risk that one containerized app affects the others, whether by hogging resources or because it was written improperly for the deployment.
On public clouds, most containers run inside VMs for improved isolation or on bare-metal servers for lower overhead. As container technology improves, so too will container hosting in the public cloud.
Uniform deployment in containers
While containers offer greater efficiency than VMs, the technology's value stems from the fact that containers abstract an application deployment environment, including the application network. Another key element of container basics is the presumptive deployment structure, which is imposed -- and, therefore, can be relied upon -- by tools that deploy and redeploy applications and components. This setup makes container management easier than that for VMs and physical servers.
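That presumptive deployment structure is usually expressed as a declarative descriptor the tooling can apply and reapply. As a sketch, assuming Kubernetes as the orchestrator (the names, image and port are illustrative):

```yaml
# deployment.yaml -- a declarative descriptor; the orchestrator
# compares it with the live state and redeploys as needed
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # illustrative name
spec:
  replicas: 3                  # the tool keeps three copies running
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: example/app:1.0 # illustrative image
        ports:
        - containerPort: 8080
```

Because the structure is imposed by the descriptor, management tools can rely on it when they deploy, scale or replace components.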
Containers move the virtualization trend away from strict mimicry of a server and toward a new hosting environment that more closely resembles a multitasking slot in an OS. Concurrently, development and IT organizations are deploying componentized software in a highly structured framework, complete with networking tools, so that common orchestration and lifecycle management techniques meet the requirements of container operations. In the container management space, this reduced complexity means the higher efficiency of containers comes with lower operations costs and fewer errors.