It's never been easy to deploy and maintain application software, but the explosion of componentized, multi-piece applications has made these tasks even harder. Container deployment can simplify them by abstracting application components from the OS. A container is a package that bundles application code with the middleware it requires, designed to deploy as a unit on a server OS.
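That packaging idea can be illustrated with a minimal Dockerfile sketch. The base image, file names and port below are hypothetical placeholders, not a prescription:

```dockerfile
# Start from a base image that supplies the runtime and middleware layer
# (node:20-alpine is an assumed example; any runtime image works the same way).
FROM node:20-alpine

# Copy the application code into the image.
WORKDIR /app
COPY . .

# Install the dependencies the application requires.
RUN npm install --production

# Document the port the service listens on and define how it starts.
EXPOSE 8080
CMD ["node", "server.js"]
```

An image built from a file like this (`docker build -t myapp .`) bundles code and middleware into one deployable unit, which any host with a container runtime can start the same way (`docker run -d -p 8080:8080 myapp`).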
Containers are a natural partner for microservices and other strategies that break applications into components. When an application is built from components, the orderly workflow among them often depends on compatible versions of software and middleware. Virtualization in the form of hypervisors and VMs packages everything an application component needs in one place and lets businesses deploy it wherever it makes sense. But because each VM contains its own OS, middleware and application combination, the software version in one VM can diverge from the version in others, creating incompatibilities that disrupt the integrated workflow. The duplication also consumes more resources than necessary: many applications on the same server run separate copies of the same OS version rather than sharing one.
Containers are a lightweight alternative to hypervisor-based virtualization; all the containers on a host share one OS. You can fit five to 10 times as many containers as VMs on a given server. Containers are sweeping both data centers and the cloud because they increase efficiency in both resource usage and operations -- a killer combination of benefits. Because containers don't duplicate the OS and middleware for each application component, they also reduce the risk that version differences will disrupt integration.
In theory, any kind of business can use containers, from small companies to enterprises. Most use is concentrated in larger firms, for the simple reason that containers -- a strategy to improve application and resource efficiency -- are most valuable in environments with many applications and servers. Containers do require skills in managing virtual environments and using automated deployment tools -- skills more common among enterprise workers. Still, the benefits are significant enough that any company running multiple applications should seriously consider container deployment.
The container ecosystem
The many software products in the container ecosystem are all designed to manage applications, their components and the resources they connect to and deploy on. At the heart is the container software itself, which builds on virtualization and resource-isolation facilities in the OS. From this basic container software, a growing ecosystem adds features to facilitate application deployment and resource control.
All container systems are a halfway point between VMs and simple general-purpose server OSes. Hypervisors create virtual servers by partitioning the resources of real servers. Each VM looks like an independent server, which means it isolates applications from one another and lets each application pick its own OS and middleware. With a plain multitasking OS, there is no application isolation; because all applications share the same OS and middleware, they must all work with the same version of everything, which is often a problem.
Containers run everything under one OS, but applications remain almost as isolated from one another as they are with VMs. Because all the containers share the OS, they carry much lower overhead than VMs, which produces the five- to 10-times density improvement mentioned above. In addition, containers enforce a simple model of application deployment and redeployment, lowering operations costs. In all, containers offer advantages over both VMs and multitasking OSes: they lower costs more effectively than VMs, and they provide better isolation and security than multitasking OSes.
Docker is the premier container software platform. The purpose of container software tools like Docker is to wrap basic container support in additional features to enhance security and deployment capabilities. To deploy applications in containers, users can either exploit the basic features of container software or turn to add-on tools. These supplementary tools are particularly valuable to manage large numbers of containerized applications and deploy them on a pool of resources rather than specific assigned servers. Docker has grown over time to include even more container deployment features, resulting in some overlap between container software features and the primary add-on tools, which aim specifically at deployment.
Kubernetes, a container orchestration tool originally developed by Google, has become almost as popular as Docker itself and is the best known of the family of container management and orchestration tools.
Container applications and resources
Container management, or orchestration, enables administrators to deploy applications on pools of resources, in the public cloud or a data center, without handling the details of container hosting manually. These resource pools are called swarms in Docker and clusters in Kubernetes. There are also cluster management tools whose scope includes, but is not limited to, containers -- Apache Mesos and CoreOS are examples -- and enterprises with large-scale container deployments and complex applications should review these options. For other adopters, whether container deployment is in the public cloud or the data center, on a cluster or a discrete host, orchestration tools are likely the best option.
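As a sketch of what orchestration looks like in practice, a Kubernetes Deployment manifest like the one below asks the cluster for a number of copies of a container and lets the scheduler decide which hosts run them. The application name, image and replica count are hypothetical examples:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                      # hypothetical application name
spec:
  replicas: 3                        # run three copies; the scheduler picks the hosts
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: example/web-app:1.2   # hypothetical container image
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, the manifest declares a desired state; Kubernetes places the containers on cluster nodes and restarts or reschedules them as needed, so the administrator never assigns containers to specific hosts.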
Buyers should understand, first and foremost, that a container strategy equals container software, like Docker, plus container orchestration, like Kubernetes. Because orchestration tools handle deployment on resources, they are where the biggest differences between public cloud and data center container use appear. The most popular container orchestration tools have both data center and public cloud implementations.
Major public cloud providers, such as Amazon Web Services (AWS), Google and Microsoft, offer services for cloud-hosted containers. The more your container deployment involves public cloud hosting, the more seriously you should consider these services. AWS has the most popular container toolkit, and AWS' Elastic Container Service supports only Docker.
In effect, it's wise to begin your container planning with popular software, such as Docker and Kubernetes, and expand or replace these widely used elements only when you need to. Many popular container tools are packaged versions of the primary Docker and Kubernetes tools. One reason is that most popular container tools are open source: they cost nothing to acquire but come with no formal support. Vendors, including VMware and Red Hat, bundle the tools into complete packages and add support at a cost, which makes them attractive to organizations with limited staff skills and difficulty hiring those skills from the local labor pool.
Support is available for containers, so that shouldn't be a barrier to considering container adoption. If you have a large number of applications and are concerned about resource and deployment costs, you're a container prospect. If so, you'll probably need a container software package, like Docker, and perhaps an orchestration tool, like Kubernetes. The right tools and the right way to assemble them will depend on your skills, your application and container deployment targets, and your specific application needs.
With extensive research into container management software, TechTarget editors focused this series of articles on vendors that provided the following functionalities: orchestration, container networking and hybrid cloud portability. We are featuring vendors that either offer leading-edge unique technology or hold significant market share or interest from enterprises. Our research included Gartner and TechTarget surveys.