
Deploy containers in production with clustering

Deploying containers in production is no easy task. It requires automation and a way to manage complexity. Orchestration tools such as Kubernetes, the open source project that originated at Google, bridge the gap from development to production.

Deploying one or two containers on a laptop during development is easy; building a production-grade application without any additional tooling is a different story.

Most containerized applications in production comprise multiple containers spread across a number of physical or virtual hosts. Complexity increases when IT shops automate container deployments and scale containerized applications to accommodate additional demand.

To deploy containers in production, IT teams must find a way to automate, scale and manage complex architectures. Container orchestration tools make life easier.

Container orchestration and clustering capabilities

Live applications need support for load balancing, rolling updates, monitoring and auto scaling. Container orchestration tools such as the open source Kubernetes bridge the gap from development to production.
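To make those capabilities concrete, here is a minimal sketch of a Kubernetes Deployment, a controller object that declares a desired state for an application, including how updates should roll out. The names, image and port below are hypothetical placeholders, and the manifest assumes a Kubernetes release that supports the apps/v1 API:

    # A hypothetical Deployment for a containerized web application.
    # Kubernetes continuously reconciles the cluster toward this desired state.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-app                    # placeholder name
    spec:
      replicas: 3                      # desired state: three identical pods
      selector:
        matchLabels:
          app: web-app
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1            # swap pods gradually during an update
          maxSurge: 1
      template:
        metadata:
          labels:
            app: web-app
        spec:
          containers:
          - name: web
            image: example/web-app:1.0   # placeholder image
            ports:
            - containerPort: 8080

Rolling out a new version is then a matter of updating the image tag, for example with kubectl set image deployment/web-app web=example/web-app:1.1, while kubectl autoscale deployment web-app --min=3 --max=10 --cpu-percent=80 adds basic CPU-based auto scaling.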

Container orchestration tools manage complex applications composed of multiple containers running on a cluster of host machines. IT teams deploy containers to physical host machines or, for an additional abstraction layer, to virtual machines, running on hardware on premises or in the public cloud.

There's a learning curve when it comes to implementing container orchestration tools. The architecture can initially seem complex. But production-grade containerized applications require a capable management system to deploy containers. Making an investment in orchestration and clustering technology, such as Kubernetes, can set up a team with the agility to move from on-premises data centers to the cloud.

Kubernetes is considered one of the most mature container orchestration tools, with the most features for enterprise-level container deployments. Kubernetes is made up of several building block components:

  • Controllers are systems used to manage a container cluster. They ensure that all of the systems deployed within the cluster are configured properly and adhere to a desired state.
  • Nodes, also referred to as minions, are the physical or virtual machines that run the Docker Engine and host the containers within a managed cluster. A master controller manages the nodes, with cluster state and configuration held in etcd, a distributed key/value store.
  • Pods are a collection of one or more containers deployed on a node. When a pod comprises multiple containers, the cluster guarantees that all of those containers reside on the same node. Containers within each pod share resources, such as disk volumes, if needed. Each pod is assigned a unique IP address within the cluster, which eliminates port conflicts with other pods running on the same node. A minimal pod manifest appears after this list.
  • Labels and selectors identify objects in the system. Labels are simply key/value pairs that identify the components within the system. Once the administrator sets up objects with labels, they can use selectors to query all of the objects that share the same label, and they can take action on a group of components sharing a label. For example, with a single command, a developer can shut down all of the pods labeled with a particular version number, as the selector examples after this list show.
  • Services are a critical piece of the container-deployment puzzle. Services are endpoints that enable multiple pods running across different nodes to work together to power a complete application; the pods within a service are defined using label selectors. Because all containers within a pod are guaranteed to run on the same node, availability increases when the application uses multiple pods distributed across multiple underlying hosts. Services provide built-in load balancing for container deployments so that requests are distributed across each of the pods in round-robin fashion. A sketch of a service definition also follows this list.
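As a minimal sketch of the pod concepts above, the hypothetical manifest below defines a pod with two containers that share a scratch volume; the names and images are assumptions, not part of any real application:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-log-shipper       # placeholder name
      labels:
        app: web
        version: "1.0"
    spec:
      volumes:
      - name: shared-logs
        emptyDir: {}                   # scratch volume shared by both containers
      containers:
      - name: web
        image: example/web:1.0         # placeholder image
        volumeMounts:
        - name: shared-logs
          mountPath: /var/log/web      # web container writes logs here
      - name: log-shipper
        image: example/log-shipper:1.0 # placeholder image
        volumeMounts:
        - name: shared-logs
          mountPath: /logs             # shipper reads the same volume

Both containers are guaranteed to land on the same node, share the pod's single cluster IP address and can exchange data through the shared-logs volume.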
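Services and selectors build directly on those labels. The sketch below, again with hypothetical names and ports, defines a service that load balances across every pod labeled app=web; the trailing comments show the kind of label queries an administrator can run with kubectl:

    apiVersion: v1
    kind: Service
    metadata:
      name: web                        # placeholder service name
    spec:
      selector:
        app: web                       # matches every pod carrying this label
      ports:
      - port: 80                       # port the service exposes
        targetPort: 8080               # port the pods listen on

    # Label selectors also drive day-to-day operations, for example:
    #   kubectl get pods -l app=web           query a labeled group
    #   kubectl delete pods -l version=1.0    act on the group with one command

Requests to the service's address are distributed across the matching pods, so the application keeps serving traffic even if an individual pod or node fails.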

Key benefits of running Kubernetes

Kubernetes grew out of cluster management systems that Google developed and ran internally for years before the company released Kubernetes as open source. The container cluster manager is backed by a large community of developers that moves the project forward on a regular basis. Teams can run Kubernetes anywhere, without worrying about investing in a proprietary technology tied to a particular on-premises deployment or public cloud platform.

Teams can run Kubernetes on premises, on physical or virtual servers, or in a hybrid deployment that spans both. And because it runs on VMs, Kubernetes can deploy containers on servers running within Amazon Web Services, Microsoft Azure, Google Compute Engine or any other infrastructure-as-a-service cloud provider.
