As applications and infrastructure become increasingly abstracted compositions, the relationships among their components matter more than ever. How a DevOps deployment abstracts both goals and resources is critical.
One of the strengths of the container orchestration system Kubernetes is that it abstracts deployment with Nodes and Pods. These elements work in the context of clusters, Deployments and Services. Admins must grasp the relationships among all of these elements to fully understand Kubernetes Pods and Nodes.
An application comprises a set of related components threaded together via workflows. Each component is critical to the application and must be deployed on infrastructure that matches its hosting and connection requirements. If these workflow-based component relationships are lost, the application won't run when deployed and can't be restored. Kubernetes Pods capture this relationship information.
Kubernetes Pods are virtual structures that hold a set of colocated containers. From a systems management perspective, each Pod looks like a single server, with one IP address and one set of ports, hosting multiple Linux containers. The number of containers within a Pod is not visible from this outside perspective, because the Pod abstracts the application at that level. Pods also carry all the environment variables that the components they represent need, so a Pod is a complete unit of deployment.
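A minimal Pod manifest makes this concrete. The names and images below are illustrative assumptions, not taken from any particular deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod                    # hypothetical Pod name
spec:
  containers:                      # both containers share the Pod's
  - name: web                      # IP address and port space
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: log-agent                # colocated helper container
    image: fluent/fluentd:v1.16-1
    env:
    - name: LOG_LEVEL              # environment variables travel with the Pod
      value: "info"
```

From the outside, this Pod presents a single IP address; the two containers inside it can reach each other over localhost.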
The mechanics to deploy Kubernetes Pods
Kubernetes Pods deploy on Nodes, a kind of logical machine. A Node is a hosting point for containers and may be a VM or a bare-metal server. One or many Pods can deploy per Node. Nodes are typically grouped into clusters, which represent pools of resources that cooperate to support applications. If a Node breaks, Kubernetes redeploys its Pods on other Nodes within the cluster. Each Node contains a Kubernetes runtime agent, which manages the container orchestrator's tasks, as well as a runtime for the container system, such as Docker or rkt.
A Kubernetes deployment starts with clusters of Nodes. Each cluster is a resource pool for a set of cooperative applications: They exchange information via workflow or shared databases, or they support a cohesive set of business functions. Cluster servers are akin to virtualization or cloud resource pools. They consist of Nodes to which Pods are mapped at deployment. Each cluster has a master Node that coordinates Kubernetes processes across the cluster.
Deployment is the process of assigning Pods to Nodes in a cluster. Policies determine how Nodes are shared and how many replicas of a Pod Kubernetes creates for redundancy. Nodes in a cluster are general resources for the applications associated with it, but it's possible to reserve -- or, in Kubernetes parlance, taint -- a Node to limit its use to a particular set of Pods. After a Kubernetes Deployment assigns Pods to Nodes, it activates each Node's container management system to deploy the containers based on those parameters, then waits. The result isn't yet functional.
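A Deployment object expresses those policies declaratively. The labels, replica count and toleration below are assumptions for illustration; the matching taint would be applied to a Node separately, with a command such as kubectl taint:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment            # hypothetical name
spec:
  replicas: 3                     # redundancy policy: keep three Pod copies
  selector:
    matchLabels:
      app: web
  template:                       # Pod template stamped out per replica
    metadata:
      labels:
        app: web
    spec:
      tolerations:                # lets these Pods land on a Node tainted
      - key: dedicated            # with dedicated=web:NoSchedule
        operator: Equal
        value: web
        effect: NoSchedule
      containers:
      - name: web
        image: nginx:1.25
```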
When Deployment occurs, the Kubernetes Pods exist but are invisible, not yet exposed to the outside world. Kubernetes and the container management system recognize the Pods, but there's no address mapping for their IP addresses and ports. Services provide that exposure.
A Service is a collection of Pods within a cluster that includes instructions on how the Pods' functionality is accessed within the production deployment. Service definitions can expose functionality within a cluster only -- the default -- or externally on a virtual private network (VPN) or the public internet. The administrator can also define the Service to specify an external load balancer that divides work across a set of Pods.
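A Service definition along these lines ties the pieces together. The selector and type here are assumptions; the default type, ClusterIP, gives the cluster-only behavior described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service           # hypothetical name
spec:
  selector:
    app: web                  # targets Pods carrying this label
  ports:
  - port: 80                  # port the Service exposes
    targetPort: 80            # port the Pods listen on
  type: LoadBalancer          # requests an external load balancer that
                              # divides work across the matching Pods;
                              # omit for cluster-internal access only
```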
Group like containers into Pods
Kubernetes Pods are pools of containers mapped to an application. Don't put containers that aren't closely related into the same Pod, and don't put Nodes that support unrelated applications into the same cluster. Discipline in these relationship assignments is critical to making Kubernetes work in production.
A Pod represents the containers that should be hosted as a unit. If anything in a Pod breaks, administrators recover the entire Pod. If two unrelated applications share the same Pod, a failure in one could break both. Similarly, replicating a Pod replicates everything in it.
Containers normally have host-private IP addresses, which restrict container communication to the single shared machine -- in Kubernetes, a Pod. Kubernetes gives every Pod its own cluster-private IP address, so all Pods within a cluster can communicate with one another by default. Users can alternatively map some cluster-private addresses to external IP addresses, such as on the corporate VPN. To define microservices or components that will be shared among applications, define them as their own cluster and Service, and expose them at public addresses.
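For a shared component, a cluster-internal Service is often sufficient; other Pods in the cluster then reach it by its DNS name. The service name below is hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: shared-auth           # hypothetical shared microservice
spec:
  selector:
    app: auth
  ports:
  - port: 8080
  # type defaults to ClusterIP: inside the cluster, clients reach it at
  # shared-auth.<namespace>.svc.cluster.local; map it to an external
  # address only if applications outside the cluster must consume it
```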
Pods and Nodes do not define Kubernetes. Kubernetes is really about clusters and Services, so think of Pods and Nodes with clusters and Services in mind.