Containerization adopters know Kubernetes as a popular tool that automates the deployment, scaling and management of stateless, nonpersistent containerized workloads. The better they understand how it works internally, the more successfully they can use it.
"Kubernetes is especially great for elastic, dynamically scalable workloads," noted Marc Fleischmann, CEO of Datera, a storage-as-a-service software vendor in Sunnyvale, Calif.
Kubernetes orchestrates containers through a set of core configuration concepts.
Kubernetes, originally created by Google, reflects the company's long history with containers, noted Ali Hussain, CTO of Flux7, an Austin, Texas-based IT consultancy. This history is visible in many of Kubernetes' architectural decisions, such as the deployment of containers in Pods to allow for locality between certain workloads, he explained. As these kinds of platforms mature, Kubernetes has consistently stayed at the vanguard, Hussain said. For example, Kubernetes added support for batch jobs a year before Amazon Web Services' EC2 Container Service did. Likewise, he said, Docker recently added support for secrets, while Kubernetes has had native secret management for years.
Kubernetes users must apply these configuration elements properly to overcome orchestration challenges.
Kubernetes configuration and relationships
The control plane and data plane make up the two fundamental aspects of Kubernetes. The control plane focuses on the overall management of an infrastructure deployment, handling how to scale that environment, provide access and govern the entire system. The other primary area of concern -- the data plane -- is where Kubernetes deploys containers onto the Node servers.
From a management and operations perspective, the control plane's focus is on high availability so that there is no damaging effect on the deployment plan or app support where the Node servers -- "the workers within Kubernetes" -- operate, said Mike Lehmann, vice president of product management at Oracle.
Within the data plane, the Kubernetes configuration for applications is governed by Pods, the collections of containers in Kubernetes where application code executes, Lehmann said.
To operate containers in Pods, those Pods need resources. Operations teams must allocate resources to Pods -- sometimes across multiple physical machines. This is where Kubernetes relies on Nodes, Schedulers and Volumes, Hussain said.
A Node is essentially a machine provisioned to run container Pods. The term node is common beyond Kubernetes to describe a unit of physical or virtual IT resources. The Pod, by contrast, is unique to Kubernetes cluster setups.
"In other platforms, you deploy with containers in tasks, but a task is just one type of container in Kubernetes, and when you deploy a Pod, it can contain multiple containers," Hussain said.
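A minimal Pod manifest makes this multi-container idea concrete. The sketch below -- with hypothetical names and example images -- runs a web server alongside a sidecar container in the same Pod, so both share the Pod's network namespace and lifecycle:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # hypothetical Pod name
spec:
  containers:
  - name: web                   # main application container
    image: nginx:1.25           # example image
    ports:
    - containerPort: 80
  - name: log-sidecar           # second container in the same Pod
    image: busybox:1.36         # example image
    command: ["sh", "-c", "tail -f /dev/null"]  # placeholder workload
```

Both containers are scheduled together on the same Node and can reach each other over localhost, which is the locality benefit Hussain describes.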
The Scheduler takes the information on what operations wants to deploy and finds compute resources on the Nodes to run it. In this relationship, Kubernetes balances the tasks' needs based on type of workload, availability, cost, configuration, security and dependencies, among other factors. For example, Hussain said, a web application is a real-time workload that must maintain enough resources to handle its current requests, as well as some room to grow. A batch workload, on the other hand, is not as time-sensitive and performs tasks from a queue. Kubernetes' configuration includes a built-in scheduler to handle these differences, or users can implement a custom scheduler for workloads with specific needs.
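As a sketch of how a Pod declares its resource needs to the scheduler -- and how it can opt into a custom scheduler -- consider this manifest. The Pod name, image and the custom scheduler name are hypothetical; `schedulerName` and the `resources` block are standard Pod spec fields:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker              # hypothetical Pod name
spec:
  schedulerName: my-batch-scheduler  # hypothetical custom scheduler; omit to use the default kube-scheduler
  containers:
  - name: worker
    image: busybox:1.36           # example image
    command: ["sh", "-c", "echo processing queue"]  # placeholder batch task
    resources:
      requests:                   # what the scheduler uses to find a Node with capacity
        cpu: "500m"
        memory: "256Mi"
      limits:                     # hard ceiling enforced at runtime
        cpu: "1"
        memory: "512Mi"
```

The scheduler places the Pod only on a Node whose unreserved capacity covers the requests, which is how Kubernetes balances real-time and batch workloads across the cluster.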
Volumes are another useful concept in Kubernetes cluster setups and operations. Containers are immutable, which helps improve compliance, consistency and security in IT deployments, because no changes can occur on containers in production. However, in a failure, the data in a container is lost. Kubernetes Volumes mount with Pods to give containers persistent storage for as long as the Pod is active.
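A simple way to see the Pod-scoped lifetime of a Volume is an `emptyDir` Volume, which exists for as long as the Pod does and survives individual container restarts. Names in this sketch are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-scratch-volume   # hypothetical Pod name
spec:
  containers:
  - name: app
    image: nginx:1.25             # example image
    volumeMounts:
    - name: scratch               # mounts the Volume defined below
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}                  # created when the Pod starts; deleted when the Pod is removed
```

For data that must outlive the Pod itself, Kubernetes also offers PersistentVolumes backed by external storage, but the pattern of declaring a Volume and mounting it into containers is the same.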
Two other useful elements, Lehmann noted, are the controller manager and kubelets. The Kubernetes controller manager runs the core controllers in a single process and communicates with the tool's API server to manage Pods and other elements of the configuration.
Kubelets track the state of each Node and start and stop application containers within Pods. The control plane reaches into a given kubelet and pulls out configuration information, Lehmann said.
Finally, the proxy service is a critical piece of the overall Kubernetes configuration, Lehmann said. The proxy service provides the ability to plug into external parties and route between internal and external containers. In Kubernetes, kube-proxy routes traffic to the appropriate container based on the IP address and port number of the incoming request.
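The routing that kube-proxy performs is configured through a Service object, which maps a stable IP address and port to the set of Pods behind it. A sketch, with hypothetical names and an assumed `app: web` Pod label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service      # hypothetical Service name
spec:
  selector:
    app: web             # assumed label; traffic is routed to Pods carrying it
  ports:
  - port: 80             # the port clients connect to on the Service
    targetPort: 8080     # the container port kube-proxy forwards requests to
  type: NodePort         # also exposes the Service on each Node's IP for external traffic
```

On each Node, kube-proxy watches Services like this one and programs the local routing rules so that a request to the Service's IP and port lands on a healthy backing container.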