Kubernetes was originally created by Google, with version 1.0 launched in 2015. It was inspired by the company's Borg data center management software.
Kubernetes automates the deployment, scaling, maintenance, scheduling and operation of multiple application containers across clusters of nodes. Containers run on top of a common shared operating system (OS) on host machines, but are isolated from each other unless a user chooses to connect them.
Kubernetes is mainly used by application developers and IT system administrators, including DevOps engineers, in organizations that deploy containers.
Important Kubernetes features
Kubernetes works with container runtimes such as Docker and CoreOS rkt, and, through its Container Runtime Interface (CRI), with CRI-O, which runs Open Container Initiative (OCI)-compatible runtimes.
It provides tools for orchestration, secrets management, service discovery, scaling and load balancing. Kubernetes includes automatic bin packing, which places containers on the nodes with the most suitable resources for the job, and it applies configurations through its configuration management features. It protects container workloads by rolling out -- or rolling back -- changes, and it performs health checks on containers, replacing or restarting those that fail.
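The bin-packing idea can be sketched as a simple first-fit placement: each container request is matched to the first node with enough spare capacity. This is a toy model for illustration -- the node names, CPU figures and the single-resource view are invented, and the real scheduler weighs many more factors.

```python
def place(containers, nodes):
    """Assign each container to the first node with enough free CPU (first-fit)."""
    placements = {}
    free = dict(nodes)  # node name -> free CPU (millicores)
    # Placing larger requests first tends to waste less capacity
    # (first-fit decreasing).
    for name, cpu in sorted(containers.items(), key=lambda c: -c[1]):
        for node, spare in free.items():
            if spare >= cpu:
                placements[name] = node
                free[node] = spare - cpu
                break
    return placements

nodes = {"node-a": 1000, "node-b": 500}
containers = {"web": 700, "cache": 400, "logger": 100}
print(place(containers, nodes))
# {'web': 'node-a', 'cache': 'node-b', 'logger': 'node-a'}
```

Note how the 400-millicore cache lands on node-b because node-a's remaining capacity is already too small -- the essence of packing containers against available resources.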
As requirements change, the user can move container workloads in Kubernetes from one cloud provider or hosting infrastructure to another without changing the code.
In Kubernetes, containers run in Pods, the basic scheduling unit, which add a layer of abstraction over containers. A Pod comprises one or more containers co-located on a host machine that can share resources. Kubernetes finds a machine with enough free compute capacity for a given Pod and launches its containers. Each Pod is assigned a unique IP address, which lets applications use ports without the risk of conflict.
A node agent, called a kubelet, manages the Pods, their containers and their images. Kubelets also automatically restart a container if it fails. Alternatively, Kubernetes APIs can be used to manually manage Pods.
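The kubelet's restart behavior can be modeled as a periodic sync pass over a Pod's containers: any container observed in a failed state is restarted. The `Container` class and state strings here are invented for illustration; the real kubelet tracks far richer container status.

```python
class Container:
    """Toy stand-in for a container tracked by a kubelet."""
    def __init__(self, name):
        self.name = name
        self.state = "running"

    def restart(self):
        self.state = "running"

def sync_pod(containers):
    """One kubelet-style sync pass: restart any failed containers."""
    restarted = []
    for c in containers:
        if c.state == "failed":
            c.restart()
            restarted.append(c.name)
    return restarted

pod = [Container("app"), Container("sidecar")]
pod[1].state = "failed"
print(sync_pod(pod))  # ['sidecar'] -- the failed container is restarted
```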
Kubernetes controllers manage clusters of Pods through a reconciliation loop that drives the cluster toward a desired state. The Replication Controller ensures that the requested number of Pods runs according to the user's specifications. It can create new Pods if a node fails, and it can manage, replicate and scale existing Pods.
The Replication Controller also scales containers horizontally, adding or removing containers as the application's overall computing needs fluctuate. For other workloads, a Job controller can manage batch work, and a DaemonSet controller can run a single Pod on each machine in a set.
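The reconciliation loop that controllers use can be sketched as: compare the observed Pods with the desired replica count and emit the create or delete actions that close the gap. The Pod names and action tuples below are invented for illustration.

```python
def reconcile(current_pods, desired):
    """Return the actions needed to converge on the desired replica count."""
    diff = desired - len(current_pods)
    if diff > 0:
        # Too few Pods: create enough to reach the desired count.
        return [("create", f"pod-{i}") for i in range(len(current_pods), desired)]
    if diff < 0:
        # Too many Pods: delete the surplus.
        return [("delete", name) for name in current_pods[desired:]]
    return []  # already at the desired state; nothing to do

print(reconcile(["pod-0"], 3))
# [('create', 'pod-1'), ('create', 'pod-2')]
print(reconcile(["pod-0", "pod-1", "pod-2"], 1))
# [('delete', 'pod-1'), ('delete', 'pod-2')]
```

Running this comparison repeatedly -- rather than executing a one-shot script -- is what lets a controller recover automatically when a node failure changes the observed state.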
The master node runs the Kubernetes API server and controls the cluster. As the core of the control plane, it manages communication and workloads across the cluster's nodes.
A node, also known as a minion, is a worker machine in Kubernetes; it can be either a physical or a virtual machine. Each node runs the services necessary to host Pods and receives management instructions from the master components. These services include a container runtime, such as Docker, along with kube-proxy and the kubelet.
Other important Kubernetes components include labels, key/value pairs used for grouping and service discovery, and the Service, an abstraction that acts as an automatically configured load balancer, routing traffic to a set of Pods across the cluster. Labels tag containers and link them into groups that Services and controllers can select.
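Label selection boils down to a subset match: a Service (or controller) targets every Pod whose labels include all of the selector's key/value pairs. The Pod names and label values below are invented for illustration.

```python
def select(pods, selector):
    """Return the Pods whose labels include every key/value pair in the selector."""
    return [name for name, labels in pods.items()
            if all(labels.get(k) == v for k, v in selector.items())]

pods = {
    "web-1": {"app": "web", "tier": "frontend"},
    "web-2": {"app": "web", "tier": "frontend"},
    "db-1":  {"app": "db",  "tier": "backend"},
}
print(select(pods, {"app": "web"}))
# ['web-1', 'web-2'] -- the Pods a Service with this selector would route to
```

Because the match is by label rather than by Pod name, Pods can be created and destroyed freely and the Service keeps finding the right ones.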
Kubernetes faces competition from other scheduler and orchestrator technologies, such as Docker Swarm and Mesosphere DC/OS. While Kubernetes is sometimes used to manage Docker containers, it also competes with the native clustering capabilities of Docker Swarm.
Many engineers find Docker Swarm easier to use, as it offers a lower barrier to entry and fewer commands than Kubernetes. Swarm users are encouraged to use Docker infrastructure, but are not blocked from using other infrastructures.
Mesosphere emphasizes running containers alongside other workloads, and it easily integrates with machine learning and big data tools such as Cassandra, Kafka and Spark. Mesosphere has launched a marketplace for open source services; has partnered with major vendors, such as Hewlett Packard Enterprise, Microsoft and Dell EMC; and is adding Pods as a feature.
Kubernetes may be the most developed of the three systems in many situations, and it can be adopted as the upstream, open source Kubernetes or as a proprietary, supported version. Kubernetes was designed from its inception as an environment to build distributed applications in containers.
Mesosphere predates widespread interest in containerization and is therefore less focused on running containers. Kubernetes, built to deploy, manage and run distributed applications, has more built-in capabilities for replication and service discovery than Mesosphere.
In many situations, Kubernetes outperforms Docker Swarm's efforts to manage a machine cluster from a single Docker API.
Kubernetes support and enterprise product ecosystem
As an open source project, Kubernetes underpins several proprietary distributions and Kubernetes managed services from cloud vendors.
Red Hat OpenShift is a container application platform for enterprises based on Kubernetes and Docker. The offering targets fast application development, easier deployment and automation, while also supporting container storage and multi-tenancy.
CoreOS Tectonic is a Kubernetes-based container orchestration platform that claims enterprise-level features, such as stable operations, access management and governance.
Other examples of Kubernetes distributions for production use include Rancher from Rancher Labs; the Canonical Distribution of Kubernetes from Ubuntu; and public cloud-based Kubernetes tie-ins, such as Microsoft Azure Container Service and Google Container Engine.
Mirantis is another example of an open source product ecosystem based on Kubernetes that can be used for the internet of things (IoT). The product is billed as a way to manage IQRF networks and gateways for IoT applications, such as smart cities.
Kubernetes is currently at stable release version 1.7, which focuses on security and stateful application support in containers. As a newer and rapidly evolving technology, Kubernetes will see many features developed and moved through alpha to beta to -- eventually -- become stable components.
Although Google introduced the technology, Kubernetes now has major contributors from across the container industry. Because the project is open source, anyone can contribute through one or more Kubernetes special interest groups. Top corporate contributors include Red Hat, Rackspace and IBM, among others.
In the years following Google's release of Kubernetes, companies across the IT vendor landscape have developed support and integrations for the management platform, but work remains. Where vendor integrations do not yet exist, community members fill the gaps with open source tools. Meanwhile, Docker and other competitors continue to develop and improve their own container management, orchestration and scheduling technologies.
The Cloud Native Computing Foundation, which also has members such as Docker and Amazon Web Services (AWS), hosts Kubernetes. Kubernetes adopters range from cloud-based document management service Box to telecom giant Comcast, as well as enterprises such as SAP's Concur Technologies and startups like Barkly Protects.