
Definition

Kubernetes

Contributor(s): Kathleen Casey; Alan Earls; Adam Hoffman

Kubernetes, also referred to as K8s, is an open source system for managing Linux containers across private, public and hybrid cloud environments. In other words, Kubernetes can be used to manage microservice architectures and is deployable on most cloud providers.

Kubernetes automates the deployment, scaling, maintenance, scheduling and operation of multiple application containers across clusters of nodes. Containers run on top of a common shared operating system (OS) on host machines but are isolated from each other unless a user chooses to connect them.

Kubernetes is mainly used by application developers and IT system administrators -- including DevOps engineers -- in organizations that deploy containers.

Common Kubernetes Terms

Cluster: The foundation of the Kubernetes engine. Containerized applications run on top of a cluster, which is the set of machines on which those applications are managed and run.

Node: Worker machines that make up clusters.

Pod: A group of one or more containers deployed together on the same host machine.

Replication Controller: An abstraction used to manage pod lifecycles and keep the requested number of pod replicas running.

Selector: A matching expression used to find and classify specific resources by their labels (see the code sketch after this list).

Label: Key-value pairs used to filter, organize and perform mass operations on a set of resources.

Annotation: A key-value pair similar to a label but with a much larger data capacity; annotations hold non-identifying metadata and are not used to select objects.

Ingress: An application programming interface (API) object that controls external access to services in a cluster, usually over HTTP. It offers name-based virtual hosting, load balancing and Secure Sockets Layer (SSL) termination.
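
To make the label and selector terms concrete, the following is a minimal sketch using the official Kubernetes Python client (the kubernetes package) and a local kubeconfig; the default namespace and the app=web label are hypothetical values, not part of this article.

from kubernetes import client, config

config.load_kube_config()   # read cluster credentials from the local kubeconfig
v1 = client.CoreV1Api()

# A selector matches resources by their labels: list every pod labeled app=web.
pods = v1.list_namespaced_pod(namespace="default", label_selector="app=web")
for pod in pods.items:
    print(pod.metadata.name, pod.status.pod_ip)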

Infrastructure of Kubernetes

A pod consists of one or more containers co-located on a host machine, and its containers can share resources. Kubernetes finds a machine with enough free compute capacity for a given pod and launches the associated containers. Each pod is assigned a unique IP address, which lets applications use ports without risk of conflict.

A node agent, called the kubelet, manages the pods, their containers and their images. The kubelet also automatically restarts a container if it fails. Alternatively, the Kubernetes APIs can be used to manage pods manually.
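
As a hedged illustration of managing a pod directly through the Kubernetes API, the sketch below uses the official Python client; the pod name, label and nginx image are hypothetical examples.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-pod", labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="web", image="nginx:1.25",
                           ports=[client.V1ContainerPort(container_port=80)])
    ]),
)

# The API server accepts the pod, the scheduler places it on a node with
# free capacity, and that node's kubelet starts (and restarts) the container.
v1.create_namespaced_pod(namespace="default", body=pod)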

Kubernetes controllers manage clusters of pods, using a reconciliation loop to drive the cluster toward a desired state. The Replication Controller ensures that the number of pods the user requested is actually running. It can create new pods if a node fails, or manage, replicate and scale existing pods.
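
Here is a short sketch of that desired-state model, again assuming the Kubernetes Python client; the controller name, label and image are hypothetical.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

rc = client.V1ReplicationController(
    metadata=client.V1ObjectMeta(name="web-rc"),
    spec=client.V1ReplicationControllerSpec(
        replicas=3,                 # the desired number of pods
        selector={"app": "web"},    # which pods count toward that number
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="nginx:1.25")
            ]),
        ),
    ),
)

# The controller's reconciliation loop creates or deletes pods until the
# observed count matches spec.replicas, including after a node failure.
v1.create_namespaced_replication_controller(namespace="default", body=rc)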

Graphic: a brief overview of the Kubernetes ecosystem.

The Replication Controller also scales containers horizontally in Kubernetes: it adds or removes containers as the overall application's computing needs fluctuate. In other cases, a Job controller can manage batch work, or a DaemonSet controller can run a single pod on each machine in a set.
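
Horizontal scaling then amounts to changing the desired replica count. A minimal sketch, reusing the hypothetical web-rc controller from the sketch above:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Raising spec.replicas tells the reconciliation loop to start two more pods;
# lowering it would remove pods instead.
v1.patch_namespaced_replication_controller(
    name="web-rc", namespace="default", body={"spec": {"replicas": 5}}
)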

Other Kubernetes infrastructure elements include:

Security. The master node runs the Kubernetes API and controls the cluster. It serves as part of the control plane, managing communications and workloads across the cluster.

A node, also known as a minion, is a worker machine in Kubernetes. It can be either a physical machine or a virtual machine (VM). Nodes run the services necessary to support pods and receive management instructions from the master components. Services found on nodes include the container runtime (such as Docker), kube-proxy and the kubelet.

Security itself is broken into four layers: cloud (or corporate data center), cluster, container and code. Stronger security measures continue to be created, tested and implemented at each layer.

Telemetry. Other important Kubernetes components to know include the service, an automatically configured load balancer and integrator that runs across the cluster, and labels, the key-value pairs used for service discovery. A label tags containers and links them together into groups; a minimal service sketch follows this list.

Networking. Kubernetes is all about sharing machines between applications. Because each pod gets its own IP address, the model stays clean and backward-compatible: pods can be treated much like VMs in terms of port allocation, naming, service discovery, load balancing, application configuration and migration.

Registry. Kubernetes connects directly to container registries such as Amazon Elastic Container Registry (Amazon ECR). Each user in the cluster who can create pods is able to run pods that use any image in the ECR registry.
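
To tie the service and label concepts together, here is a minimal sketch of a Service that load-balances across every pod labeled app=web, again using the Kubernetes Python client; the names and ports are hypothetical.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-svc"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},    # label-based service discovery
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

# The Service receives a stable cluster IP and spreads traffic across matching pods.
v1.create_namespaced_service(namespace="default", body=svc)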

Challenges of using Kubernetes

Kubernetes often requires role and responsibility changes within an existing IT department as organizations decide which deployment model to use: a public cloud or on-premises servers.

Load scaling proves to be a primary challenge, and additional data shows that reliability and security are further hurdles of Kubernetes deployment. Larger organizations experience different challenges than smaller ones, and these vary with the number of employees, scalability requirements and infrastructure.

Kubernetes Competitors

Kubernetes vs. Docker

Kubernetes faces competition from other scheduler and orchestrator technologies, such as Docker Swarm, a standalone container orchestration engine. While Kubernetes is often used to manage Docker containers, it also competes with the native clustering capabilities of Docker Swarm.

Docker Swarm is an easy-to-use orchestrator for many network engineers, offering lower barriers to entry and fewer commands than Kubernetes. Swarm users are encouraged to use Docker infrastructure, but are not blocked from using other infrastructures.

In recent years, the two technologies have come to work best in combination. Docker creates, runs and manages containers on a single operating system; Kubernetes then automates their provisioning, networking, load balancing, security and scaling across nodes from a single dashboard.

Kubernetes vs. Mesos

Mesos emphasizes running containers alongside other workloads, and it integrates easily with machine learning and big data tools such as Cassandra, Kafka and Spark. Mesosphere, the company behind Mesos, has made the following moves:

  • Launched a marketplace for open source services.
  • Partnered with major vendors, such as Hewlett Packard Enterprise, Microsoft and Dell EMC.
  • Is adding pods as a feature.

Mesosphere existed prior to widespread interest in containerization and is therefore less focused on running containers. Kubernetes exists as a system to build, manage and run distributed systems, and it has more built-in capabilities for replication and service discovery than Mesosphere.

Kubernetes vs. Jenkins

Jenkins and Kubernetes are both open source tools. Jenkins is a continuous integration server tool that offers easy installation, easy configuration and change set support. Developers often consider Jenkins due to these benefits, as well as internal hosting capabilities.

Kubernetes, as a container tool, is more lightweight, simple and accessible. It is built for a multi-cloud world, whether public, private or hybrid. As the leading Docker container management solution, its open source roots and simplicity lead many developers to opt for Kubernetes.

Overall, Kubernetes may be the most developed of the three systems in many situations, and it can be adopted either as the upstream, open source project or as a proprietary, supported distribution. Kubernetes was designed from its inception as an environment for building distributed applications in containers.

Kubernetes support and enterprise product ecosystem

As an open source project, Kubernetes underpins several proprietary distributions and managed services from cloud vendors.

Red Hat OpenShift is a container application platform for enterprises based on Kubernetes and Docker. The offering targets fast application development, easier deployment and automation, while also supporting container storage and multi-tenancy.

CoreOS Tectonic is a Kubernetes-based container orchestration platform that claims enterprise-level features -- such as stable operations, access management and governance.

Other examples of Kubernetes distributions for production use include Rancher from Rancher Labs; the Canonical Distribution of Kubernetes from Ubuntu; and public cloud-based Kubernetes tie-ins, such as Azure Kubernetes Service and Google Kubernetes Engine (GKE).

Mirantis is another example of an open source product ecosystem based on Kubernetes that can be used for the internet of things (IoT). The product is billed as a way to manage IQRF networks and gateways for IoT applications, such as smart cities.

History of Kubernetes

Kubernetes was originally created by Google, with version 1.0 launched in 2015. It was inspired by the company's Borg data center management software.

In the past, organizations ran applications on physical servers, with no way to define resource boundaries, leading to resource allocation issues.

As a solution, virtualization was introduced, allowing multiple virtual machines (VMs) to run at the same time on a single physical server. Applications are isolated between VMs and gain added security because one application's data cannot be readily accessed by another.

Containers are like virtual machines but with relaxed isolation properties. Like a VM, a container has its own file system, CPU, memory and process space. Containers continue to grow in popularity because they can be created, deployed and integrated quickly across diverse environments.

Future outlook

Kubernetes recently introduced a stable release of version 1.16, which focuses on resources, volumes and metrics.

Although Google introduced the technology, Kubernetes has major contributors from across the container industry. Because it is open source, anyone can contribute to the project through one or more Kubernetes special interest groups. Top corporations that commit code to the project include Red Hat, Rackspace and IBM.

In the years since Google released Kubernetes, companies across the IT vendor landscape have developed support and integrations for the management platform, but work remains: community members still use open source tools to fill gaps where vendor integration does not exist.

The Cloud Native Computing Foundation (CNCF) -- which also has members such as Docker and Amazon Web Services (AWS) -- hosts Kubernetes. Kubernetes adopters range from cloud-based document management service Box to telecom giant Comcast, as well as enterprises such as SAP's Concur Technologies and startups like Barkly Protects.

This was last updated in November 2019
