Definition

Kubernetes

Contributor(s): Kathleen Casey; Alan Earls; Adam Hoffman

Kubernetes, also referred to as K8s, is an open source system used to manage Linux containers across private, public and hybrid cloud environments. In other words, Kubernetes can be used to manage microservice architectures and is deployable on most cloud providers.


Kubernetes automates the deployment, scaling, maintenance, scheduling and operation of multiple application containers across clusters of nodes. Containers run on top of a common shared operating system (OS) on host machines but are isolated from each other unless a user chooses to connect them.

Kubernetes is mainly used by application developers and IT system administrators -- including DevOps engineers -- in organizations that deploy containers.

Benefits of Kubernetes

Kubernetes enables users to schedule, run and monitor containers, typically in clustered configurations, and automate related operational tasks. These include the ability to:

  • Continuously check container health, restart failed containers and remove unresponsive ones (see the probe sketch after this list).
  • Perform load balancing to distribute traffic across multiple container instances.
  • Handle varied storage types for container data, from local storage to cloud resources.
  • Set and modify preferred states for container deployment. Users can create new container instances, migrate existing ones to them and remove the old ones.
  • Add a level of intelligence to container deployments, such as resource optimization -- identify which nodes are available and which resources are required for containers, and automatically fit containers onto those nodes.
  • Manage passwords, tokens, SSH keys and other sensitive information.
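
To make the first of these concrete -- automated health checks and restarts -- here is a minimal, hypothetical pod spec with a liveness probe. The name and image are purely illustrative, not from any particular deployment; if the HTTP probe fails repeatedly, Kubernetes kills and restarts the container on its own.

```yaml
# Hypothetical pod spec illustrating automated health checking.
# If the liveness probe fails, the kubelet restarts the container.
apiVersion: v1
kind: Pod
metadata:
  name: web-demo                # illustrative name
spec:
  restartPolicy: Always         # restart containers that fail or exit
  containers:
  - name: web
    image: nginx:1.17           # any image serving HTTP will do
    ports:
    - containerPort: 80
    livenessProbe:              # "continuously check container health"
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5    # wait before the first check
      periodSeconds: 10         # probe every 10 seconds
```

Applying such a manifest with kubectl apply -f pod.yaml hands the desired state to the cluster, which then enforces it.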

Kubernetes infrastructure: How does it work?

Here's a quick dive into Kubernetes container management, its components and how it works:


Pods are composed of one or more containers colocated on a host machine, and the containers in a pod can share resources. Kubernetes finds a machine with enough free compute capacity for a given pod and launches the associated containers. To avoid conflicts, each pod is assigned a unique IP address, which lets applications use ports without risk of collision.
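
As a sketch of that idea, the hypothetical pod below runs two containers that share the pod's single IP address and an ephemeral emptyDir volume; all names are made up for illustration.

```yaml
# Hypothetical two-container pod. Both containers share the pod's
# network namespace (one IP address) and the emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  volumes:
  - name: scratch
    emptyDir: {}                # ephemeral storage shared within the pod
  containers:
  - name: app
    image: nginx:1.17
    volumeMounts:
    - name: scratch
      mountPath: /data
  - name: sidecar
    image: busybox:1.31
    command: ["sh", "-c", "while true; do date >> /data/dates; sleep 5; done"]
    volumeMounts:
    - name: scratch
      mountPath: /data
```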

A node agent, called a kubelet, manages the pods, their containers and their images. Kubelets also automatically restart a container if it fails. Alternatively, Kubernetes APIs can be used to manually manage pods.

Kubernetes controllers manage clusters of pods, using a reconciliation loop to drive the cluster toward a desired state. The Replication Controller ensures that the requested number of pods runs to the user's specifications. It can create new pods if a node fails, or manage, replicate and scale up existing pods.

The basic structure of a Kubernetes cluster: the master creates and schedules pods; nodes host one or more pods; and each pod can encapsulate one or more containers.

The Replication Controller scales containers horizontally, ensuring there are more or fewer containers available as the overall application's computing needs fluctuate. In other cases, a job controller can manage batch work, or a DaemonSet controller can be used to manage a single pod on each machine in a set.
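
Here is a minimal sketch of such a Replication Controller, with illustrative names: raising or lowering the replicas field scales the pod set horizontally, and the controller replaces pods lost to node failures.

```yaml
# Sketch of a ReplicationController that keeps three identical pods alive.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3                   # desired number of pod copies
  selector:
    app: web                    # pods with this label count toward replicas
  template:                     # template used to create replacement pods
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17
        ports:
        - containerPort: 80
```

In newer clusters a Deployment, which manages ReplicaSets, typically fills this role, but the reconciliation principle is the same.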

Other Kubernetes infrastructure elements include:

Master. The master node runs the Kubernetes API and controls the cluster. It serves as part of the control plane, managing communications and workloads across the cluster.

Node. A node, also known as a minion, is a worker machine in Kubernetes. It can be either a physical machine or a virtual machine (VM). Nodes have the services necessary to run pods and receive management instructions from master components. Services found on nodes include the container runtime, such as Docker, as well as kube-proxy and the kubelet.

Security. Kubernetes security is broken into four layers: cloud (or corporate data center), cluster, container and code. Stronger security measures continue to be created, tested and implemented regularly at each layer.

Services and labels. Other important components include an abstraction called a service, which is an automatically configured load balancer that runs across the cluster, and labels, which are key/value pairs used for service discovery. A label tags containers and links them together into groups.
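
For instance, a hypothetical service that discovers pods by label could look like the following sketch; the app=web label matches the replication controller example above.

```yaml
# Hypothetical Service: a stable virtual IP that load-balances traffic
# across every pod carrying the label app=web.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web                    # label-based service discovery
  ports:
  - port: 80                    # port clients connect to
    targetPort: 80              # container port traffic is forwarded to
```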

Networking. Kubernetes is all about sharing machines between applications. Because each pod gets its own IP address, the model is clean and backward-compatible: pods can be treated like VMs in terms of port allocation, naming, service discovery, load balancing, application configuration and migration.

Registry. Kubernetes integrates with container image registries, such as Amazon Elastic Container Registry (Amazon ECR). Each user in the cluster who can create pods can run pods that use any images in the connected registry.

Common Kubernetes terms

Here are basic terms to help grasp how Kubernetes and its deployment work:

  • Cluster: The foundation of the Kubernetes engine; a set of machines on which containerized applications are managed and run.
  • Node: A worker machine; nodes make up clusters.
  • Pod: A group of containers deployed together on the same host machine.
  • Replication Controller: An abstraction used to manage pod lifecycles.
  • Selector: A matching system used for finding and classifying specific resources.
  • Label: A key/value pair used to filter, organize and perform mass operations on a set of resources.
  • Annotation: A label with a much larger data capacity.
  • Ingress: An application program interface (API) object that controls external access to services in a cluster -- usually HTTP. It offers name-based virtual hosting, load balancing and Secure Sockets Layer (SSL) termination (see the sketch after this list).
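
To illustrate the last term, here is a hedged Ingress sketch that routes HTTP traffic by host name to the hypothetical web-svc service shown earlier. The API version shown is the beta one current as of this writing and varies by cluster release.

```yaml
# Sketch of an Ingress providing name-based virtual hosting for a service.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: demo.example.com      # hypothetical host name
    http:
      paths:
      - path: /
        backend:
          serviceName: web-svc  # hypothetical service from the example above
          servicePort: 80
```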

Challenges of using Kubernetes

Kubernetes often requires role and responsibility changes within an existing IT department as organizations decide which deployment model to use: public cloud, on-premises servers or a hybrid of the two. Larger organizations experience different challenges than smaller ones, and these vary with employee count, scalability needs and infrastructure.

  • Skills and support. Some enterprises want the flexibility to run open source Kubernetes themselves, provided they have the skilled staff and resources to support it. Many others choose a package of services from the broader Kubernetes ecosystem that makes deployment and management easier on IT teams.
  • Load scaling. Pieces of an application in containers may scale differently (or not at all) under load, a function of the application and not the method of container deployment. Organizations must factor in how to balance pods and nodes.
  • Distributed complexity. Distributing application components in containers makes it possible to scale features up and down flexibly -- but too many distributed app components increase complexity, and can add network latency and reduce availability.
  • Monitoring and observability. As organizations expand container deployment and orchestration for more workloads in production, it becomes harder to know what's going on behind the scenes -- and thus there is a heightened need to better monitor various layers of the Kubernetes stack, and the entire platform, for performance and security.
  • Security. Deploying containers into production environments adds many layers of security and compliance requirements, from vulnerability analysis on code to multifactor authentication to handling multiple stateless configuration requests simultaneously. Proper configuration and access controls grow ever more important as adoption widens and more organizations put Kubernetes into production. Kubernetes also now has a bug bounty program that rewards those who find security vulnerabilities in the core Kubernetes platform.
Security attack vectors in a Kubernetes architecture: Kubernetes security is a full-stack affair, as attackers can gain control of everything from a container to a cluster.

Kubernetes competitors

Kubernetes vs. Docker

Kubernetes faces competition from other scheduler and orchestrator technologies, such as Docker Swarm, which is a standalone container orchestration engine. While Kubernetes is used to manage Docker containers, it has also competed with the native clustering capabilities of Docker Swarm.

Docker Swarm is an easy-to-use orchestrator for many network engineers, offering lower barriers to entry and fewer commands than Kubernetes. Swarm users are encouraged to use Docker infrastructure, but are not blocked from using other infrastructures.

In recent years, the two technologies have increasingly been used together. Docker is used to create, run and manage containers on a single operating system; Kubernetes then automates provisioning, networking, load balancing, security and scaling of those containers across nodes from a single dashboard.

Mirantis, which acquired the Docker Enterprise business in late 2019, initially signaled its intent to focus on Kubernetes. However, it later pledged to continue to support and expand the enterprise version of Docker Swarm.

Kubernetes vs. Mesos

Apache Mesos, an open source cluster manager, emphasizes running containers alongside other workloads, utilizing pods, and it easily integrates with machine learning and big data tools such as Cassandra, Kafka and Spark. Mesosphere DC/OS, a commercialized version of Mesos maintained by D2iQ, has partnered with major vendors such as Hewlett Packard Enterprise, Microsoft and Dell EMC.

Mesosphere existed prior to widespread interest in containerization and is therefore less focused on running containers. Kubernetes is a system to build, manage and run distributed applications, and it has more built-in capabilities for replication and service discovery than Mesosphere. Both platforms provide container federation.

Kubernetes vs. Jenkins

Jenkins and Kubernetes are both open source tools. Jenkins is a continuous integration server tool that offers easy installation, easy configuration and change set support. Developers often consider Jenkins due to these benefits, as well as internal hosting capabilities.

Kubernetes, as a container orchestration tool, is more lightweight, simple and accessible. It is built for a multi-cloud world, whether public, private or hybrid. As the leading solution for managing Docker containers, Kubernetes' open source power and simplicity lead many developers to opt for it.

Overall, Kubernetes may be the most developed of these systems in many situations, and it can be adopted as the upstream, open source version or as a proprietary, supported distribution. Kubernetes was designed from its inception as an environment for building distributed applications in containers.

Kubernetes support and enterprise product ecosystem

As an open source project, Kubernetes underpins several proprietary distributions and managed services from cloud vendors.

Red Hat OpenShift is a container application platform for enterprises based on Kubernetes and Docker. The offering targets fast application development, easier deployment and automation, while also supporting container storage and multi-tenancy.

CoreOS Tectonic is a Kubernetes-based container orchestration platform that claims enterprise-level features -- such as stable operations, access management and governance.

Other examples of Kubernetes distributions for production use include Rancher from Rancher Labs; the Canonical Distribution from Ubuntu; and public cloud-based tie-ins, such as Amazon Elastic Kubernetes Service, Azure Kubernetes Service and Google Kubernetes Engine (GKE).

Mirantis also offers an open source product ecosystem based on Kubernetes that can be used for the internet of things (IoT). The product is billed as managing IQRF networks and gateways for IoT applications, such as smart cities.

A brief overview of the Kubernetes ecosystem.

History of Kubernetes

In the past, organizations ran applications on physical servers with no way to define resource boundaries, which led to resource allocation issues. To address this, virtualization was introduced. It allows multiple virtual machines (VMs) to run simultaneously on a single physical server. Applications are isolated between VMs and gain a level of security, as one application's information cannot be freely accessed by another.

Containers are like virtual machines but with relaxed isolation properties. Like a VM, a container has its own file system, share of CPU and memory, and process space. Containers have grown popular because they can be created, deployed and integrated quickly across diverse environments.

Kubernetes was originally created by Google, which launched version 1.0 in 2015. It was inspired by the company's internal Borg data center management software. Since then, Kubernetes has attracted major contributors from various corners of the container industry. Because Kubernetes is open source, anyone can contribute to the project via one or more Kubernetes special interest groups (SIGs). Top corporations that commit code to the project include Red Hat, Rackspace and IBM.

Companies in the IT vendor landscape have developed support and integrations for the management platform, but work remains. Community members still attempt to fill gaps where vendor integration does not exist with open source tools.

The Cloud Native Computing Foundation (CNCF) -- which also has members such as Docker and Amazon Web Services (AWS) -- hosts Kubernetes. Kubernetes adopters range from cloud-based document management service Box to telecom giant Comcast and financial services conglomerate Fidelity Investments, as well as enterprises such as SAP's Concur Technologies and startups like Barkly Protects.

Future outlook

Recent updates in 2019 (versions 1.14 through 1.16) added or improved several areas to further support stability and production deployment. These include:

  • Support for Windows hosts and Windows-based Kubernetes nodes;
  • Extensibility and cluster lifecycle management;
  • Volume handling and metrics; and
  • Custom resource definitions.

However, industry interest has shifted away from updates to the core Kubernetes platform and toward higher-level areas where enterprises can benefit from container orchestration and cloud-native applications. These include sensitive workloads that require multi-tenant security, more fluid management of stateful applications such as databases, and GitOps application delivery -- version-controlled, automated releases of applications and software-defined infrastructure. For example, one Kubernetes feature now in beta is volume snapshots, which create a point-in-time copy of a volume through the API; that copy can be used to provision a new volume or restore an existing one to a prior state. Snapshots are key functionality for many stateful workloads, such as database operations.
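
Here is a hedged sketch of such a snapshot request, using the beta API group current at the time of writing; the snapshot class and claim names are hypothetical.

```yaml
# Sketch: request a point-in-time copy of an existing
# PersistentVolumeClaim via the beta VolumeSnapshot API.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: db-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: db-data     # hypothetical existing PVC
```

Restoring then amounts to creating a new PersistentVolumeClaim whose dataSource field references the snapshot.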

As organizations expand container deployment and orchestration to more production workloads, the need to monitor the various layers of the Kubernetes stack -- and the platform as a whole -- for performance and security will only increase.

Markets to serve these emerging areas with third-party tools have already formed, populated by startups (some through the CNCF) as well as experienced vendors such as D2iQ (formerly Mesosphere). At the same time, the Kubernetes ecosystem still comprises dozens of distributions and vendors, a field that is likely to narrow in the future.

This was last updated in March 2020
