
IT pros get comfortable with Kubernetes in production

IT pros who have deployed Kubernetes in production reflect on the changes, challenges and capabilities they've seen, and predict what's next.

IT pros who've run production workloads with Kubernetes for at least a year say it can open up frontiers for IT operations within their organizations.

It's easier to find instances of Kubernetes in production in the enterprise today than it was just a year ago, thanks to the proliferation of commercial platforms that package this open source container orchestration software for enterprise use, such as CoreOS Tectonic and Rancher Labs' container management product, Rancher. In the two years since the initial release of Kubernetes, early adopters said the platform has facilitated big changes in high availability (HA) and application portability within their organizations.

For example, disaster recovery (DR) across availability zones (AZs) in the Amazon Web Services (AWS) public cloud was notoriously unwieldy with VM-based approaches. Yet it has become the standard for Kubernetes deployments at SAP's Concur Technologies during the last 18 months.

Concur rolled out the open source, upstream Kubernetes project in production to support a receipt image service in December 2015, at a time when clusters that spanned multiple AZs for HA were largely unheard-of, said Dale Ragan, principal software engineer for the firm, based in Bellevue, Wash.

Concur's Dale Ragan shared details of the company's use of Kubernetes in production with TechTarget in May 2016, delineating what he liked -- openness for third-party tool plug-ins -- as well as the limitations Concur was working around.

"We wanted to prepare for HA, running it across AZs, rather than one cluster per AZ, which is how other people do it," Ragan said. "It's been pretty successful -- we hardly ever have any issues with it."

Ragan's team seeks 99.999% uptime for the receipt image service, and it's on the verge of meeting this goal now with Kubernetes in production, Ragan said.

Kubernetes in production offers multicloud multi-tenancy

Kubernetes has spread to other teams within Concur, though those teams run multi-tenant clusters based on CoreOS's Tectonic, while Ragan's team sticks to a single-tenant cluster still tied to upstream Kubernetes. The goal is to move that cluster to Tectonic as well, though the company must still work out licensing and testing to make sure the receipt imaging app runs well there, Ragan said. CoreOS has prepared for this transition with recent support for the Terraform infrastructure-as-code tool, which Ragan's team uses to underpin its Kubernetes cluster.

CoreOS just released a version of Tectonic that supports automated cluster and HA failover across AWS and Microsoft Azure clouds, which is where Concur will take its workloads next, Ragan said.

"Using other cloud providers is a big goal of ours, whether it's for disaster recovery or just to run a different cluster on another cloud for HA," Ragan said. With this in mind, Concur has created its own tool, called Scipian, to monitor resources across multiple infrastructures, which it will soon release to the open source community.

Ragan said the biggest change in the company's approach to Kubernetes in production has been a move to multi-tenancy in newer Tectonic clusters and the division of shared infrastructures into consumable pieces with role-based access. A network administrator, for example, can now provision a network that developers consume as they roll out Kubernetes clusters, without those developers being granted administrative access.
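That division of duties maps onto Kubernetes namespaces and role-based access control (RBAC). As a rough illustration -- with hypothetical team and group names, not Concur's actual configuration -- a shared cluster might carve out a namespace per team and bind each team's developer group to the built-in edit role, so developers can deploy workloads without receiving cluster-wide administrative rights:

```yaml
# Hypothetical namespace carved out of shared infrastructure for one team.
apiVersion: v1
kind: Namespace
metadata:
  name: team-receipts
---
# Grant the team's developer group edit rights inside that namespace only;
# cluster-scoped resources such as the network stay with the administrators.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-receipts-developers
  namespace: team-receipts
subjects:
- kind: Group
  name: receipts-developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

The built-in edit ClusterRole covers day-to-day deployment work; binding it inside a single namespace, rather than cluster-wide, is what keeps the shared infrastructure consumable but not administrable.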

In the next two years, Ragan said he expects to bring the company's databases into the Kubernetes fold to also gain container-based HA and DR across clouds. For this to happen, the Kubernetes 1.7 additions to StatefulSets and secrets management must emerge from alpha and beta versions as soon as possible; Ragan said he hopes to roll out those features before the end of this year.
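A StatefulSet is what makes container-based databases plausible: each database pod gets a stable network identity and its own persistent volume that survives rescheduling. A minimal sketch, with hypothetical names and storage sizes (and written against the current apps/v1 API rather than the beta API of the Kubernetes 1.7 era), might look like:

```yaml
# Hypothetical three-replica database behind a headless service.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: receipts-db
spec:
  serviceName: receipts-db   # gives pods stable DNS names: receipts-db-0, -1, -2
  replicas: 3
  selector:
    matchLabels:
      app: receipts-db
  template:
    metadata:
      labels:
        app: receipts-db
    spec:
      containers:
      - name: postgres
        image: postgres:9.6
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  # Each replica gets its own PersistentVolumeClaim, retained across restarts.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```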

Kubernetes in production favors services-oriented approach

A Dallas-based consulting firm uses HA across cloud data centers and service providers for the clients it helps to deploy containers. During the most recent Amazon outage, its clients failed over between AWS and the public cloud providers OVH and Linode through Rancher's orchestration of Kubernetes clusters, said E.T. Cook, chief advocate for the firm.

"With Rancher, you can orchestrate domains across multiple data centers or providers," Cook said. "It just treats them all as one giant intranetwork."

In the next two years, Cook said he expects Rancher will make not just cloud infrastructures but also container orchestration platforms such as Docker Swarm and Kubernetes interchangeable with little effort. He evaluates the two platforms frequently because they change so fast, he said, and it's too soon to pick a winner in the container orchestration market, despite the momentum behind Kubernetes in production at enterprises.

Docker's most recent Enterprise Edition release favors enterprise approaches to software architectures that are stateful and based on permanent stacks of resources. That contrasts with Kubernetes, which Cook said he sees as geared toward ephemeral, stateless workloads, despite its recent additions to StatefulSets and access control features.


"Much of the time, there's no functional difference between Docker Swarm and Kubernetes, but they have fundamentally different ways of getting to that result," Cook said.

The philosophy behind Kubernetes favors API-based service architecture, where interactions between services are often payloads, and "minions" scale up as loads and queues increase, Cook said. In Docker, by contrast, the user sets up a load balancer, which then forwards requests to scaled services.
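That philosophy shows up directly in the Kubernetes API: clients address a stable Service, and the replicas behind it grow or shrink with load. A rough sketch -- using hypothetical names, not anything from Concur or Cook's clients -- pairs a Service with a HorizontalPodAutoscaler:

```yaml
# The Service is the stable, addressable endpoint; clients never see the pods.
apiVersion: v1
kind: Service
metadata:
  name: receipt-api
spec:
  selector:
    app: receipt-api
  ports:
  - port: 80
    targetPort: 8080
---
# Replicas behind the Service scale with CPU load; the endpoint is unchanged.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: receipt-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: receipt-api
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

The point of the sketch is the inversion Cook describes: scaling is a property attached to the service's workload, not something configured on a load balancer in front of it.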

"The services themselves are first-class citizens, and the load balancers expose the services -- whereas in the Kubernetes philosophy, the service or endpoint itself is the first-class citizen," Cook said. "Requests are managed by the services themselves in Kubernetes, whereas in Docker, scaling and routing are done using load balancers to replicated instances of that service."

The two platforms now compete for enterprise hearts and minds, but before too long, Cook said he thinks it might make sense for organizations to use each for different tasks -- perhaps Docker serving the web front end and Kubernetes powering the back-end processing.

Ultimately, Cook said he expects Kubernetes to find a long-term niche backing serverless deployments for cloud providers and midsize organizations, while Docker finds its home within the largest enterprises that have the critical mass to focus on scaled services. For now, though, he's hedging his bets.

"It's like the early days of HD DVD vs. Blu-ray," Cook said. "Long term, there may be another major momentum shift -- even though, right now, the market's behind Kubernetes."

Beth Pariseau is senior news writer for TechTarget's Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.

Next Steps

Stay ahead of ops transformation and learn serverless computing

Kubernetes updates security features; users wait for complexity reduction

Enterprises peer into containers for legacy app potential
