The engineer who co-founded the Kubernetes project within Google, as well as the Cloud Native Computing Foundation, now looks to bring Kubernetes to the enterprise.
Craig McLuckie, along with other ex-Googlers, launched Heptio, which secured $8.5 million in funding last week. The goal is to make enterprise Kubernetes more accessible to IT administrators who want to use it across multiple cloud environments.
We caught up with McLuckie, CEO of Heptio, based in Seattle, to discuss the new venture, his role in the Cloud Native Computing Foundation and where he sees room for improvement with runtime alternatives to Docker.
What exactly is so hard about getting Kubernetes up and running today? What problem is Heptio trying to solve for enterprises?
Craig McLuckie: Kubernetes is an open source project that we built at Google, and we put a lot of time and care into making sure that the core technology was great. The way Google tends to use the technology is to deliver it as a service through Google Container Engine. Obviously, there are a lot of other folks in the community who are putting resources into making sure it works in a variety of environments. But there are a couple of gaps in terms of basic usability -- just the process of going from 0 mph to 10 mph: getting a basic Kubernetes cluster installed, getting it up and running, having access to better documentation. That very first experience could use a little bit of help.
Then, when you go beyond that initial experience, a lot of enterprise customers that we talk to need a little bit more help to identify and debug production issues, get high-level support -- a lot of them need access to training resources and such. So, we aspire to make Kubernetes more accessible and make sure that there's a great supported configuration of Kubernetes available in a variety of environments, whether it's in Amazon [Web Services], [Microsoft] Azure, Google, OpenStack or on-prem on bare metal.
Can you give me an example of a gap in that 0 mph to 10 mph scenario?
McLuckie: Just getting Kubernetes up and running on Amazon [Web Services] is hard. The way that installation works today is, it's a relatively simple set of install scripts that the community put together as a convenient framework. It was really written by enthusiasts -- folks who were passionate about it and wanted to make it work -- for enthusiasts, rather than for the general public. And so there's just some simple work to do to refine the installation process, make it easier to get a cluster up and running and support the updating of the cluster, which we would like to do, and we'd like to do it in the open. We want to give that back to the open source community and help them get over the hurdle of using Kubernetes.
Why are enterprises still interested in Kubernetes, given the learning curve?
McLuckie: Kubernetes solves a lot of difficult problems in the application orchestration space. And for enterprises, it offers up several unique advantages. The first is that it creates radically improved efficiencies. I consistently see enterprises that deploy Kubernetes achieve somewhere between 50% and 80% efficiency improvements over traditional virtual machine deployments, which is significant.
The second thing that it does is it creates a much more practical way to programmatically manage the lifecycle of applications. Effectively, you're able to start thinking about your application as something that's being run and operated as a service. And the mechanics of dealing with the lifecycle of the application, whether it's a scaling event, updating the application, monitoring the health and maintaining the health of the application, all of that gets taken care of for you by this platform. And it's a very effective and capable basis for application management, especially when you compare it to some of the more traditional DevOps tools that are out there.
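The lifecycle automation McLuckie describes rests on Kubernetes' declarative, desired-state model: you declare how many replicas should run, and controllers continuously reconcile observed state toward that spec, replacing unhealthy instances and scaling up or down. A minimal Python sketch of that reconciliation idea (names here are illustrative only, not the Kubernetes API):

```python
# Sketch of Kubernetes-style reconciliation: the platform drives the
# observed state toward the declared desired state. Illustrative only;
# real controllers operate through the Kubernetes API server.

def reconcile(desired_replicas, running):
    """Return the actions needed to converge `running` on the spec.

    `running` maps replica name -> healthy (bool) and is mutated to
    reflect the converged state, mimicking a control loop's effect.
    """
    actions = []
    # Replace unhealthy replicas, mirroring automated health maintenance.
    for name, healthy in list(running.items()):
        if not healthy:
            actions.append(("restart", name))
            running[name] = True
    # Scale up or down to match the declared replica count.
    while len(running) < desired_replicas:
        name = f"pod-{len(running)}"
        running[name] = True
        actions.append(("create", name))
    while len(running) > desired_replicas:
        name = sorted(running)[-1]
        del running[name]
        actions.append(("delete", name))
    return actions

state = {"pod-0": True, "pod-1": False}   # one replica has failed
print(reconcile(3, state))
# -> [('restart', 'pod-1'), ('create', 'pod-2')]
```

The point of the pattern is that scaling events, updates and health maintenance all reduce to the same loop: compare desired state to observed state and act on the difference.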
And then the final thing that I see a lot of enterprises being really excited about is that it creates an environment that's consistent between on premises and cloud, and between clouds. Enterprises are looking at this as a way to have a single toolchain that allows them to build applications they can run on premises, or it could run on any public cloud, without having to retrain their developers. That's exciting to a lot of companies as well.
It's in Google's interest to spread the word about Kubernetes. What made you leave the company to pursue this?
McLuckie: It is in Google's best interest to do this, and it was always the plan with Kubernetes from the start. The reason I stepped out of Google to do this in a startup is that there are just some places that Google can't practically go. It'd be very hard for Google to make sure that [enterprise] Kubernetes works really well on Amazon [Web Services], or on Microsoft [Azure], or to help coordinate so that the rendition that's running on Amazon has the same characteristics as the rendition that's running on Azure. It's in Google's best interest that this project succeed, and what I see us doing is highly complementary to [Google's] mission, in making Kubernetes more accessible in environments where Google can't practically go.
How do you see efforts like CRI-O coexisting with Docker in the market?
McLuckie: I've made no secret that we would like to see Docker be the standard image format. I've long been a supporter of Docker, and I've bet significantly on it, but there's a big difference between being a standard format and being the only runtime that can use it. I think there are a lot of advantages to having different runtimes that are relatively unencumbered that can be tailored for certain unique performance or security scenarios, or other areas. We as a community should rally around the Docker format itself, because they've done a great job of creating something that's well-done and people like it, but I don't think the community should be excluded from continuing to innovate on the runtime environment for that image. I'd love to see a number of these emerge as a vector of innovation.
Where is there room for innovation in terms of performance or security scenarios?
McLuckie: A lot of interesting work could be done to establish a deeper chain of trust to the physical infrastructure where something's running. A lot of work could be done around license management for containers in the security space. And there's a lot of work that could be done to optimize the operational viability of Docker. It's a technology that ties together a couple of things in a way that makes it harder to update. For example, the lifecycles of all the containers being run by Docker are tied to the lifecycle of the Docker daemon, which means if you update the daemon, you have to tear down all the containers -- undesirable from an operations perspective. But I'm sure that's something Docker could address in time.
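A rough illustration of the coupling problem described above (this is not Docker's actual architecture): if container processes live in their parent daemon's session, restarting the daemon can take them down with it. Detaching each container into its own session -- conceptually, the role of a per-container shim process -- lets workloads outlive a daemon restart:

```python
# Illustrative sketch: a process started in its own session is not
# torn down when its launcher exits or restarts, which is the design
# property a container runtime needs for daemon updates.
import subprocess
import sys
import time

def launch_detached(cmd):
    # start_new_session=True puts the child in a new session, so it is
    # decoupled from the lifecycle of the process that launched it.
    return subprocess.Popen(cmd, start_new_session=True)

container = launch_detached(
    [sys.executable, "-c", "import time; time.sleep(3)"])
time.sleep(0.2)
print(container.poll())   # None -> the "container" is still running
container.terminate()
container.wait()
```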
From a performance perspective, as often as not, depending on what workload you're running, there are ways to tie it better to the way that networking or storage is integrated. I've seen a lot of different ways that people tend to run and optimize workloads. A good example of this would be container activation. With [Amazon Web Services] Lambda functions that we see today, one of the key overarching scenarios is how long it takes to activate a piece of code running in Lambda. One of the optimizations I'd love to see the community drive is ways to very efficiently layer a code function into a warm container, and then be very efficient around memory management there. That's something I could see somebody building, and I think it would be wonderful for the community. It wouldn't be the Docker runtime, but it would be uniquely useful to everyone.
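The warm-container activation pattern McLuckie sketches can be illustrated in miniature: instead of paying start-up cost on every invocation (a cold start), a pre-warmed worker loads the function code once and reuses it. All names below are hypothetical, not any real runtime's API:

```python
# Sketch of "layering a code function into a warm container": the
# worker stays resident, and function code is compiled and cached on
# first activation so later invocations reuse the warm state.

class WarmWorker:
    def __init__(self):
        self.cache = {}          # function name -> compiled code object

    def activate(self, name, source):
        """Load a function into the warm worker, compiling only once."""
        if name not in self.cache:
            self.cache[name] = compile(source, name, "exec")  # cold path
        scope = {}
        exec(self.cache[name], scope)                         # warm path
        return scope["handler"]

worker = WarmWorker()
src = "def handler(event):\n    return event * 2"

handler = worker.activate("double", src)  # first call: compile + load
print(handler(21))                        # -> 42
handler = worker.activate("double", src)  # reuses the cached code object
```

The cache stands in for the "warm" state; the memory-management question McLuckie raises is essentially how aggressively a runtime can share and evict that state across many functions.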