
Brendan Burns talks Kubernetes on Azure work, Windows beta

Kubernetes co-creator and Microsoft distinguished engineer Brendan Burns discussed how Kubernetes on Azure evolved and invited Amazon to an open source serverless project.

AUSTIN, Texas -- Kubernetes on Azure featured prominently amid a flood of container news this week, as Microsoft hatches plans for multi-cloud workload portability and management.

Kubernetes 1.9, set to ship this month, will bring Kubernetes for Windows closer to parity with support for Linux containers. Shared IP addresses, for example, will allow more than one Windows container to share a network connection within a pod. There's still plenty of work to deliver a stable release in 2018, but it's already in production use on Azure.

Microsoft also disclosed work in the Kubernetes community to support Heptio's Ark backup and recovery utility, as well as Tigera's Calico network security policies, and to create a bridge between Kubernetes and Azure Container Instances, called Virtual Kubelet.

Beth Pariseau, senior news writer at SearchITOperations, sat down at KubeCon here this week with Brendan Burns, one of the co-creators of Kubernetes at Google, who made a high-profile move to Microsoft as director of engineering 18 months ago. Burns, now a distinguished engineer at Microsoft, talked about the Azure team's current projects, extended a collaborative olive branch to Amazon and shared some of what's ahead on the roadmap for Kubernetes on Azure.

What's the state of Kubernetes for Windows now?


Brendan Burns: It's going to beta in the 1.9 release. We run it right now. Azure Cloud Shell [is] an interactive, web-based terminal you can use to access Azure -- if you use PowerShell, you're running inside of a Windows container running on Kubernetes. People can take their applications to production with it as it stands.

We're still working to close some gaps between Linux container support and Windows container support, just because we started on it a little bit later, but we're getting there rapidly. With the Windows 1709 release, the gaps are rapidly [being closed].

What gaps still exist?

Burns: We're still completing work on mounting files into containers on Windows. Prior to 1709, you could only have one container in a pod, but that's been fixed in the 1709 release, the Fall Creators Update.
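With multi-container pods now supported on Windows Server 1709, a pod can hold more than one Windows container sharing the pod's network. The manifest below is a hypothetical sketch of that setup; the pod name, images, and the `beta.kubernetes.io/os` node label are illustrative assumptions, not details from the interview.

```yaml
# Hypothetical manifest: two Windows containers in one pod, sharing the
# pod's IP address -- possible once the nodes run Windows Server 1709.
apiVersion: v1
kind: Pod
metadata:
  name: win-multi-container            # illustrative name
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows     # schedule onto a Windows node
  containers:
  - name: web
    image: microsoft/iis:windowsservercore-1709
    ports:
    - containerPort: 80                # both containers share the pod's network
  - name: sidecar
    image: microsoft/windowsservercore:1709
    command: ["powershell", "-Command", "Start-Sleep -Seconds 3600"]
```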

There was a networking difference between Linux and Windows that also got in the way.

Burns: There was the one network per container, per pod, which definitely was not the same as Linux, but that's been fixed. The proxy that does the service load balancing is a user-space proxy, rather than using iptables. We're working with the Windows team to figure out how we can use kernel capabilities to work that out, as well. We work closely with Apprenda, which did the work in kubeadm to join Windows nodes to a cluster on premises.

What else is going on with Kubernetes on Azure?

Burns: We announced serverless containers a few months ago with Azure Container Instances. Now, we're moving forward with the work we did to figure out how to connect those serverless containers to Kubernetes. We see an industry progression toward this technology being available to everybody. We kicked off a project, Virtual Kubelet, that wasn't just about Azure Container Instances, but about serverless container instances in general, and how we can have a general discussion out in the open with the Kubernetes community.

Ultimately, we would really like it to be a subproject within the Kubernetes community. We will work with the scheduling special interest group, and we hope to work with all of the different serverless container platforms that have been announced.

Does that include Amazon?

Burns: If they're willing to come and participate in an open source community, we'd love to have them.

What will Virtual Kubelet do for the end user?

Burns: You can schedule your containers into the serverless container infrastructure like Azure Container Instances [and not] worry about the operating system at all. In fact, you don't even see the operating system. You only pay while your containers are active. So, if you combine Azure Container Instances with the Azure Kubernetes Service and the connector, you could get to a world where you may have some serving jobs that run on dedicated virtual machines, but all of your test jobs and batch infrastructure runs on Azure Container Instances, and you don't pay for those resources when they're not in active use.

[That] can represent pretty significant savings for people who have bursty or intermittent workflows. You don't want a virtual machine that sits and runs all the time just so you can run something every hour. But you might want to use a scheduled job in Kubernetes, so the combination of Kubernetes and Azure Container Instances allows you to run that scheduled job and only pay for the five minutes that the container is up and running without having to do anything else.
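The scheduled-job scenario Burns describes could look roughly like the sketch below: a Kubernetes CronJob whose pods are directed at the Virtual Kubelet node, so each run is billed only while the container executes on Azure Container Instances. The node name, toleration key, and image are assumptions that depend on how the connector is deployed, not fixed values from the interview.

```yaml
# Hypothetical sketch: an hourly CronJob placed on the Virtual Kubelet
# node so it runs as a serverless container on ACI.
apiVersion: batch/v1beta1              # CronJob API group in the 1.8/1.9 era
kind: CronJob
metadata:
  name: hourly-report                  # illustrative name
spec:
  schedule: "0 * * * *"                # run at the top of every hour
  jobTemplate:
    spec:
      template:
        spec:
          nodeName: virtual-kubelet    # assumed name of the virtual node
          tolerations:
          - key: azure.com/aci         # assumed taint key on the virtual node
            effect: NoSchedule
          containers:
          - name: report
            image: myregistry/report-job:latest   # illustrative image
          restartPolicy: OnFailure
```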

How is that different from how ACI [Azure Container Instances] works right now?

Burns: It's the orchestration management. ACI doesn't have a notion of a scheduled job or replicating things. Kubernetes provides the orchestration primitives. ACI provides the runtime. We could have built lots of fancy orchestration on top of ACI that was bespoke and dedicated to ACI, but we believe strongly it's better to integrate with a commodity open source API, with all the tooling around it, than to build our own. Building this bridge is a way of building orchestration capabilities for ACI, but in an open way.

What are you up to with Heptio?

Burns: They're coming out with a product called Ark, which is a backup and recovery solution. It's like OneDrive for your Kubernetes cluster. Just like OneDrive is a cloud backup solution you can use for any of the files on your computer, Ark with Azure is a way to back up an on-prem Kubernetes cluster into Azure and restore it. Or, [you can] even take an on-prem cluster, back it up to Azure and then migrate it to an Azure Container Service cluster. It addresses a need that's unmet right now, and it's a sign of Kubernetes maturing that we're now thinking about how we do disaster recovery.
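The backup-and-migrate flow Burns describes maps onto Ark's command-line interface, sketched below. The backup name and namespace are illustrative, and flags may differ between Ark releases, so treat this as a rough shape rather than exact syntax.

```
# Illustrative Ark CLI usage: back up a namespace, then restore it
# (possibly into a different cluster pointed at the same object store).
ark backup create onprem-backup --include-namespaces production
ark restore create --from-backup onprem-backup
```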

Kubernetes has become a commodity. It's available as a service on every single public cloud. It's becoming like SQL or anything else; you just assume that you can use it. The interesting thing that we're looking at is what you can build on top in terms of capabilities.


How does a cloud provider like Azure differentiate in that world?

Burns: You differentiate based on the quality of service and the pricing, as well as the integration with other services. Azure has policy enforcement, which can ensure every database has particular characteristics. That's a differentiated feature that you get, even though you consume it through the Service Broker API.

You're going to get the differentiated enterprise management features that Azure provides. It also doesn't mean that you only use it with services that are available on every cloud. With Cosmos DB, Azure provides a Mongo API. There's no other hosted Mongo that's out there. So, with Service Broker in Azure, you can connect to another cloud, and it would spin up containers. But that's less attractive than a hosted service, because you're on the hook for management.

Does Microsoft have any tool that offers a 'god view' of all these different services and how they're managed?

Burns: The Azure Portal has a customizable dashboard where you can drag and drop tiles that represent all the different resources into a dashboard and make multiple dashboards. That's probably the closest.

As we see more people coming into the cloud and as people build bigger services, the visualization and slicing and dicing through your infrastructure has become a bigger problem. We're going to have to continue to raise the bar on how we present and visualize stuff to the end user. I'm optimistic that we'll see tools for that sort of management at scale out of Azure.

How do you avoid overwhelming customers with everything that's going on within the infrastructure and services?

Burns: We try to provide tailored documentation and experiences, so we don't just dump everything on them all at once, but that's a challenge. There's a lot for people to learn and places they can go, and you have to enable people to focus on the specific tasks they might want to accomplish.

In my keynote, I [talked] about building distributed systems for people who might not traditionally have been distributed systems developers. It's too hard to build these systems right now, and Kubernetes makes it easier for the people who already know how to do it, but it doesn't really make it easier for people who don't. We're going to have to build more layers on top to do that.

I'm launching a new open source project called Metaparticle dedicated to that. It's like the TypeScript project, in the sense that it's an independent entity, not an Azure thing. I've been tinkering on it in my basement for a year or so.

Beth Pariseau is senior news writer for TechTarget's Data Center and Virtualization Media Group. Write to her at [email protected] or follow @PariseauTT on Twitter.
