AUSTIN, Texas -- Chef CTO Adam Jacob saw the writing on the wall almost a year ago.
As Docker popularized containers, applications and IT infrastructure began to change rapidly, becoming more portable and faster-moving. Golden images and quick startups were becoming the order of the day. So, Jacob, the creator of Chef and co-founder of Seattle-based Chef Software Inc., went back to the drawing board to write a new product from scratch, called Habitat, which was released last month.
We caught up with Jacob this week at ChefConf here to ask some probing questions about Chef Habitat, how it works and its benefits in a containerized world.
Chef Habitat uses a packaging format to bundle dependencies with the app and make it portable across systems. Is that like Linux packages?
Adam Jacob: By looking at the application's whole lifecycle, one of the things that you discover is -- for example, let's say there's an OpenSSL vulnerability, which happens with some regularity, like Heartbleed. How do you know which applications ... are using [Secure Sockets Layer] SSL and are using a vulnerable version? Let's say I'm using Red Hat, and I look at [Red Hat Package Manager] RPM. And I look at the packaging database that Red Hat ships, and I check to see if I'm using a vulnerable version of SSL, and then Red Hat ships me a fix. The bummer there is that the applications have to link against that library, and so they read that thing into memory. But they're not going to reread it into memory because you changed it on disk. So, because you patched the server by installing the right OpenSSL, what services need to restart so that, in memory, they have the right version? How do you know? Did they restart?
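Jacob's point about on-disk versus in-memory versions can be made concrete on Linux. The check below is a sketch of one common approach, not something described in the interview: after patching OpenSSL on disk, any process still mapping the old, now-deleted library file is still running the vulnerable code and needs a restart.

```shell
# Find processes still mapping a deleted (i.e., since-replaced) libssl.
# After patching OpenSSL on disk, anything listed here still has the
# old library in memory and must be restarted to pick up the fix.
grep -l 'libssl.*(deleted)' /proc/[0-9]*/maps 2>/dev/null \
  | cut -d/ -f3 \
  | paste -sd, - \
  | xargs -r ps -o pid,comm -p
```

Habitat's answer, as Jacob describes, is to make this question unnecessary by building the dependency into the package itself.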
Most of the way we evaluate packaging is just by saying, 'Do you have the right package installed?' -- which is true, but insufficient. There are a bunch of ways that people are trying to solve that problem: Most of them start at the end of the process, where they try to look at a system after it's been built and try to tell you if it's secure or not.
[Chef] Habitat flips that over and says, 'Well, actually, if what we needed from the beginning is that there's only one of two states, either it's running the right version of OpenSSL or it's not, we should build a build system that says your application will only be in one of those two states -- secure or not secure.' And, by definition, if you are running the secure version of SSL, you are running the right version in memory, because that's the only version you could be running.
That's an example of where we had to go a little deep. We didn't start out like, 'Oh, wow, I can't wait to build a new packaging system for Linux.' That wasn't my goal. But one of the things you have to do to an application as part of its lifecycle is ask and answer that question.
So, if I'm deploying at scale, how do I wrap the Habitat packages around, say, hundreds or thousands of applications?
Jacob: If you have hundreds or thousands of applications, most likely, you're doing some kind of continuous integration [CI]. Mostly, what that's doing is generating a build. With Habitat, you write what's called a plan. The plan is what you call to build the software -- it includes the software [and] what's configurable about it as it goes through its life, and that becomes what you generate out of CI. That's the artifact that you move around through the rest of the lifecycle. For each of those things, you would write a Habitat plan.
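As a sketch of what such a plan looks like, here is a minimal, hypothetical plan.sh. The origin, package name, dependencies and build steps are illustrative assumptions, not details from the interview.

```shell
# plan.sh -- a minimal, hypothetical Habitat plan for a made-up service.
pkg_origin=myorigin             # illustrative origin, not a real one
pkg_name=sample-app
pkg_version="1.0.0"
pkg_deps=(core/openssl)         # runtime deps are pinned per release,
                                # which is what makes the SSL question answerable
pkg_build_deps=(core/make core/gcc)
pkg_bin_dirs=(bin)

do_build() {
  make
}

do_install() {
  cp sample-app "${pkg_prefix}/bin/"
}
```

Building this plan produces a versioned artifact that CI can promote through the rest of the lifecycle.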
Can you talk about how it relates to containers?
Jacob: A container takes some slice of software and runs it in a namespace. Our job is to make applications that are easy to build, deploy and manage. By doing that, it turns out that it's very easy to then put those applications in containers, because the way you build, deploy and manage them is well-suited to being contained.
For example, I have a list of all the dependencies for your application. So, you say you want to run it in a container -- to me, that's just a packaging format, so you export it and you run it in a container. And that might make life easier for you, because then you can use container schedulers -- you can use [Apache] Mesos or Docker to handle the decisions about where things run -- but you've still got all the manageability benefits that Habitat gives you. What those systems don't give you is Habitat's supervisor, which does service discovery and manages your topology and update strategies. Those things don't exist in Docker in the way that they exist in Habitat.
So, Habitat makes containers better, and it's also useful on bare-metal operating systems; and in both cases, you manage the application in the same way. The decision to deploy an application on containers is now separate. You're not doing it just because that's the only way to make it manageable software.
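To illustrate the "just a packaging format" point, the workflow with Habitat's CLI might look like the following sketch; the package identifier is a made-up placeholder, and the commands assume Habitat's CLI as shipped at release.

```shell
# Build the package from its plan, producing a versioned artifact.
hab pkg build .

# Export the same artifact as a Docker image; "myorigin/sample-app"
# is a placeholder package identifier.
hab pkg export docker myorigin/sample-app

# The resulting image runs under Docker (or a scheduler such as Mesos),
# with the Habitat supervisor managing the service inside the container.
docker run myorigin/sample-app
```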
In general, configuration management software is up against things like Google Kubernetes and Mesos for deploying and managing containerized environments. Is Habitat Chef's answer to those environments?
Jacob: The question is interesting. Why do you need an answer to those environments? Cluster managers are cool. There are real benefits to having a cluster manager manage your application. But let's go back to the hundreds or thousands of applications. The introduction of that cluster manager can be good for one application, but it leads to yet another way of thinking about how we manage the software. If you think about the people that have to manage that software, now they need to understand the cluster manager and the service discovery bus it connects to, which will be different from what they do in the traditional enterprise -- and that's because the management and deployment of those services is coupled with the infrastructure, and that's a mistake.
What Habitat does is say, 'Look, use a cluster manager.' But what I want to be able to say is, from the point of view of someone that's trying to connect with that service, 'Why do I have to know that it's running in Kubernetes?'
A good example here is if I want to discover a physical service that's running outside Kubernetes. Kubernetes has now written a bunch of code, so that I can write more code, so that I can shim those services into this Kubernetes discovery system. That is a leaky abstraction. That doesn't make sense. The application itself should be able to do that, and it shouldn't care what service it's running in. I think those platforms are great, and there are a bunch of problems that gain huge benefit from using them. But you also need a consistent way to build, deploy and manage your applications, regardless of what infrastructure they're on top of.
The other thing I've heard from users is that in an environment where containers spin up in seconds, Chef can take minutes. Does Habitat solve for that?
Jacob: The reason a container spins up in seconds is because you baked a golden image. The problems with golden images remain: How does it change? And if it changes, how do we know? And [container users] are like, 'It never changes, just write a new image.' [Chef] Habitat solves for that by capturing the property we actually wanted out of the golden image: It needed to start fast, so it just needed to be ready, and it needed to contain only what I said was in the package. Habitat does that at the packaging layer. So, if you're saying, 'I want to run this package,' Habitat, regardless of where you do it, will just run that release of that service. So, it's immutable and golden in the same way that a container image is, [and] it gives you similar benefits.
The reason that Chef takes time is that you're trying to manage drift. That's the thing tools like Chef do that doesn't happen in golden-image land, where you don't manage for drift at all. [Chef] Habitat manages that drift for the application -- things like configuration and tuning can drift -- in the same way that Chef Server does, just in a much tighter loop, and only for its zone. It doesn't manage the whole server -- just that one instance of the application.
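As an illustration of that tighter loop, Habitat's CLI can push a configuration change to a running service group, which the supervisors pick up and apply without a full convergence run. The service group name and the TOML key below are made-up examples, not from the interview.

```shell
# Apply a new configuration (version 2) to every supervisor running
# the "sample-app.default" service group; names here are illustrative.
echo 'worker_count = 8' | hab config apply sample-app.default 2
```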