
Docker and microservices introduce new operational hurdles

Containers and microservices bring flexibility and speed to development environments -- but they can also be a headache for IT operations teams.


Monitoring, logging and security are just some of the responsibilities of an IT operations team. But as their developer counterparts increasingly adopt a Docker and microservices model, ops teams must consider whether their usual operational tools are sufficient for the task at hand.

In a microservices world, monolithic applications that used to comprise multiple functions are broken apart into many smaller, discrete services. Developers can make changes to an application or business service without having to overhaul it, and can easily scale a service up or down in the cloud on demand. Meanwhile, the collection of microservices works in concert to deliver the same functionality as the erstwhile "monolith," and end users are none the wiser.

Missing in this discussion, however, is the impact of microservices-based applications on IT operations, and what those teams can do to ensure the applications stay up and running and perform according to plan.

Motus, in Boston, offers a vehicle reimbursement service for companies with large fleets of drivers. About two years ago, the company decided to take a monolithic legacy application built in PHP and re-architect it along microservices lines.

"The legacy PHP app wasn't very flexible; it didn't allow us to respond to the needs of our customers," said Scott Rankin, Motus vice president of technology. Now that the app has been rebuilt as a series of Docker containers, "dev and QA can spin up new environments very quickly, and servers have gotten a lot simpler," because much of the configuration has moved out of Puppet and into the Docker containers themselves.

The flip side of a Docker and microservices model

That speed and flexibility, however, came at a cost.

"Complexity has skyrocketed," Rankin said. Whereas Motus used to provide its service from just three physical servers, one each for PHP, Java and the database, the company now manages 30 services on a 12-node cluster running Mesosphere, a container management and orchestration platform.

Besides the sheer volume of things to look after, visibility into the stack suffered in the transition to Docker and microservices. Motus was a New Relic customer, but the monitoring service didn't initially support Docker, and "we lost traceability between apps and servers," Rankin said.

As an early member of the Docker Ecosystem Technology Program, New Relic soon added support for Docker, fixing Motus' visibility issues. With that support in place, Motus can use New Relic to verify that containers are properly sized, for instance, which helps keep costs in check.

But generally speaking, not all monitoring tools are fully up to speed with Docker and containers, and no one has fully solved the end-to-end visibility problem that is created when you break up a large application into many smaller components.

The forest for the trees

The ability to get performance information about Docker containers is relatively new. In February, Docker released version 1.5, which introduced a Stats API that provides real-time CPU, memory, network I/O and block I/O utilization figures for a given container.
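The Stats API returns a JSON document per container, and a common operational task is turning two successive CPU counters from that document into a utilization percentage, the same calculation the `docker stats` CLI performs. A minimal sketch in Python, assuming the payload shape of the current Docker Engine API (field names such as `precpu_stats` and `online_cpus` were added over time and may vary by version; the sample numbers are purely illustrative):

```python
def cpu_percent(stats: dict) -> float:
    """Derive a CPU-utilization percentage from one Stats API sample.

    Compares the current reading ("cpu_stats") with the previous one
    ("precpu_stats"), the same delta-based approach `docker stats` uses.
    The result is aggregated across cores, so it can exceed 100%.
    """
    cpu_delta = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
                 - stats["precpu_stats"]["cpu_usage"]["total_usage"])
    system_delta = (stats["cpu_stats"]["system_cpu_usage"]
                    - stats["precpu_stats"]["system_cpu_usage"])
    # "online_cpus" is absent in older API versions; fall back to 1.
    num_cpus = stats["cpu_stats"].get("online_cpus", 1)
    if system_delta <= 0:
        return 0.0
    return (cpu_delta / system_delta) * num_cpus * 100.0

# A trimmed-down sample in the shape the API returns (illustrative numbers):
sample = {
    "cpu_stats": {
        "cpu_usage": {"total_usage": 400_000_000},
        "system_cpu_usage": 10_000_000_000,
        "online_cpus": 4,
    },
    "precpu_stats": {
        "cpu_usage": {"total_usage": 200_000_000},
        "system_cpu_usage": 9_000_000_000,
    },
}

print(round(cpu_percent(sample), 1))  # prints 80.0
```

Memory is simpler: the same document carries `memory_stats.usage` and `memory_stats.limit`, so utilization is a single division.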

The information exposed by the Stats API was warmly received by operational tooling vendors and "was a great place to start," said Scott Johnson, Docker senior vice president of product management. Johnson said to expect further enhancements over time, as well as additional partners incorporating that information into their tools.

But infrastructure metrics aren't really the problem, said Sheng Liang, CEO and co-founder of Rancher Labs, which develops infrastructure services for Docker environments. The infrastructure metrics that Docker provides are "fairly complete," akin to the information you would get in Windows Task Manager.

"At this level, I consider the problem reasonably solved," Liang said.

Further, thanks to automation and best practices from DevOps, containers can be monitored right out of the gate, Liang added.

"Before, monitoring was deployed manually," he said. But now, because deploying container-based applications is so much easier, monitoring gets incorporated into the deployment process itself, he said, either by including an agent inside the application container or by deploying the agent as a separate container alongside it.
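The second pattern Liang describes, running the agent as a separate container next to the application, is commonly called a "sidecar." A minimal Docker Compose sketch of the idea, where the image names (`mycorp/web`, `mycorp/metrics-agent`) are purely hypothetical stand-ins:

```yaml
version: "2"
services:
  web:                        # the application container
    image: mycorp/web:latest
    ports:
      - "8080:8080"
  metrics-agent:              # sidecar: monitors the app from its own container
    image: mycorp/metrics-agent:latest
    volumes:
      # Read-only access to the Docker socket lets the agent query the
      # Stats API for every container on the host.
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - web
```

Because the agent ships with the application definition, every deployment of the service is monitored from the moment it starts, which is exactly the "out of the gate" behavior Liang points to.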

Likewise, organizations that run containers on Amazon Web Services' EC2 Container Service (ECS) get monitoring built in through AWS CloudWatch. "You don't have to worry about it; it's just there and you can start using it," Liang said.

Why does container monitoring require a new breed of IT tool? Click here for part two.

Next Steps

Five steps for moving apps to Docker

New features on the Docker technology roadmap

Getting to know Docker containers out of the box
