
Docker and microservices introduce new operational hurdles

Containers and microservices bring flexibility and speed to development environments -- but they can also be a headache for IT operations teams.

This article can also be found in the Premium Editorial Download: Modern Infrastructure: The merits of white box switching.

Monitoring, logging and security are just some of the responsibilities of an IT operations team. But as their developer counterparts increasingly adopt a Docker and microservices model, ops teams must consider whether their usual operational tools are sufficient for the task at hand.

In a microservices world, monolithic applications that used to comprise multiple functions are broken apart into many smaller, discrete services. Developers can make changes to an application or business service without having to overhaul it, and can easily scale a service up or down in the cloud on demand. Meanwhile, the collection of microservices works in concert to deliver the same functionality as the erstwhile "monolith," and end users are none the wiser.

Often missing from this discussion, however, is the impact of microservices-based applications on IT operations, and what those teams can do to ensure the applications stay up and running and perform according to plan.

Motus, in Boston, offers a vehicle reimbursement service for companies with large fleets of drivers. About two years ago, the company decided to take a monolithic legacy application built in PHP and re-architect it along more of a microservices model.

"The legacy PHP app wasn't very flexible, it didn't allow us to respond to the needs of our customers," said Scott Rankin, Motus vice president of technology. Now that the app has been rebuilt as a series of Docker containers, "dev and QA can spin up new environments very quickly, and servers have gotten a lot simpler," because much of the configuration has been moved out of Puppet configurations and into Docker containers.

The flip side of a Docker and microservices model

That speed and flexibility, however, came at a cost.

"Complexity has skyrocketed," Rankin said. Whereas Motus used to provide its service from just three physical servers (one each for PHP, Java and the database), the company now manages 30 services on a 12-node cluster running Mesosphere, a container management and orchestration platform.

Besides the sheer volume of things to look after, visibility into the stack suffered from the transition to Docker and microservices. Motus was a New Relic customer, but the monitoring service didn't initially support Docker, and "we lost traceability between apps and servers," Rankin said.

As an early member of the Docker Ecosystem Technology Program, New Relic soon added support for Docker, fixing Motus' visibility issues. With that support, Motus can turn to New Relic to evaluate containers, making sure they are properly sized to keep costs in check, for instance.

But generally speaking, not all monitoring tools are fully up to speed with Docker and containers, and no one has fully solved the end-to-end visibility problem that is created when you break up a large application into many smaller components.

The forest for the trees

Getting any performance information about Docker containers is a relatively new phenomenon. Back in February, Docker released version 1.5, which introduced the Stats API that provides real-time information about CPU, memory, network I/O and block I/O utilization for a given container.

The information exposed by the Stats API was warmly received by operational tooling vendors, and "was a great place to start," said Scott Johnson, Docker senior vice president of product management. Johnson said to expect additional enhancements over time, as well as additional partners incorporating that information into their tools.
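For a sense of what the Stats API exposes, here is a minimal sketch (in Python, with made-up sample numbers) of how a monitoring tool might turn the raw counters from a `GET /containers/<id>/stats` response into the CPU percentage a tool like `docker stats` displays. The field names follow the Stats API's JSON payload; the sample values and the function itself are illustrative, not Docker's own code:

```python
def cpu_percent(stats: dict) -> float:
    """Approximate the `docker stats` CPU calculation: the container's CPU
    usage delta divided by the host's, scaled by the number of CPUs."""
    cpu_delta = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
                 - stats["precpu_stats"]["cpu_usage"]["total_usage"])
    system_delta = (stats["cpu_stats"]["system_cpu_usage"]
                    - stats["precpu_stats"]["system_cpu_usage"])
    if system_delta <= 0 or cpu_delta < 0:
        return 0.0
    # Newer payloads report online_cpus; older ones list per-CPU counters.
    ncpus = (stats["cpu_stats"].get("online_cpus")
             or len(stats["cpu_stats"]["cpu_usage"].get("percpu_usage", [])))
    return (cpu_delta / system_delta) * ncpus * 100.0

# Hypothetical sample: two CPUs, container consumed 10% of one interval.
sample = {
    "precpu_stats": {"cpu_usage": {"total_usage": 100_000_000,
                                   "percpu_usage": [0, 0]},
                     "system_cpu_usage": 1_000_000_000},
    "cpu_stats": {"cpu_usage": {"total_usage": 300_000_000,
                                "percpu_usage": [0, 0]},
                  "system_cpu_usage": 3_000_000_000},
}
print(f"{cpu_percent(sample):.1f}%")  # prints 20.0%
```

Memory, network I/O and block I/O come back in the same response, so a tool can build a per-container dashboard from a single polling loop.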

But infrastructure metrics aren't really the problem, said Sheng Liang, CEO and co-founder of Rancher Labs, which develops infrastructure services for Docker environments. The infrastructure metrics that Docker provides are "fairly complete," akin to the information you would get in Windows Task Manager.

"At this level, I consider the problem reasonably solved," Liang said.

Further, thanks to automation and best practices from DevOps, containers can be monitored right out of the gate, Liang added.

"Before, monitoring was deployed manually," he said. But now, because deploying container-based applications is so much easier, monitoring gets incorporated as part of the deployment process, he said, either by including an agent as part of the container, or deploying it as a separate container alongside it.
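The second pattern Liang describes can be sketched in a Docker Compose file. Everything here is hypothetical -- the image names are placeholders, not a real product -- but it shows the shape of running a monitoring agent as a separate container alongside the application:

```yaml
version: "2"
services:
  web:
    image: example/web-app:latest
    ports:
      - "8080:8080"
  monitor:
    image: example/monitoring-agent:latest
    # The agent watches the Docker daemon itself, so it collects stats for
    # every container on the host, including `web`, with no manual setup.
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
```

Because the agent ships as part of the same deployment definition, monitoring comes up automatically every time the application does.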

Likewise, organizations that run containers on Amazon EC2 Container Service (ECS) also have monitoring built in, with AWS CloudWatch. "You don't have to worry about it -- it's just there and you can start using it," Liang said.

Part two of this article examines why container monitoring requires a new breed of IT tool.


This was last published in November 2015


Join the conversation


The usual operational tools used by ops teams are no longer sufficient for the monitoring, logging and security needs they face as a result of microservices and containerization. Tools are designed to address a specific problem and, when that problem changes or is refined, as in the case of containers and microservices, the tools themselves need to be retooled, so to speak. That doesn't mean they need to be discarded. In the case of monitoring, it really comes down to what granularity you need to monitor at. In many cases, legacy monitoring tools may be able to provide, with minimal change, the level of monitoring that microservices and containers require.
With the level of abstraction and the dispersion of systems, the ability to monitor those systems gets a bit fuzzy -- at least that has been my experience. Fortunately, I haven't had to do this on a large scale, so I can hunt down most of the systems once I figure out where everything is. I can imagine that for larger organizations, this could be a real challenge.
