A fresh approach is needed to manage containers properly


What admins need to know to master containerization technology

Running applications in containers makes sense when it comes to resource usage, but how will your operations team handle the related management tasks?

Containers have inserted themselves into the IT conversation, and their usefulness is being discussed in ever-widening circles. While adoption has been limited so far, it seems clear that 2017 is going to see a lot more production deployment of applications in containers. Deploying anything into production, of course, is when the operations team gets involved -- and they're going to have questions.

There are plenty of issues to consider when it comes to management of containerization technology, including how to handle application dependencies in Docker files and the role microservices may or may not play in your cloud strategy. To be sure, there's a learning curve.

DevOps brought the idea that developers should support production. The reality is that developers need their sleep, and it's the operations team that looks after production at all hours of the day and night. Operations teams will need to understand the impact of these new containerized applications, and they are the ones expected to resolve availability or performance issues.

Also, an IT organization might need new tools to monitor and manage containers. Depending on how they are used, containers can have significant operational impacts on an IT organization. To manage containerization technology properly, admins must take these factors into consideration and put together a plan to make it all work.

Manage applications and dependencies

Sometimes containers are just a way to package and distribute an application. When an existing application is distributed as a Docker container, for example, the packaging itself is the main value Docker provides -- although it's not the only reason to use Docker or containers. Containers existed before Docker; Docker provides a structured and controlled way to create them.

In some cases, the container simply wraps up the application and its dependencies. The container is then run on a server. The magic of Docker is to wrap all of the application's dependencies into a Docker image and have a single text file (Docker file) that describes how to create the image. With this existing application model, each server may run just one instance of the container -- just like the server used to run one instance of the application.
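To make that concrete, a minimal Docker file for a hypothetical Java application might look like the following. The base image, file names and tags here are illustrative assumptions, not taken from any particular application:

```dockerfile
# Hypothetical Docker file: every dependency the application needs
# is declared here, so the server needs nothing installed but Docker.
FROM openjdk:8-jre
COPY app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Running `docker build -t myapp:1.0 .` turns this file into an image; `docker run -d myapp:1.0` then starts the application on any server that has Docker, with no separate Java installation.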

It's fairly simple to monitor and manage this use of containers: continue to monitor and manage the server. The admin can still see the application processes on the server, along with resource usage.


In some ways, this is even simpler, as there is no need to check that the server has all of the application's prerequisites, such as the correct version of Java or the appropriate Python libraries. All of these dependencies are in the Docker image and controlled by the Docker file.

It might no longer be necessary to install and maintain Java on the server, but you should have some control over these Docker files and ensure that the Docker images contain up-to-date components. Rather than update or patch Java on the server, you update the Docker file and build a new Docker image.
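For example, patching the Java runtime becomes an edit to the Docker file's pinned base image, followed by a rebuild. The tags shown here are hypothetical:

```dockerfile
# Before: FROM openjdk:8u111-jre
# After a Java security patch, bump the pinned tag and rebuild:
FROM openjdk:8u121-jre
COPY app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Rebuilding with `docker build -t myapp:1.1 .` produces a patched image; containers already running keep the old Java until they are replaced with containers started from the new image.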

One new operational task may be to scan the Docker files for vulnerabilities. A Docker file may direct the installation of vulnerable or unsupported components into the image. It may also be necessary to implement policies about the maximum allowed age of a built Docker image. The versions of the dependencies are fixed inside the image and can only be updated by building a new image.
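A maximum-image-age policy can be enforced with a simple check against the image's creation timestamp. The sketch below is a hypothetical example: in practice the timestamp would come from `docker inspect -f '{{.Created}}' <image>` rather than the hard-coded sample value used here.

```shell
#!/bin/sh
# Hypothetical image-age policy check: flag images older than
# MAX_AGE_DAYS so their baked-in dependencies get rebuilt.
# CREATED would normally come from: docker inspect -f '{{.Created}}' <image>
CREATED="2017-01-01T00:00:00Z"   # hard-coded sample value for illustration
MAX_AGE_DAYS=30

created_s=$(date -u -d "$CREATED" +%s)   # GNU date syntax
now_s=$(date -u +%s)
age_days=$(( (now_s - created_s) / 86400 ))

if [ "$age_days" -gt "$MAX_AGE_DAYS" ]; then
    echo "image is ${age_days} days old: rebuild required"
else
    echo "image is ${age_days} days old: within policy"
fi
```

A check like this can run in a scheduled job and feed a rebuild pipeline, so that image freshness is a policy rather than a manual chore.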

Containerization technology is dynamic

Sometimes containers enable microservices architecture. This makes them a whole new way to assemble applications.

Containers can be started and stopped much faster than VMs. Starting another container typically takes a fraction of a second. A single server can run multiple containers at once, with some isolation between the containers. In applications built in the microservices style, containers are far more numerous and often short-lived. The application is broken into small parts -- microservices -- with a Docker image for each of the dozens of parts. Each microservice can scale out and in by creating or destroying containers.

This is a far more dynamic environment than the monolithic applications that admins are used to managing. The underlying servers can still be monitored with your normal tools, but the containers themselves are too volatile for these tools.

A single container may only live for a few seconds. New tools will be required to manage and monitor these microservices applications in production. Hyperscale vendors such as AWS have built their own tools to manage their fleets of containers. Large organizations are more likely to use tools from providers such as New Relic and Datadog to monitor their container fleets.

Schedulers, tools make a difference

An essential part of DevOps is to have automation around all parts of the application, which includes the many containers. This is where schedulers play their part: to make sure that the right number of copies of each container is running and healthy. The scheduler may be something container-specific such as Kubernetes or a more general-purpose scheduler such as Apache Mesos. It might even be Kubernetes and Mesos together.
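With Kubernetes, for instance, that policy is declarative: the operator states how many copies of a microservice should be running, and the scheduler creates or destroys containers to match. A minimal sketch, in which all names and the image reference are illustrative:

```yaml
# Hypothetical Kubernetes Deployment: the scheduler keeps three
# copies of this microservice running and replaces failed ones.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
      - name: orders-service
        image: registry.example.com/orders-service:1.0
```

Scaling out or in then becomes a policy change -- for example, `kubectl scale deployment orders-service --replicas=10` -- rather than manually starting or stopping individual containers.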

The automation reduces the amount of manual intervention required to monitor and manage the application. Management should be based on setting policies for the application. The scheduler simply implements the policies.

Deploying monolithic applications into production with containerization technology may require only a small change to the management and monitoring of those applications. If the applications are broken up into microservices and require a scheduler, the operational changes will be far more significant.

While operational tools for fleets of containers are still early in their development, many tools are being built for container management. Successful operation of microservices applications will require new tools and new methods from operations teams.

Next Steps

How microservices and containers are changing data centers

Select a strong container orchestration tool

The real differences between VMs and containers
