

Docker instances redefine containers, but have drawbacks

Docker's service-centric approach to container-based virtualization takes virtualization technology to a whole new level, but still has some serious drawbacks.

The entire virtualization world is buzzing with talk of Docker -- especially Docker instances -- with references to the open source software container platform everywhere. So what, exactly, is Docker, and could it be right for you?

Docker is the latest evolution of virtualization. Most IT people are familiar with VMs; in a classic virtualization setup, every VM has its own separate memory space, disk and operating system. When you have several hundred VMs, the amount of memory reserved for the separate operating systems can be massive.

Docker is about more than just efficiency. Classic application design is monolithic in nature: essentially, everything is put into the same VM. Building and managing applications this way means a single fault or bad build can take down the entire application, and troubleshooting can be difficult. In contrast to this perceived waste, Docker operates on the concept of virtualizing services rather than whole applications. It follows the Unix mantra, "Do one thing and do it well." How does this translate into the real world?

An example of Docker instances in use would be an online video streaming service such as Netflix. The entire offering can be broken up into a range of discrete services that together provide the full application. On a macro level, there would be a set of services to provide all the required components, such as account services, network streaming services and video encoding services.

This service-centric approach has a number of benefits, including the fact that one part of the application can be modified and deployed without having to recompile or rebuild the entire application. It also reduces the amount of quality assurance required, as only the modified parts -- and the parts that interact with them -- have to be vetted. This can translate directly into time and cost savings.

Another significant mark in Docker's favor is speed: provisioning a VM in the classic sense takes several minutes, while a Docker container can be started in a matter of seconds.
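To illustrate, starting a service this way is a single command. This is a sketch that assumes Docker is installed and uses the public nginx image as an example; the container name and port mapping are illustrative:

```shell
# Start an nginx web server in the background, mapping
# port 8080 on the host to port 80 in the container.
# Once the image is cached locally, this takes seconds.
docker run -d --name web -p 8080:80 nginx

# Verify the container is up.
docker ps --filter name=web
```

The first run is slower because Docker must pull the image; every subsequent start reuses the local copy.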

Docker creates a software container that is, essentially, a file system running as a jailed instance. Each of these file system instances -- Docker containers -- runs on top of a shared Linux kernel rather than a full guest OS of its own.

The upshot is that you can run many Docker instances on a single physical host. There is no waste in terms of duplicated OSes, because the containers all share the host's kernel.
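The shared kernel is easy to see for yourself. In this sketch, which assumes Docker and the small public alpine image, the container reports the same kernel release as the host, because it has no kernel of its own:

```shell
# Kernel release as seen by the host.
uname -r

# Kernel release as seen from inside a container --
# the same value, since the kernel is shared.
docker run --rm alpine uname -r
```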

Docker makes it easy for developers to check out the latest available version, and, thanks to its containerized nature, developers aren't subject to slight variations in their configurations.

Docker's shortcomings

That said, Docker isn't for everyone. Docker can only virtualize Linux-based services. The Windows Azure service will run Docker instances, but, as of this writing, Windows services cannot be containerized.


Perhaps the biggest hurdle is managing interactions between all the instances. With all an application's components split among different containers, all the servers need to talk to each other in a consistent manner. This means anyone who wants a complex infrastructure must master application programming interface management and a clustering tool, such as Swarm, Mesos or Kubernetes, to ensure the machines are working as desired with failover support.
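As a sketch of what that clustering looks like in practice, Docker's own Swarm mode can replicate a service across hosts with failover. These commands assume a Docker host; the service name and image are illustrative:

```shell
# Turn this Docker host into a single-node swarm manager.
docker swarm init

# Run three replicas of a service; Swarm restarts failed
# replicas to maintain the desired count.
docker service create --name web --replicas 3 -p 80:80 nginx

# Confirm the replicas have converged.
docker service ls
```

Kubernetes and Mesos solve the same problem with their own tooling; the point is that some such layer becomes mandatory once an application is split across many containers.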

Docker is, by nature, an additive system. It is possible to build up an application using several layers of a file system, where each component is added on top of the previously created one -- a file system sandwich, if you like. This layering yields a further efficiency saving: when you rebuild a Docker image after a change, Docker rebuilds only the layers above the one that changed, rather than the full image.
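The layering is visible in an ordinary Dockerfile, where each instruction adds one layer on top of the previous one. This is a minimal sketch; the base image, package and file names are illustrative:

```dockerfile
# Base layer: the parent image.
FROM ubuntu:16.04

# Dependency layer: cached and reused across builds
# for as long as this line is unchanged.
RUN apt-get update && apt-get install -y python

# Application layer: when only app.py changes, the rebuild
# starts here and reuses everything above from cache.
COPY app.py /srv/app.py
CMD ["python", "/srv/app.py"]
```

Ordering matters: putting the rarely changing instructions first maximizes how much of the image the cache can reuse.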

Perhaps more importantly, Docker is designed for use in elastic scale computing. Each Docker instance has a finite operational lifetime, as the number of instances grows and shrinks according to demand. In a properly managed system, these instances are born automatically and die when they become unnecessary.

The downsides of a Docker environment mean there are several items you need to consider before jumping on board. First, Docker instances are stateless. Essentially, this means they shouldn't hold any transactional data themselves; data should be written to an external database or persistent store so it survives the container.
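A common pattern is to keep the data outside the container entirely, on a named volume attached to a dedicated database container. This is a sketch assuming Docker and the public postgres image; the volume, container and application image names are illustrative:

```shell
# Run a PostgreSQL container whose data directory lives on a
# named volume, so the data outlives any individual container.
docker run -d --name db \
  -v pgdata:/var/lib/postgresql/data \
  postgres

# Application containers stay stateless and talk to the
# database over the network instead of storing data locally.
docker run -d --name app --link db my-app
```

Replacing or upgrading the db container then leaves the pgdata volume, and therefore the data, intact.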

Second, developing Docker instances isn't as simple as creating a VM, adding in applications and then cloning it. To successfully create and use Docker infrastructure, an administrator will need a deep understanding of several different aspects of system management, including Linux management, orchestration and configuration tools such as Puppet, Chef and Salt. These tools are inherently command-line-based and script-based.

Figure A. Running the Docker whalesay image.

As mentioned before, Docker is designed for Web scale deployments. Other required components include version control and automated orchestration tools.

It's possible to experiment with Docker on most laptops or desktops -- a fun first start is whalesay -- but getting fully acquainted may take a little time. If you are looking to build elastically scalable applications, do yourself a favor and make Docker your first port of call.
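The whalesay experiment mentioned above is a single command, assuming Docker is installed; docker/whalesay is the image from Docker's own getting-started tutorial:

```shell
# Pull the tutorial image and have the ASCII-art
# whale speak a message of your choosing.
docker run docker/whalesay cowsay "Hello, Docker"
```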

Next Steps

How does storage work in Docker?

The battle between VM performance and storage

Docker leads the software container charge in cloud

A guide to the latest in containerization
