Containerization, exemplified by Docker, is among the hottest cloud technologies in the enterprise space. Docker is lightweight and efficient, and it's simple for enterprise IT to deploy. But simplicity sometimes comes at a cost.
Missing pieces could cause issues for users who deploy with Docker in production, prompting a move to extend the containerization tool in several important areas. Amid security concerns, for example, Docker has improved its security tools over the years. But operational issues beyond that demand attention.
Container networks tangle up traffic
Networking is the biggest Docker deployment challenge that must be addressed. Docker uses a basic IP-subnetwork model to deploy applications in containers: application components are connected to the appropriate subnetworks. But it is difficult to visualize what actually happens when Docker containers are connected to build componentized applications, and that difficulty leads to inefficient data routing. Docker also doesn't address the larger issue of the company IP network and the way applications connect with users.
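To make the subnetwork model concrete, here is a minimal, hypothetical Docker Compose sketch (service names and images are placeholders, not from the article) in which user-defined networks place components on separate subnetworks:

```yaml
# Hypothetical docker-compose.yml: two user-defined networks
# separate client-facing traffic from internal traffic.
version: "3.8"
services:
  web:
    image: nginx:alpine        # example front-end image
    networks: [frontend]
  api:
    image: example/api:latest  # hypothetical application image
    networks: [frontend, backend]
  db:
    image: postgres:15
    networks: [backend]
networks:
  frontend: {}
  backend:
    internal: true   # no external routing; container-to-container only
```

Here `db` is reachable only from `api`, which illustrates how subnetwork membership, not the application, defines which components can talk to each other.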
Software-defined networking (SDN) is the best way to address Docker networking issues. Docker uses basic virtual switching, but without a formal management framework to control how connections are built and managed, it doesn't improve the overall network efficiency or reduce operations costs and errors. An SDN architecture is needed.
Vendors such as Cisco, Juniper Networks, Nuage Networks from Nokia and VMware with parent company Dell Technologies offer SDN packages with specific Docker adaptations. These make it possible for an enterprise to plan and deploy Docker containers within a unified approach to application access and efficient IT operations. These SDN packages also reduce configuration errors that can affect security, even for VM-hosted applications.
Release to production -- now what?
IT operations is another issue Docker doesn't fully address. Adopters increasingly recognize that deployment and redeployment errors by operations staff cause application security, compliance, efficiency and availability problems.
A DevOps tool enables the container administrator to define deployment and redeployment as a modular structure that's validated in the lab and then deployed in production. This automates all the steps in an application lifecycle and reduces common ops mistakes.
The Docker Compose facility and the Google Kubernetes container orchestration tool both perform application lifecycle automation in Docker. And Docker's clustering capability helps users deploy and redeploy application components in a set of suitable hosts. More widely used DevOps tools for configuration management and automation, such as Chef, Puppet and Red Hat's Ansible, work with Docker containers as well, though many users find these tools difficult to learn and apply.
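As an example of what this lifecycle automation looks like, a Compose file captures a deployment as a declarative, versioned definition that can be validated in the lab and then applied identically in production. A minimal sketch, with hypothetical names and images:

```yaml
# Hypothetical docker-compose.yml: the deployment is a repeatable
# artifact rather than a sequence of manual steps.
version: "3.8"
services:
  app:
    image: example/app:1.2.0   # pin the version validated in the lab
    depends_on: [cache]
    restart: unless-stopped    # automatic restart reduces ops intervention
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      retries: 3
  cache:
    image: redis:7
    restart: unless-stopped
```

Running `docker compose up -d` against this file performs the same deployment every time, which is precisely how such tools reduce common ops mistakes.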
IT organizations must decide on the scope of their DevOps or containerized deployment. Is it 100% containerized, or will the operations team also handle virtual machines and server operations? A mixed infrastructure model is best supported through a cooperative relationship between a general DevOps tool, such as the configuration management systems listed above, and a container-specialized tool, such as Compose or Kubernetes.
Docker deployment best practices are hotly debated. Continuous deployment and operations automation can be facilitated by container orchestration and the right SDN choices, but expect rapid changes in this area.
How do we handle this?
A related issue in application lifecycle management is the handling of events, meaning operational conditions that affect containerized applications. Docker applications tend to let a network or component failure manifest as a connection failure -- TCP/IP, usually -- so the application sees the problem directly. This can lead to significant delays when responding to problems, to the point of compromised availability.
Kubernetes has set the defining practice for application lifecycle event handling in Docker deployment systems through container lifecycle hooks and health checks that provide visibility into overall cluster health. This enables users to define reactions to unusual conditions that don't depend on a visible application failure. Event handling is a requirement for any container DevOps and orchestration strategy.
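As a sketch of what this looks like in practice, a Kubernetes container spec can combine a liveness probe, so the platform detects a failure rather than waiting for a user-visible connection error, with lifecycle hooks that run at defined points. Names, paths and commands here are illustrative assumptions:

```yaml
# Illustrative Kubernetes pod fragment: the platform, not the
# application, detects and reacts to unhealthy containers.
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:1.2.0 # hypothetical image
    livenessProbe:           # restart the container when this fails
      httpGet:
        path: /health
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
    lifecycle:
      postStart:             # runs immediately after container start
        exec:
          command: ["/bin/sh", "-c", "/opt/app/register.sh"]
      preStop:               # runs before termination, e.g. to drain traffic
        exec:
          command: ["/bin/sh", "-c", "/opt/app/drain.sh"]
```

The probe and hooks give the orchestrator defined reaction points, so a failing component is restarted or drained before users ever see a TCP/IP connection failure.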
Sweat the little, even micro, things
The final issue when businesses deploy with Docker is its support for microservices. And this issue is, in many ways, a container for the other Docker issues.
Microservice architectures break applications into very small, reusable components, creating greater flexibility in application composition. The challenge is that microservice implementation can take many forms, from bundling microservices into machine images with the rest of an application to deploying them for use in many internal or even partner-facing applications. Each of these options demands a different Docker deployment model.
Docker orchestration is critical to support microservices, and so is a business-wide SDN model for Docker networking. The good news is that microservices are typically built to be stateless, so every instance can process a request identically. That makes it easier to manage application performance and availability through component scaling, which simplifies operational management of Docker deployments. Thus, a little time spent on microservices, container networks and orchestration can pay dividends in improved user experience.
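Because each stateless instance handles a request identically, scaling reduces to adjusting a replica count. A hedged sketch using a Kubernetes Deployment, with placeholder names and images:

```yaml
# Illustrative Deployment: capacity is managed by changing
# 'replicas'; any replica can serve any request.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-microservice   # hypothetical name
spec:
  replicas: 3                  # scale up or down as load changes
  selector:
    matchLabels:
      app: example-microservice
  template:
    metadata:
      labels:
        app: example-microservice
    spec:
      containers:
      - name: svc
        image: example/svc:1.0.0   # hypothetical image
        ports:
        - containerPort: 8080
```

Scaling is then a one-line operation, such as `kubectl scale deployment example-microservice --replicas=5`, which is the operational simplification that statelessness buys.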
Docker doesn't do everything. As it becomes clearer what users want it to do, Docker expands its capabilities in those areas. Docker orchestration will continue to evolve, and Docker networking will improve, particularly to accommodate microservices' range of network models. In short, Docker will get better, and the tools that extend it will continue to make it the leading container platform.