Craft repeatability into container lifecycle management

Carefully identified states and their respective event triggers keep containers acting within set parameters and are key to lifecycle management.

Container deployment involves a repeatable series of steps or processes -- from preparing an application to making it available to users or other apps in the form of containers and clusters. This process-centric approach gives rise to container lifecycle management, and it should include an orchestration tool or script.

Virtualized hosting follows a traditional deployment path, whether for containers or VMs: Prepare applications for hosting -- usually a development or onboarding task; select a pool of resources for the deployment of related applications and components; and then deploy each within that selected pool. Finally, connect those hosting points to one another, to users and to other integrated applications.

Container lifecycle management has specific steps that closely relate to the server resources and virtualization tools in use, so it also has a lot of commonality with operations management processes. Design management processes to be repeatable across applications: this can cut your work in half or more and improve application lifecycle management. It will also eliminate steps you'd otherwise have to take when making changes to virtualization resources, platforms or applications.

Repeatable process requisites

Repeatable processes require not only standardized infrastructure, but also a standardized view of the operations lifecycle. In operations, an application has a normal state and a series of alternative states that are responses to special conditions. The heavy load state, for example, could represent a response to something that's unusual but not considered a fault, while the primary-data-center hosting failure state clearly indicates an error. Conditions or events trigger these states; processes that run are the response. Defining the states and events for each application is a good starting point to optimize processes in container lifecycle management.
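As a sketch, this state-and-event model can be captured in a simple transition table. The three states come from the article; the event and process names here are illustrative assumptions, not a standard:

```python
# Minimal state model: (current_state, event) -> (next_state, process).
# State names follow the article; event/process names are illustrative.
TRANSITIONS = {
    ("normal", "high_load"):    ("loaded", "scale_out"),
    ("loaded", "load_normal"):  ("normal", "scale_back"),
    ("normal", "host_failure"): ("failure", "redeploy"),
    ("failure", "recovered"):   ("normal", "resume"),
}

def handle_event(state, event):
    """Return the new state and the process to run; unknown events leave
    the application in its current state with no process triggered."""
    return TRANSITIONS.get((state, event), (state, None))
```

Writing the table down this way forces the team to enumerate every state, every triggering condition and every response process before any tooling is built.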

A simple application could have three states: normal, failure and loaded. For each state, define the conditions that signal when to enter that state, and then identify the steps to get there. This is where careful process design starts.

As an example, let's refer to the process of entering the normal state when you launch an application as the deploy process. This process likely looks similar to the process created to recover from a failure, as well as the one used to spin up another copy of an application in response to high workload demands. Instead of designing three different deploy processes -- deploy, redeploy and scale-out -- simply define one process, and supply parameters to customize it.
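A minimal sketch of that single parameterized process might look like the following; the function and step names are hypothetical, standing in for whatever your orchestration tooling actually runs:

```python
# One deploy routine covers initial deploy, redeploy after failure and
# scale-out, selected by a mode parameter. All names are illustrative.
def deploy(app, mode="deploy", replicas=1, target_cluster=None):
    steps = []
    if mode == "redeploy":
        # Recovery reuses the deploy flow after clearing the failed copy.
        steps.append(f"remove failed instance of {app}")
    if mode == "scale-out":
        # Scale-out reuses the flow to add capacity under load.
        steps.append(f"increase {app} replicas to {replicas}")
    else:
        steps.append(f"schedule {replicas} instance(s) of {app}"
                     + (f" on {target_cluster}" if target_cluster else ""))
    steps.append(f"connect {app} to dependent services")
    return steps
```

One routine with parameters means a fix to the deployment logic propagates to recovery and scaling automatically, instead of being patched in three places.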

The processes defined in application operations are broken down into steps for deploying hosting resources and connecting elements. These two areas are where container operations differ most from VM-based operations. So, start with container-specific processes for deployment and connection, and then move to application processes with a focus on making them reusable.

Define a process and standardize tools to maintain consistency

Container deployment is hierarchical: Create clusters; select the cluster -- and the pod or namespace grouping within it -- where related components will deploy and where internal addresses are mutually available; and then host those components within the cluster. Use an orchestration tool, such as Kubernetes, or scripts built on one to support these processes.
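The hierarchy can be sketched as the sequence of orchestrator commands a deployment script would issue. This assumes Kubernetes with kubectl; the cluster, namespace and application names are placeholders:

```python
# Sketch: the hierarchical deployment steps as kubectl invocations.
# Real scripts would run these (e.g., via subprocess) and check results.
def deployment_commands(cluster, namespace, app, image, replicas=2):
    return [
        # 1. Target the cluster that will host the related components.
        f"kubectl config use-context {cluster}",
        # 2. Create the grouping where internal addresses are shared.
        f"kubectl create namespace {namespace}",
        # 3. Host the application's components within that grouping.
        f"kubectl create deployment {app} --image={image} "
        f"--replicas={replicas} -n {namespace}",
    ]
```

Generating the commands from one function, rather than hand-typing them per application, is what makes the process repeatable.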

Scaling and redeployment processes are associated with host and cluster conditions, which can be reflected in various ways. Kubernetes, for example, provides postStart and preStop container lifecycle hooks for which you can define handlers; it also emits container and cluster events that you can forward to Stackdriver for logging and processing.

If you've taken the time to define standard events, such as signals for scale-out, scale-back or redeploy-on-failure, you can define a process template for each. When the IT organization packages these event-handling processes as standard tools, the developers and operations personnel involved in container lifecycle management can control how applications respond to load and failures; they can also manage changes to components of in-use applications.
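One way to sketch those per-event templates is a lookup table that fills in per-application parameters at run time; the event and template strings below are illustrative assumptions:

```python
# Sketch: standard lifecycle events mapped to process templates.
# Per-application details are supplied as parameters when an event fires.
TEMPLATES = {
    "scale-out":  "deploy {app} replicas={replicas} mode=scale-out",
    "scale-back": "deploy {app} replicas={replicas} mode=scale-back",
    "redeploy":   "deploy {app} replicas={replicas} mode=redeploy",
}

def render_process(event, **params):
    """Expand the standard template for an event, or fail loudly for
    events that have no agreed-upon handler."""
    template = TEMPLATES.get(event)
    if template is None:
        raise ValueError(f"no standard template for event {event!r}")
    return template.format(**params)
```

The deliberate failure on unknown events keeps teams from quietly improvising nonstandard responses.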

All of this demands a consistent container environment. Having a mix of hypervisors and virtualization or cloud software tools in a VM environment invites errors and instability. The same is true for containers. Pick a single strategy for containers, as well as a single container orchestration tool. Having one hosting environment helps standardize the operations processes.

Standardize how you assign application addresses. Most companies use private IP addresses, as outlined in RFC 1918. It is helpful to assign a group of related applications a single block of the enterprise's private address space, then subdivide that block for the individual applications -- and again for their components. Related applications would then be hosted in pods or clusters with standardized addresses, which makes it easier to use DevOps commands to respond to lifecycle events using standard templates.
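That subdivision is straightforward to automate with Python's standard ipaddress module; the specific RFC 1918 block and prefix lengths below are example choices, not recommendations:

```python
# Sketch: carve a per-application-group block out of RFC 1918 space,
# then split it per application and again per component.
import ipaddress

group_block = ipaddress.ip_network("10.20.0.0/16")  # one application group
apps = list(group_block.subnets(new_prefix=20))     # per-application blocks
components = list(apps[0].subnets(new_prefix=24))   # components of app 0
```

Because every application's block is derived the same way, address assignments stay predictable enough to plug into standard templates.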

Not everything in your enterprise is based on container hosting -- even if you have a strong container strategy. Evaluate ways to integrate noncontainer steps, such as how to manage user-to-application connections or how to use VM or bare-metal servers. Standardize these processes, and use similar tools to support them. The result will be a more efficient operation that better serves users.
