Container images are the starting point for deployments, and the means to create and manage them vary. DevOps professionals must understand the role of containers, their place in the DevOps continuum and the specific tools to create them.
"Think of a traditional operating system as a pot of soup, with the developers and systems administrators being two chefs, adding ingredients," said Scott McCarty, senior strategist for containers at Red Hat. "Each chef has to worry about whether the other added garlic or too much broth before they put ingredients in. In contrast, a containerized environment is soup by the bowl."
"Each development team gets to make their own bowl of soup [instead of sharing one large pot], but the systems administrators still have interest in certified ingredients," he said. The organization must control software packages, libraries and language runtimes, as well as what types of language-specific libraries are used, such as Python, Ruby and Node.js. Start with known good base container images to build off of, McCarty said, and verify the security of content that the developers pull off the internet from open source repositories. Container tools relevant for developers are not always appropriate for the ops side.
What does the container image hold?
Container community members and vendors create a range of images that are convenient, secure and easy to build from. There are three main kinds of container images:
- Base images provide a starting place for a custom container image built largely from scratch. McCarty recommends that container managers use the same criteria to choose a base image that they would use to choose a Linux distribution.
- Intermediate images provide ready-to-use language runtimes, with all of their dependencies. These images are complete except for the developers' code.
- Application images provide a ready-to-use form of a database, caching server, web server, mail server, domain name system or sophisticated application made up of multiple components. They're often provided by a software vendor so that customers have a verified image of the product.
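To make the layering concrete, here is a hypothetical minimal Dockerfile that starts from a vendor-certified base image, adds a language runtime on top, and then copies in the developers' code -- the one piece an intermediate image would leave out. The image name, package and paths are illustrative placeholders, not specific recommendations:

```dockerfile
# Start from a known, certified base image chosen with the same
# care as a Linux distribution (placeholder image reference).
FROM registry.access.redhat.com/ubi8/ubi:8.9

# Layer a language runtime and its dependencies on top of the base.
RUN dnf install -y python3 && dnf clean all

# Add the application code -- the only ingredient the developers supply.
COPY app/ /opt/app/
CMD ["python3", "/opt/app/main.py"]
```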
Development and operations collaborate to determine how container images are built, McCarty said, and what ingredients go into the bowl of soup.
Enterprise IT organizations typically put an existing application into a container or rearchitect an application into microservices encapsulated in containers. Container deployment choices depend on how an organization plans to use the technology at the application level, said Ed Featherston, vice president and principal architect at Cloud Technology Partners, which Hewlett Packard Enterprise was in the process of acquiring at the time of publication.
Architecturally, Featherston said, running existing applications in containers is not significantly different than hosting the apps on VMs or cloud instances in an existing DevOps application deployment environment. The main benefit is portability; that app container will run on the selected OS wherever it is deployed, without differences in IT resources playing a role. This portability comes at a cost: an extra layer of abstraction and communication overhead, often for little benefit in return.
"I liken it to the days of wrapping a legacy system with an API, abstracting it away from the other systems, with the plan of rearchitecting the legacy later -- except that rearchitecting never happens," Featherston said.
Rearchitecting an application into microservices can yield heightened benefits when combined with containers. Microservices deployed in containers combine into the basis for larger, more complex systems. Developers find this combination supports the Agile methodology, as it is much easier to change, update and deploy a microservice than a monolithic application, Featherston said. Managing microservices in a container means the service theoretically can run anywhere once it is tested and validated in the container.
"For developers and business organizations ... it facilitates the fast-moving, agile world of technology," he said. However, for IT operations, container-isolated microservices can add to the complexity of a deployment.
Container images in ops' hands
While IT operations is not responsible for creating container images, the ops team can expect to deploy, manage, monitor and orchestrate the containers that come from dev.
"The container image you developed in dev is the exact same one you use in production," said Marty Puranik, CEO of Atlantic.Net, a web hosting company based in Orlando, Fla., and that's the beauty of containers. Operations can use the same tools and images as developers, as well as other tools to help with deployment as needed.
Every environment stage should run the same versions of software -- and even the same hardware -- to avoid surprises and incompatibilities, Puranik said. Different versions add and drop feature sets, change configuration formats, introduce syntax changes and make other alterations that can have dramatic consequences. "A simple configuration format change between versions may leave you unable to even start your containers that previously worked great," he said.
"When moving between versions, at the very least, take the time to read the release notes for breaking changes, and consult a compatibility matrix if the vendor is kind enough to release one, as Docker does," Puranik said.
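Puranik's version-discipline advice can be enforced mechanically. The sketch below -- an illustrative helper, not a standard tool -- scans a Dockerfile's FROM lines and flags base images that are unpinned or that use the floating latest tag, so a build can fail fast instead of silently picking up a new version:

```python
def unpinned_base_images(dockerfile_text):
    """Return FROM images with no explicit tag/digest, or tagged ':latest'."""
    flagged = []
    for line in dockerfile_text.splitlines():
        line = line.strip()
        if not line.upper().startswith("FROM "):
            continue
        image = line.split()[1]  # ignore any "AS stage" alias that follows
        if "@" in image:
            continue  # pinned by digest -- the strictest form of pinning
        # Look for a tag colon in the last path segment, so a registry
        # port (registry.example.com:5000/app) is not mistaken for a tag.
        if ":" not in image.rsplit("/", 1)[-1] or image.endswith(":latest"):
            flagged.append(image)
    return flagged
```

A CI step could run this over every Dockerfile in a repository and reject the change if the returned list is non-empty.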
Simple container images can be built manually if they are intended only for development purposes. However, Puranik said, switch to automated builds for container images destined for production use; automation produces consistent, repeatable images and saves time.
Container build tools range from shell scripts to software such as HashiCorp Packer and Jenkins. Packer, which is open source, automates the container build process across multiple platforms, runs on all major OSes and is fairly easy to use, according to Puranik. Packer does not replace configuration management tools -- Ansible, Chef, Puppet or another -- and the user can work with configuration management tools when creating the image, he said. Jenkins is a continuous integration server; it is not specific to container image creation. "You can use Jenkins to pull code repositories that contain image creation scripts and then build automatically," Puranik explained. Jenkins can even use Packer to build images from configurations stored in code repositories.
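As a rough sketch of what a Packer-driven container build looks like, the legacy JSON template below uses Packer's Docker builder to start from a base image, provision it with a shell step, and tag the result. The image name, packages and repository are placeholders, and newer Packer releases favor HCL templates over this JSON form:

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:22.04",
      "commit": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["apt-get update", "apt-get install -y nginx"]
    }
  ],
  "post-processors": [
    {
      "type": "docker-tag",
      "repository": "example/web",
      "tag": "1.0"
    }
  ]
}
```

Running `packer build` against a template like this yields the same image every time, which is the consistency Puranik describes.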
Vulnerable ingredients in container images
New security vulnerabilities get discovered regularly, so the older a component of a container image is, the more likely that it is at risk, Red Hat's McCarty said. Development and operations teams must work together to ensure that the container image is easy to rebuild with new components any time a security vulnerability, bug or configuration error is discovered. Plan to do this at scale, with hundreds or thousands of container images.
Tools such as the following are appearing to help control container images:
- OpenShift Source-to-Image is an open source tool that developers use to build artifacts from source and inject them into Docker container images.
- Ansible Container is an open source project for Ansible users who need to build, run, test and deploy containers.
- Buildah, a command-line tool, simplifies how users create, build and update Open Containers Initiative-compliant images and containers.
Many tools are good at building a container once, but combating security vulnerabilities requires automation and awareness of dependencies. For example, a build tool integrated into a continuous integration and continuous delivery (CI/CD) pipeline can be set up to always build applications with the latest certified base image from the operations team, McCarty said.
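One way a pipeline step could enforce "always build from the latest certified base image" is to rewrite a Dockerfile's FROM line before the build runs. This helper is an illustrative sketch of that idea, not part of any particular CI/CD product:

```python
def repoint_base_image(dockerfile_text, certified_image):
    """Replace the first FROM image so builds use the ops-certified base."""
    lines = dockerfile_text.splitlines()
    for i, line in enumerate(lines):
        if line.strip().upper().startswith("FROM "):
            parts = line.split()
            parts[1] = certified_image  # keep any "AS stage" alias intact
            lines[i] = " ".join(parts)
            break
    return "\n".join(lines)
```

A pipeline would fetch the current certified tag from the operations team's registry, rewrite the Dockerfile with it, and then run the normal image build, so a patched base image propagates to every rebuilt application automatically.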
The threat of container sprawl
Container sprawl and configuration drift can quickly create nightmares in the rearchitect usage model. Featherston named three capabilities critical for control and governance: service discovery, orchestration and a well-defined CI/CD process.
"In the microservice world, you cannot be hard-coding location and connections to the various services used by your application, especially if you are frequently deploying new versions of the microservices," Featherston said. A service registry and key-value store, such as HashiCorp Consul or CoreOS etcd, is a must.
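To illustrate registration rather than hard-coding, here is a minimal Consul service definition. A sidecar or the host agent loads a file like this, and other services then find the endpoint by name through Consul instead of a hard-coded address; the service name, port and health check URL are placeholders:

```json
{
  "service": {
    "name": "payments-api",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```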
Container orchestration tools, such as Kubernetes and Docker swarm mode, give IT operations teams a manageable way to scale up deployments based on the container images created by developers.
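As a sketch of what that scaling looks like in practice, a Kubernetes Deployment manifest declares how many replicas of a developer-built image should run, and operations adjusts the count without touching the image itself. The names, image reference and port below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
      - name: payments-api
        image: registry.example.com/payments-api:1.4.2
        ports:
        - containerPort: 8080
```

Scaling up is then a one-line change to `replicas`, or a command such as `kubectl scale deployment payments-api --replicas=10`.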
CI/CD -- and DevOps in general -- is more than a set of tools; it is a cultural mindset and process shift. "If you don't build new processes and get cultural buy-in, you will quickly lose control, and sprawl or drift will be a foregone conclusion," Featherston said.