AUSTIN, Texas -- Blue chips offered fellow IT pros a blueprint to redeploy traditional applications via a Docker migration here this week.
Large enterprises, such as ADP, have seen good results from containerizing traditional applications, even when they're not refactored into microservices, according to presentations at DockerCon. Visa, on the other hand, had some advice for organizations that want to break down apps into their component parts. And MetLife got down and dirty with the operational dos and don'ts of the containerization process.
Organizations with monolithic apps can still containerize and convert to microservices at their own pace, argued James Ford, chief architect of strategy at HR software maker ADP, based in Roseland, N.J.
"If you encapsulate the monolith and only evolve the pieces that matter away from the application programming interface you expose, you can be working on the new functionality and let the old functionality die naturally," Ford said. "I think you get more out of going that route than starting by saying, 'Let's decompose everything.'"
Ford advised beginners on a Docker migration to walk before they run -- and crawl before they walk, if necessary. Today, ADP has 9,100 app images in its Docker registry, but it started small with a mobile app for tax resolution forms. It then slowly evolved to include apps running on virtual machines, bare-metal servers, Linux and Windows systems -- even mainframes. The company is still waiting for mainframes to be able to participate fully in Docker Datacenter-managed clusters, he said.
ADP also has containerized only the application and web server portions of its three-tiered applications, while databases remain on traditional VM and bare-metal infrastructure. Its Docker swarm clusters remain on the small side, too: the company runs 28 of them, though one large cluster would be ideal, Ford said.
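A minimal Compose sketch illustrates that kind of split, with hypothetical image and host names (the article doesn't name ADP's actual services): the web and app tiers run as containers, while the database stays on its existing VM and is reached over the network.

```yaml
# Hypothetical docker-compose.yml: only the web and app tiers are containerized.
version: "3.3"
services:
  web:
    image: example/web-frontend:1.4      # hypothetical image
    ports:
      - "80:8080"
    depends_on:
      - app
  app:
    image: example/payroll-app:2.1       # hypothetical image
    environment:
      # The database is NOT a container -- point at the existing
      # VM or bare-metal host over the network.
      DB_HOST: db-vm01.example.internal
      DB_PORT: "5432"
```

Because the database tier is addressed by hostname rather than linked as a service, it can stay on traditional infrastructure until (or unless) it is ever containerized.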
On the other hand, some companies, like Visa, pursue microservices as they containerize. The payments-processing company found the granularity of microservices is important to run containerized apps efficiently at scale.
"If your heap size is too big, you may not be getting the most out of microservices, because most of the infrastructure is now memory-bound," said Swami Kocherlakota, global head of infrastructure operations for Visa.
With this in mind, Visa moved a VM-based environment to containers running on bare metal and slashed the time to provision and decommission its first containerized app by 50%.
Like ADP, Visa began with one application group for Docker migration. It now has a blueprint to convert its apps and infrastructure to container-based deployments, so it will repeat the process in five more application groups, Kocherlakota said.
Despite strong overall interest in containers, some users still question the ultimate incentive to containerize applications, particularly on the operational side.
"It doesn't really make sense economically," said Joep Piscaer, CTO of OGD ict-diensten, an IT managed services provider and consulting firm in the Netherlands. "Why would I go to the effort of refactoring an app if it runs the same? It's a lot of work for essentially producing the same app." Not every enterprise has developers to benefit from the increased velocity containers can provide, he added, and certain apps, such as virtual desktop infrastructure, probably won't ever make sense for containers.
Piscaer conceded there is a benefit to independent software vendors (ISVs) releasing software into standardized registries for enterprises to deploy. "It's a decent answer to something VMware never could give an effective answer for: the issues companies may have around how to align with ISV development cycles and get new versions of software into production," he said.
Still, enterprise apps in production require a number of services from ISVs, from security to backup and disaster recovery, many of which remain works in progress at this stage.
"There is an ecosystem, and this will probably change as the ecosystem matures, but it's very fragile right now," Piscaer said. "It's just not a finished story yet."
MetLife digs into containerization details
Whether enterprises choose to deploy microservices right away or down the road, attention to detail as they convert to containers is paramount, said Tim Tyler, lead solutions engineer for MetLife, in a presentation at DockerCon.
"Sweat the little stuff," Tyler said. "I can't stress this one enough -- we were really lucky that we did, and we did early on."
MetLife's team decided at the outset of its five-month containerization process to tag, annotate and label containers as clearly as possible, according to geographic location and business ownership for chargeback. This makes it easy to determine where a container runs and to whom it belongs.
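A tagging scheme like that can be baked into images at build time as labels. The keys and values below are hypothetical, since the article doesn't give MetLife's actual label names:

```dockerfile
# Hypothetical Dockerfile fragment: labels record where a container runs
# and which business unit owns it, for chargeback and operations.
FROM example/app-base:1.0            # hypothetical base image
LABEL com.example.region="us-east" \
      com.example.business-unit="claims" \
      com.example.cost-center="cc-4471"
```

At runtime, ops can then filter by owner with something like `docker ps --filter "label=com.example.business-unit=claims"`.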
The MetLife team also carefully considered how metadata is managed. As a result, IT ops can now see metadata, such as the expected count of containers per node, through the Universal Control Plane interface within Docker Datacenter -- crucial to maintain good performance and security in the environment.
"Our monolithic app is suddenly 50 individual bits of business logic floating around in this pool of resources, and we thought it was probably going to be hard to manage," Tyler said. "We had no idea just how hard it was going to be."
Test and fail to keep skills sharp
Companies that containerize apps should test as often as possible, Tyler said. MetLife created its own version of Netflix Chaos Monkey to purposely break its clusters and teach IT ops how to respond to problems in the environment.
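The article doesn't describe MetLife's tool, but the basic idea of a Chaos Monkey clone can be sketched in a few lines of Python, assuming the Docker CLI is on the PATH. The function names here are invented for illustration; the `dry_run` flag reports the would-be victim without actually killing it.

```python
# Minimal chaos-monkey sketch (hypothetical): list running containers,
# pick one at random, and kill it so ops teams can practice recovering.
import random
import subprocess


def pick_victim(container_ids, rng=random):
    """Choose one container ID at random; return None if the list is empty."""
    return rng.choice(container_ids) if container_ids else None


def unleash(dry_run=True):
    """Find running containers via the Docker CLI and (optionally) kill one."""
    out = subprocess.run(["docker", "ps", "-q"],
                         capture_output=True, text=True)
    victim = pick_victim(out.stdout.split())
    if victim and not dry_run:
        subprocess.run(["docker", "kill", victim])
    return victim
```

Scheduling a script like this against a test cluster, as MetLife did, turns failure handling into routine practice instead of a surprise.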
MetLife developers even hold what Tyler called "war games" to keep ops teams' troubleshooting skills sharp. That's how MetLife learned important lessons, such as maintaining microservices affinities and anti-affinities to avoid duplicate processes on the same node.
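In Docker swarm mode, that kind of affinity and anti-affinity can be expressed with placement rules. A hedged sketch with hypothetical service and label names:

```yaml
# Hypothetical Compose v3.3 fragment: pin a service to a labeled group of
# nodes (affinity) and spread replicas across nodes so two copies of the
# same microservice don't land on one host (anti-affinity).
version: "3.3"
services:
  claims-api:                          # hypothetical service name
    image: example/claims-api:3.0      # hypothetical image
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.labels.tier == app    # affinity: app-tier nodes only
        preferences:
          - spread: node.id            # anti-affinity: spread across nodes
```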
MetLife also learned what not to do on a Docker migration. The company still struggles somewhat with governance around containerized apps and deciding who in the organization "owns" individual Dockerfiles and Compose files, as well as software artifacts and processes from a regulatory compliance perspective, Tyler said.
"If I had an opportunity to go back in time, I would say we should've applied more effort to this, earlier on," he said.