
Monoliths-to-microservices move refactors IT ops skills

As enterprises seek speed and efficiency with microservices, IT ops pros must learn skills and consolidate tools to deal with increased infrastructure and application complexity.

IT ops skills must evolve to support refactored apps, as DevOps shops make the leap from monoliths to microservices.

Organizations can increase their software delivery speed with a move from monolithic applications, in which a single infrastructure endpoint serves multiple functions and fields many network requests, to microservices -- a distributed set of multiple infrastructure endpoints, each of which serves a single function. Such an architecture can also improve resiliency and ease troubleshooting, as individual microservices can be rebuilt independently of other application services.

For IT ops pros, however, a move from monoliths to microservices also means an increase in IT management complexity that requires fresh thinking on how to deploy apps, manage networks and monitor performance, with tools that accommodate multiple programming languages and operating environments.

Increased complexity will soon affect organizations across the industry, said Michelle Bailey, an IDC analyst, in a keynote presentation at the firm's Directions conference in Boston last week. Over the next two years, enterprises can expect the number of applications under management to grow by 50%, and 49% of enterprises can expect higher numbers of application interdependencies, compared with the 19% that deal with a high number of interdependencies today, Bailey's research showed. That's on top of today's reality, in which each business application already depends on an average of four to eight other services.

"As the number of services and languages in your environment grows, the complexity grows exponentially," said Zack Angelo, director of platform engineering at BigCommerce, an e-commerce software firm headquartered in Austin, Texas. "That's become the overarching premise of why we do the things that we do and why we've picked a lot of the technologies we have."

Monoliths to microservices: Feed the need for speed

Important benefits make microservices management headaches worthwhile -- namely, increased developer and team autonomy within the IT environment. Microservices applications can be created, rolled out and rolled back among distributed DevOps teams faster than monoliths delivered through centralized IT ops groups.

"Traditionally, as soon as something landed in production, the ops team participated a lot in the troubleshooting of the monolith," Angelo said. "But as we moved to services, which are not necessarily written in PHP, and they may be of a different architecture, that's no longer a scalable approach for us."

Instead, with microservices, BigCommerce builds its products with an optimum balance of autonomy and operability, in which each team owns specific areas of technology, Angelo said. That decreases the cognitive load and increases the focus for each individual team member, similar to the way the tasks required of each microservices endpoint are simplified and reduced.

Microservices also have some operational upsides, compared with monoliths. They trade complexity within a scaled-up application server for complexity across a network that connects multiple, simpler endpoints, which makes each individual endpoint easier to manage.

"A good example is the service that was our highest-throughput endpoint, with a couple of orders of magnitude more traffic than everything else," said Harrison Harnisch, a Chicago-based staff engineer for Buffer Inc., a social media management platform with a distributed workforce around the U.S. "With our monolith, we had to scale up all of our servers to meet that one endpoint's demand. But when we pulled that one out, we could scale it up as needed."

Isolating that endpoint also decreased the resources Buffer needed in Amazon Elastic Compute Cloud for the application overall, which shaves off costs, Harnisch said. Isolated microservices endpoints also generally limit the potential blast radius of security issues. Moreover, a move to microservices means each service can be completely rebuilt in less than two weeks, including testing, with no impact on the rest of the application, he said.


While app deployment tools in microservices environments must accommodate multiple programming languages, the architecture is also an opportunity for IT ops to standardize application deployment, regardless of how individual services are written, said Ernest Mueller, director of engineering operations at AlienVault, an IT security firm based in San Mateo, Calif.

"When something's big and monolithic, it always ends up being a special snowflake. But as we split it off into microservices, each microservice becomes a thing you can just stamp out according to a pattern," Mueller said.

Microservices shift the emphasis of IT capacity-planning skills

As with any architectural change in the enterprise IT environment, the move from monoliths to microservices comes with challenges and tradeoffs. IT ops pros must learn updated infrastructure management techniques and renew their emphasis on specific infrastructure management disciplines -- namely, network management and application performance monitoring.

At Buffer, Harnisch's team learned through trial and error in production how to correctly size the resources consumed by the high-traffic microservices endpoint once it was isolated. The process was complicated by the fact that the newly isolated endpoint, which ran code written in Node.js instead of PHP, was deployed into a relatively unfamiliar Kubernetes container cluster.

The ops team's estimates of the resources the app would need worked well until traffic reached about 50% of peak load, Harnisch recalled.

"If Node.js doesn't have enough memory, all those requests get queued up. And when they get queued up past a certain point, Kubernetes recognizes it's constrained [and] kills it; you lose all those requests, and it's not a good time," he said.

To determine whether the problem was memory-related, Harnisch's team reset resource limits to more reasonable values, then slowly ramped up traffic until containers shut down. This testing also informed the rollout of other, less traffic-heavy microservices.
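In a Kubernetes pod spec, those limits live in the container's resources block. Below is a minimal sketch with hypothetical values; the Node.js heap flag (--max-old-space-size) isn't mentioned in the article but commonly accompanies container memory limits, so the V8 heap stays below the threshold at which Kubernetes kills the container.

containers:
- name: links-endpoint                    # hypothetical container, continuing the sketch above
  image: registry.example.com/links-endpoint:1.0
  env:
  - name: NODE_OPTIONS
    value: "--max-old-space-size=768"     # cap the V8 heap below the container's memory limit
  resources:
    requests:
      memory: "512Mi"                     # what the scheduler reserves per replica
      cpu: "250m"
    limits:
      memory: "1Gi"                       # exceeding this gets the container OOM-killed
      cpu: "500m"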

"Once we had an idea of how many replicas we needed of that container, we knew the amount of traffic it needed to serve, so we adjusted our limits with much more accuracy the next time we did it," Harnisch said.

Such capacity planning requires much more precise application performance monitoring than might have been necessary with monoliths, which used internal connections between services to communicate rather than sending traffic over an external network. It also means IT ops pros must pay attention to network latency outliers they could safely ignore previously.

"As you start having to go through the network a lot more, the probability you're going to hit a latency spike becomes much greater, and so you have to worry about it a lot more," BigCommerce's Angelo said. "Your entire request, if it's composed of several services, will only be as fast as the longest individual network request."

Furthermore, each application language has its own operational challenges, Mueller said.

"For Python, it's library management; for Java, it's just giant and fat," he said. "We try to make sure our tooling just wraps around it, so you can use whatever you want."

Next Steps

Find out more about the tools of the microservices trade in part two of this story.

Should my company move its apps to microservices?
