Microservices have emerged as a software design pattern that breaks large applications into suites of loosely coupled components. To enable faster deployment, better scalability and a more agile development process, companies such as Nike and Netflix have adopted these principles. But operations teams face a host of new management challenges to keep microservices infrastructure running smoothly, from service discovery to release automation.
"We knew that we wanted to optimize for scale without over-architecting too early," said John Sheehan, CEO of Runscope, an API monitoring service.
The company began with a few key services broken down by function -- identity management and test data storage, among others -- to ship small and iterate. Runscope now has more than 50 internal services of varying sizes and averaged over 30 deploys per day in 2014.
"As individual services required more capacity, we were able to independently scale them without having to allocate resources to the entire cluster," Sheehan said. "We are also able to independently deploy each service more quickly."
To make this work, Runscope invested significantly in automation, deployment and cluster management tools, as well as libraries and frameworks that let app developers build and consume services consistently. Without that investment, the overhead of managing those services may have outweighed the benefits.
"If you're willing to invest in infrastructure, the benefits you get from small, reusable services can help you achieve significant ROI," Sheehan said.
Scenarios drive microservices
Enterprises have many reasons to adopt microservices, said Roman Iuvshin, lead DevOps engineer at Codenvy, a cloud-based integrated development platform that uses microservices.
1. A small group of people can self-maintain the service with few or no couplings to other services. Microservice teams can deploy and develop on their own, and own the results.
2. The client application must perform at maximum speed. Microservices allow Codenvy to perform different developer tasks on different clusters, even though it feels like accessing a single virtual machine. The net impact is less thrashing and blocking, and a more seamless experience.
3. The skills of the technology teams that own different components vary. Codenvy has specialists in big data, distributed systems and Web development who are not always on the same team. These specialists can build microservices in a stack that is optimized for their interests, skill sets and the needs of the service itself.
New operational concerns
With any new technology, there is a tradeoff. Microservices introduce a network-level separation of concerns, which causes a whole new set of problems, including latency issues and network unavailability, Sheehan said. Operations teams must look at more than how any given service performs, and understand the combined picture of services that make up app performance. This is why API monitoring and testing is so critical.
All of these little parts make up the application experience.
"If you don't look at the performance of how all of these parts are interacting, you'll miss out on a significant amount of operational data that will help you run better applications," Sheehan said. "Because of the additional variables the network introduces, you have to pay attention to another class of problems that likely weren't tracked as closely before."
Monitoring and testing these pieces is essential to solving these problems.
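The combined picture Sheehan describes can be sketched in a few lines. This is a minimal illustration, not any particular monitoring product's API: the service names and hard-coded latency samples are hypothetical, standing in for data a real API monitoring tool would collect.

```python
# Hypothetical latency samples (seconds) per internal service; in practice
# these would come from an API monitoring tool, not hard-coded values.
latency_samples = {
    "identity":  [0.021, 0.025, 0.019, 0.140],
    "test-data": [0.032, 0.030, 0.380, 0.033],
    "gateway":   [0.005, 0.006, 0.005, 0.007],
}

def p95(samples):
    """95th-percentile latency from a list of samples."""
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

def chain_p95(services, chain):
    """Rough upper bound for a request that traverses several services.

    Summing per-hop p95s overestimates the true end-to-end p95, but it
    makes the point: one slow hop dominates the combined picture even
    when every service looks healthy in isolation.
    """
    return sum(p95(services[name]) for name in chain)
```

Here the tail latency of a single service (`test-data`) dwarfs the rest of the chain, which is exactly the class of problem that per-service dashboards miss.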
Microservices also allow every service to run its own technology stack. Operations teams must know how to operationalize and maintain each of the stacks that best fits a given service.
Rethink the release process
A microservices approach decomposes monolithic applications into atomic microservices silos, and divides the software work effort across multiple, loosely coupled teams.
"Teams following a microservice architecture approach must scale up software-release management processes to address service dependencies, network distribution and autonomous release schedules," said Chris Haddad, platform evangelist at WSO2, a SOA middleware provider.
End-user applications will often use multiple microservices, such as product catalog, user profile and inventory. Additionally, microservices may interact with other microservices. Distributing microservices across multiple teams and across a distributed network topology introduces release challenges.
"Successful teams will introduce service versioning, release testing, incremental upgrade and release rollback into their software release process playbook," Haddad said.
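The service-versioning item in Haddad's playbook can be sketched with URL-path versioning, one common convention (not something the source prescribes). The catalog service, its handlers and the pagination change are all hypothetical:

```python
# A minimal sketch of service versioning via URL paths. Handlers and
# request shapes are illustrative only.
def catalog_v1(request):
    return {"items": request["items"]}

def catalog_v2(request):
    # v2 adds pagination without breaking v1 consumers.
    items = request["items"]
    page, size = request.get("page", 0), request.get("size", 2)
    return {"items": items[page * size:(page + 1) * size], "page": page}

ROUTES = {
    "/v1/catalog": catalog_v1,
    "/v2/catalog": catalog_v2,
}

def dispatch(path, request):
    """Both versions stay deployed until every consumer has migrated."""
    return ROUTES[path](request)
```

The key property is that both versions are routable at once, so teams with autonomous release schedules can upgrade consumers one at a time.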
Operations personnel should establish processes to perform an incremental upgrade. This type of upgrade means deploying a new microservice version alongside the last version, and then incrementally dialing up traffic to the new version.
An incremental upgrade confines any ill effects to a subset of the user base, and it enables the team to perform a smoke test in the live production environment. When a newly deployed microservice fails or delivers a poor user experience -- based on A/B test analysis -- teams should safely roll back the release. A safe and sane rollback capability belongs in every release management process.
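The dial-up-with-rollback loop described above can be sketched as a weighted router. This is an illustration of the idea, not a production traffic manager; the version labels, step size and error-rate threshold are all assumptions:

```python
import random

class CanaryRouter:
    """Send an increasing fraction of traffic to a new version,
    rolling back to the stable version if the canary misbehaves."""

    def __init__(self, stable="v1", canary="v2"):
        self.stable, self.canary = stable, canary
        self.canary_weight = 0.0   # fraction of traffic sent to the canary
        self.requests = {stable: 0, canary: 0}
        self.errors = {stable: 0, canary: 0}

    def route(self):
        version = self.canary if random.random() < self.canary_weight else self.stable
        self.requests[version] += 1
        return version

    def record_error(self, version):
        self.errors[version] += 1

    def dial_up(self, step=0.1, max_error_rate=0.05):
        """Increase canary traffic, or roll back if its error rate is too high."""
        served = self.requests[self.canary]
        if served and self.errors[self.canary] / served > max_error_rate:
            self.canary_weight = 0.0   # rollback: all traffic to stable
            return "rolled-back"
        self.canary_weight = min(1.0, self.canary_weight + step)
        return "dialed-up"
```

In real deployments this logic typically lives in a load balancer or service mesh rather than application code, but the shape is the same: small traffic steps, live smoke testing, and a rollback path that is always one decision away.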
About the author:
George Lawton has written over 3,000 technology news stories over the last 20 years. He lives in the San Francisco Bay area. You can reach him directly at firstname.lastname@example.org or follow him on Twitter @glawton.