
Master microservices management on IT resources

Deploying and managing microservices is a hard, continuous process. Proper resource planning and automation tools help realize the benefits of cloud and virtualization.

In the dynamic world of componentized applications, resource and microservices management is a never-ending game.

The operational lifecycle of an application starts with deployment and is sustained by IT operations management. Virtualization makes the resource-to-application relationship explicitly dynamic, which encourages application designers to use microservices or composed services. IT teams must explicitly manage the binding of resources to application services and microservices, or risk compromising all the benefits of virtualization and cloud.

There are four distinct elements to managing resource binding and elasticity in virtualized, cloud and multicomponent or service applications:

  • The IT team must properly size the resource pool to accommodate the load it will handle.
  • The team must automate deployment and redeployment of applications and components.
  • Application lifecycle management procedures must continually validate the applications against the resource pool when changes are made.
  • Finally, the team must manage the resource pool itself through orchestration.

Size the resource pool

In the planning phase, accurate sizing of the resource pool demands accurate information on application and component resource utilization. Base capacity planning on current application performance data. Performance monitoring tools are available from nearly all the major IT players, such as Microsoft; third-party sources, such as AppDynamics and BMC Software's TrueSight Pulse; and open source projects, such as Munin. The key to managing microservices' resource utilization is to get performance data on each component, not on applications as a whole.
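
As a rough illustration, the sketch below (in Python, with invented component names and numbers) sums per-component peak utilization and adds headroom to arrive at a starting pool size. The buffer figure and the components involved are assumptions for illustration, not output from any particular monitoring tool.

```python
"""Hypothetical sketch: turning per-component performance data into a rough
resource-pool estimate. Component names and figures are invented."""

# Peak CPU (cores) and memory (GiB) observed per component during a
# representative busy period, exported from a performance monitor.
peaks = {
    "auth-service":     {"cpu": 0.9, "mem": 1.4},
    "catalog-service":  {"cpu": 2.2, "mem": 3.8},
    "checkout-service": {"cpu": 1.4, "mem": 1.6},
}

HEADROOM = 1.25  # 25% buffer for growth and measurement error

cpu_needed = sum(c["cpu"] for c in peaks.values()) * HEADROOM
mem_needed = sum(c["mem"] for c in peaks.values()) * HEADROOM

print(f"Plan for roughly {cpu_needed:.1f} cores and {mem_needed:.1f} GiB of memory")
```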

It takes a specialized tool to turn performance management data into a capacity plan for a virtual or cloud resource pool. The most popular tools are linked to the virtual environment. VMware offers popular options for virtualization and cloud planning; the Microsoft Operations Framework is a good choice for Microsoft Windows Server and Azure users.

The biggest problem with capacity planning comes from failover and scaling. Rather than rely on statistics to predict what happens when a new component instance spins up for recovery or load sharing, run tests and get hard data. Determine how many simultaneous failures the pool must absorb and which components must scale under load; settling those questions before apps reach production eases the microservices management burden later. Some of the answers lie in performance monitoring data -- failure rates, for example. If an application's performance suffers under load, try scaling that application's components.
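
Once those test numbers are in hand, the headroom arithmetic is straightforward. The hypothetical sketch below assumes a per-instance footprint and simply reserves capacity for the failure and scale-out counts the tests identified; every figure is illustrative.

```python
"""Hypothetical sketch: reserve failover and scale-out headroom on top of
the base capacity estimate. All figures are invented for illustration."""

def headroom(instance_cpu, instance_mem, failures_to_tolerate, scale_out_instances):
    """Extra capacity needed to absorb failures and peak-load scale-out."""
    extra_instances = failures_to_tolerate + scale_out_instances
    return extra_instances * instance_cpu, extra_instances * instance_mem

# Per-instance footprint (cores, GiB) of a component that must recover and scale.
spare_cpu, spare_mem = headroom(instance_cpu=2.0, instance_mem=4.0,
                                failures_to_tolerate=2, scale_out_instances=3)

print(f"Reserve {spare_cpu:.1f} cores and {spare_mem:.1f} GiB beyond the base plan")
```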

Automate the deployment process

With a defined resource pool, the challenge becomes managing the resources and the relationship with applications' workloads. DevOps and infrastructure as code (IaC) target more effective resource and microservices management in a virtualized environment. DevOps enables IT teams to deploy applications and components and manage their lifecycles, and IaC aims to automate resource deployment and lifecycle management.

Virtualization and the cloud put a premium on complex, error-free, automated application deployment, and that emphasis drives much of the interest in DevOps. There are two primary models of DevOps tools: the declarative model defines what a deployed application should look like, and the imperative model describes how to get there. Puppet and Chef are the most-used tools in those respective camps. Puppet is generally favored by operations and Chef by developers, although both have adherents on either side of the IT wall.

Cooked or stuffed: Why Chef and Puppet attract different adherents

Chef is program-like in the way it works, which makes developers comfortable. They like the control Chef offers. Puppet is more operations-like because the user describes what they want the result to be, not how to get there.

Chef is evolving to support more end-state features, while Puppet is growing to allow more custom definition of actions, so the two models are probably converging.
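
To make the distinction concrete, here is a hypothetical Python sketch, not Puppet or Chef code, that contrasts the two models: a declarative desired state reconciled into actions versus an imperative script that spells the steps out directly.

```python
"""Hypothetical Python sketch of the two DevOps models; this is not Puppet
or Chef syntax, just the declarative-vs-imperative distinction in miniature."""

# Declarative (Puppet-style): state what the deployed application should look
# like and let a reconciliation step work out the actions.
desired_state = {"package": "web-api", "version": "2.4.1", "instances": 3}

def reconcile(current, desired):
    """Compare the current environment to the desired state and return actions."""
    actions = []
    if current.get("version") != desired["version"]:
        actions.append(f"install {desired['package']} {desired['version']}")
    delta = desired["instances"] - current.get("instances", 0)
    if delta > 0:
        actions.append(f"start {delta} instance(s)")
    elif delta < 0:
        actions.append(f"stop {-delta} instance(s)")
    return actions

# Imperative (Chef-style): spell out the steps yourself, in order.
def deploy_imperatively():
    return ["stop old instances", "install web-api 2.4.1", "start 3 instances"]

print(reconcile({"version": "2.3.0", "instances": 1}, desired_state))
print(deploy_imperatively())
```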

Check app behavior against the resource pool

Make capacity planning verification part of the application lifecycle management (ALM) processes. ALM brings about new application versions and configurations, which affect resource utilization. With ALM in mind, integrate DevOps tools into IT resource management. Without specific validation of the capacity plan when applications change, managing the application-to-resource bindings becomes guesswork.

If your organization doesn't already use performance monitoring tools routinely, consider adding them for ALM testing. Strive to understand the resource effect of all application changes, and, if necessary, to reflect them in the sizing of the resource pool or in the way that applications and components are deployed.
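
One way to wire that validation into ALM testing is a simple gate that compares measured per-component usage from load tests against the capacity plan. The sketch below is a hypothetical Python example; the tolerance, component names and the choice to fail the release on a mismatch are assumptions, not features of any specific tool.

```python
"""Hypothetical ALM gate: flag a release when the new build's measured
footprint no longer fits the capacity plan. Names and thresholds are invented."""

def validate_against_plan(measured, plan, tolerance=0.10):
    """List components whose measured CPU use exceeds the planned budget."""
    problems = []
    for component, usage in measured.items():
        budget = plan.get(component)
        if budget is None:
            problems.append(f"{component}: no entry in the capacity plan")
        elif usage > budget * (1 + tolerance):
            problems.append(f"{component}: {usage:.1f} cores vs. budget of {budget:.1f}")
    return problems

plan     = {"auth-service": 1.0, "catalog-service": 2.5}   # from the capacity plan
measured = {"auth-service": 0.9, "catalog-service": 3.1}   # from ALM load tests

issues = validate_against_plan(measured, plan)
if issues:
    raise SystemExit("Capacity plan validation failed:\n" + "\n".join(issues))
```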

The other side of that binding -- the resources -- is the focus of infrastructure as code. If the resource pool is properly planned and managed against that plan by IaC, and if ALM has validated application changes, then it should be far easier to deploy and manage the bindings between applications and resources.

IaC is in its early stages as a recognized virtualization and microservices management practice. DevOps tools routinely offer some IaC capability, but it is often limited to commissioning, decommissioning and recommissioning the resources in the pool. That means resource-level performance monitoring, using standard platform tools such as those built into Linux OSes, is almost certainly necessary to maintain a view of resource utilization during operation.

IaC is evolving to integrate more tightly with fast-moving DevOps processes. For example, resource conditions can trigger a change in how application components are deployed. Achieving full automation of resource binding and elasticity in complex component deployments will require this kind of event sharing. Explore this capability when evaluating DevOps tools for microservices management on virtualization and cloud platforms, and consider altering the team's DevOps practices to gain the maximum level of integration between resource state and deployment or redeployment.
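
What that event sharing might look like is sketched below in Python. The thresholds and the trigger_redeploy hook are invented placeholders; in a real toolchain, the monitoring side would raise the event and the DevOps tool's own API would carry out the redeployment.

```python
"""Hypothetical sketch of resource events driving deployment changes.
The thresholds and the trigger_redeploy hook are invented placeholders."""

def trigger_redeploy(component, reason):
    # Placeholder: a real toolchain would call the DevOps tool's API or queue a job.
    print(f"redeploy {component}: {reason}")

def handle_resource_event(event, cpu_limit=0.80, mem_limit=0.85):
    """Map a resource-level utilization event to a deployment action."""
    if event["cpu"] > cpu_limit:
        trigger_redeploy(event["component"], "CPU pressure, scale out")
    elif event["mem"] > mem_limit:
        trigger_redeploy(event["component"], "memory pressure, move to a larger host")

# A monitoring agent would emit events like this during operation.
handle_resource_event({"component": "catalog-service", "cpu": 0.91, "mem": 0.60})
```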

Orchestrate resource provisioning

A final consideration for microservices management in the face of complex component, service and resource bindings is the march toward orchestration. Orchestration is a lifecycle-management-centric vision of DevOps and IaC that promises the ability to automate the whole process of resource and application control from top to bottom. Aspects of network function virtualization, the blueprint of orchestration, are already becoming visible in DevOps, IaC and application performance monitoring capabilities. Since these are already rolling into virtualization and cloud tools, the future for application operations looks brighter every day.


This was last published in July 2016
