
Is IT workload management easier on software-defined infrastructure?

Our IT organization uses deployment automation, monitoring tools and virtualized resources. What do we need to do to manage IT workloads?

Application updates and changes in networking or storage infrastructure affect how data and processes interact with IT infrastructure. And in complex environments, any change -- even a seemingly unrelated one -- can have an effect across the deployment. A combination of virtualization and automation enables IT teams to redeploy large swaths of infrastructure with minimal effort, but IT workload management is still an ongoing mission of adaptation and troubleshooting.

In theory, software-defined infrastructure is more predictable than hardware-tied setups. In practice, IT pros can see wildly different results. Sophisticated systems are sensitive to dependencies that might only become apparent in production. Changes can creep in if the tooling is not in place to prevent them, and without an organized, comprehensive deployment methodology, drift occurs during upgrades. Ops teams should evaluate IT workload management tools and procedures for modern infrastructure, ranging from infrastructure and application performance monitoring to scaling technology.

It's not always possible to pin down why something that shouldn't change does. Use configuration management tools to enforce policy and override ad hoc changes made on individual systems. An immutable infrastructure setup is another way to maintain consistency with IT workloads.
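Configuration management tooling is the usual way to enforce that policy. As a minimal illustration only, and not a substitute for a real tool such as Ansible, Puppet or Chef, the Python sketch below compares a host's live config file against a desired baseline and reverts any ad hoc change. The file paths are hypothetical.

```python
# Minimal drift-correction sketch. A real configuration management tool
# would handle many files, templates and services; this only shows the idea.
import hashlib
import shutil
from pathlib import Path

BASELINE = Path("/srv/config-baseline/sshd_config")   # desired state (hypothetical path)
DEPLOYED = Path("/etc/ssh/sshd_config")               # live config on the host

def file_digest(path: Path) -> str:
    """Return a SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def enforce_baseline() -> bool:
    """Overwrite the deployed file if it has drifted from the baseline.

    Returns True if a correction was made.
    """
    if file_digest(DEPLOYED) == file_digest(BASELINE):
        return False                      # no drift, nothing to do
    shutil.copy2(BASELINE, DEPLOYED)      # revert the ad hoc change
    return True

if __name__ == "__main__":
    if enforce_baseline():
        print("Drift detected and corrected; investigate who changed the file.")
    else:
        print("Configuration matches the baseline.")
```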

Manage workload through configuration

Immutable infrastructure is an IT infrastructure management concept in which the ops team replaces or redeploys components -- such as VMs or containers -- rather than fixing or updating them. IT operations trends, like immutable infrastructure, aim to prevent configuration drift and similar issues by replacing components before they change. The concept is ridiculous for hardware -- imagine buying a new car because you got a flat tire -- but in software-defined IT, immutable infrastructure works.
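To make the replace-not-patch idea concrete, here is a minimal, hypothetical Python sketch that shells out to the Docker CLI: rather than patching a running container, it builds a new versioned image and recreates the container from it. The image and container names are placeholders, and a real deployment would typically rely on an orchestrator's rolling updates instead of raw docker commands.

```python
# Illustrative "replace, don't patch" sketch using the Docker CLI via subprocess.
import subprocess

IMAGE = "example/web-app"        # hypothetical image name
CONTAINER = "web-app"            # hypothetical container name

def redeploy(version: str, build_context: str = ".") -> None:
    """Build a fresh, versioned image and replace the running container.

    The running container is never modified in place; it is removed and
    recreated from the new image, which is the essence of immutability.
    """
    tag = f"{IMAGE}:{version}"
    subprocess.run(["docker", "build", "-t", tag, build_context], check=True)
    subprocess.run(["docker", "rm", "-f", CONTAINER], check=False)  # ignore if absent
    subprocess.run(["docker", "run", "-d", "--name", CONTAINER, tag], check=True)

if __name__ == "__main__":
    redeploy("2.4.1")   # ship a new artifact instead of patching the old one
```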


With immutable and software-defined infrastructure, IT workload management has changed, but that doesn't mean it is easier. Ideally, infrastructure utilization and performance vary only in relation to the workload. With monitoring in place, operations should be able to predict load and highlight problems based on historical trends. However, as applications scale up, their effect on infrastructure is not always predictable, and utilization might become inefficient. Ops teams can work to make their systems more efficient, but that is a tricky balance: often, the performance bottleneck simply shifts to another part of the data center.
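As a rough illustration of trend-based alerting, the short Python sketch below flags the latest utilization sample when it deviates sharply from recent history. The thresholds and sample values are invented for the example; production monitoring platforms use far richer models.

```python
# Toy trend check: flag the latest utilization sample if it deviates sharply
# from the recent average. All numbers here are illustrative only.
from statistics import mean, stdev

def is_anomalous(samples: list[float], latest: float, sigmas: float = 3.0) -> bool:
    """Return True if `latest` is more than `sigmas` standard deviations
    above the mean of the historical samples."""
    if len(samples) < 2:
        return False                      # not enough history to judge
    baseline = mean(samples)
    spread = stdev(samples)
    return spread > 0 and latest > baseline + sigmas * spread

if __name__ == "__main__":
    cpu_history = [42.0, 45.5, 44.1, 43.8, 46.2, 44.9]   # % utilization, hypothetical
    print(is_anomalous(cpu_history, 47.0))   # False: within the normal band
    print(is_anomalous(cpu_history, 78.0))   # True: flag for investigation
```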

Redeploying infrastructure components can alleviate general issues with OSes or hardware, but teams face more complex problems that involve multiple systems working together. IT ops should monitor not just the components of a deployment, but how they connect to each other and whether inefficiency is introduced in those connections.
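One simple way to watch the connections themselves is to time how long it takes a component to reach its dependencies. The sketch below, with placeholder hostnames and ports, measures TCP connect latency to each dependency; in practice, a team would feed such measurements into its monitoring platform rather than print them.

```python
# Sketch of connection-level monitoring: time TCP connects between a service
# and its dependencies to spot inefficiency in the links, not just the nodes.
# Hostnames and ports below are placeholders for this example.
import socket
import time
from typing import Optional

DEPENDENCIES = {
    "database": ("db.internal.example", 5432),
    "cache": ("cache.internal.example", 6379),
}

def connect_latency_ms(host: str, port: int, timeout: float = 2.0) -> Optional[float]:
    """Return the TCP connect time in milliseconds, or None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None

if __name__ == "__main__":
    for name, (host, port) in DEPENDENCIES.items():
        latency = connect_latency_ms(host, port)
        status = f"{latency:.1f} ms" if latency is not None else "unreachable"
        print(f"{name}: {status}")
```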

Developers have an important role to play in IT workload management as well. When software is corrupted or issues arise, will deploying a new VM or container to replace the current one solve the problem? Or is the application design resistant to scaling? Is a piece of the application code broken, in which case the developers must address the issue and then deploy again? Not all workload issues are created equal, and support teams must know the cause before they can fix it. Ops can reset a server's configuration countless times, but that will not stop a runaway line of code.

This was last published in April 2018



Join the conversation

2 comments


Do emerging technologies, such as containers and microservices, make IT workload management easier or more difficult?
Brian, you make some great points here. Workload "drift" (a.k.a. changing workload behavior) is a major problem in virtualized environments. When one looks at the causes of outages and performance slowdowns, application coding and IT infrastructure are clearly key sources of many problems, but it's the unforeseen changes in how applications interact with the infrastructure that expose contention points which are very hard to detect without the right monitoring tools. VirtualWisdom from Virtual Instruments is one of the few products designed to see such workload behavior problems and enable IT managers to proactively avoid issues before users are affected.
