
Is IT workload management easier on software-defined infrastructure?

Our IT organization uses deployment automation, monitoring tools and virtualized resources. What do we need to do to manage IT workloads?

Application updates and changes in networking or storage infrastructure affect how data and processes interact with IT infrastructure. And in complex environments, any change -- even a seemingly unrelated one -- can ripple across the deployment. A combination of virtualization and automation enables IT teams to redeploy large swaths of infrastructure with minimal effort, but IT workload management is still an ongoing exercise in adaptation and troubleshooting.

In theory, software-defined infrastructure is more predictable than hardware-tied setups. In practice, IT pros can experience wildly different results. Sophisticated systems are sensitive to dependencies that might only become apparent in production. Changes can creep in if the tooling is not in place to prevent them. Without an organized, comprehensive deployment methodology, drift occurs during upgrades. Ops teams should evaluate IT workload management tools and procedures for modern infrastructure, ranging from infrastructure and application performance monitoring to scaling technology.

It's not always possible to pin down why something that shouldn't change does change. Use configuration management tools to enforce policy and override ad hoc changes made on individual systems, as in the sketch below. An immutable infrastructure setup is another way to maintain consistency with IT workloads.
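As a simple illustration of that enforce-and-override loop, here is a minimal Python sketch; the config file path, keys and values are hypothetical, and production tools such as Puppet, Chef or Ansible apply the same reconcile pattern at scale.

# Minimal drift-enforcement sketch: reconcile a config file against a
# declared desired state, overriding any ad hoc edits.
# The file path and keys below are hypothetical.
import json

DESIRED_STATE = {          # the policy the ops team declares
    "max_connections": "200",
    "log_level": "warn",
}

def enforce(path="app.conf"):
    # Parse simple key=value lines into a dict.
    try:
        with open(path) as f:
            actual = dict(line.strip().split("=", 1)
                          for line in f if "=" in line)
    except FileNotFoundError:
        actual = {}

    drift = {k: v for k, v in DESIRED_STATE.items() if actual.get(k) != v}
    if drift:
        print("Drift detected, reverting:", json.dumps(drift))
        actual.update(DESIRED_STATE)          # desired state always wins
        with open(path, "w") as f:
            f.writelines(f"{k}={v}\n" for k, v in actual.items())

if __name__ == "__main__":
    enforce()

Run on a schedule, a loop like this keeps individual systems from quietly diverging from policy between deployments.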

Manage workloads through configuration

Immutable infrastructure is an IT infrastructure management concept in which the ops team replaces or redeploys components -- such as VMs or containers -- rather than fixing or updating them. IT operations trends, like immutable infrastructure, aim to prevent configuration drift and similar issues by replacing components before they change. The concept sounds ridiculous for hardware -- imagine buying a new car because you got a flat tire -- but in software-defined IT, immutable infrastructure works.
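To make the replace-don't-repair pattern concrete, the short Python sketch below uses hypothetical deploy and retire helpers standing in for a real orchestrator API; the point is that a fix ships as a new instance, never as an in-place patch.

# Sketch of the immutable pattern: never patch a running instance;
# start a replacement from a new image, then retire the old one.
# deploy_instance and retire_instance are hypothetical stand-ins
# for a container platform or cloud SDK.
import uuid

def deploy_instance(image_version: str) -> str:
    instance_id = f"web-{image_version}-{uuid.uuid4().hex[:6]}"
    print(f"started {instance_id}")
    return instance_id

def retire_instance(instance_id: str) -> None:
    print(f"drained and terminated {instance_id}")

def roll_forward(current_id: str, new_version: str) -> str:
    # Replace, don't repair: the old instance is never modified in place.
    replacement = deploy_instance(new_version)
    retire_instance(current_id)
    return replacement

current = deploy_instance("v1.4.2")
current = roll_forward(current, "v1.4.3")   # the fix ships as a new image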


With immutable and software-defined infrastructure, IT workload management has changed, but that doesn't mean it is easier. Ideally, infrastructure utilization and performance vary only in relation to the workload. With monitoring in place, operations should be able to predict load and highlight problems based on historical trends. However, as applications scale up, their effect on infrastructure is not always predictable. Utilization might grow inefficient, with the deployment consuming more resources per unit of work. Ops can work to make its systems more efficient, but that can be a tricky balance: often, the performance bottleneck simply shifts to another part of the data center.
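As one way to spot that kind of divergence, the Python sketch below flags utilization samples that break from a historical baseline. The sample data and the three-sigma threshold are illustrative assumptions; a real monitoring platform would pull these values from its own time-series store.

# Sketch: flag utilization samples that deviate from the historical trend.
from statistics import mean, stdev

history = [41, 43, 40, 44, 42, 45, 43, 41, 44, 42]  # % CPU, hypothetical
baseline, spread = mean(history), stdev(history)

def check(sample: float) -> None:
    if abs(sample - baseline) > 3 * spread:
        print(f"{sample}% deviates from baseline {baseline:.1f}% -- investigate")
    else:
        print(f"{sample}% is within the expected range")

check(44)   # normal variation
check(78)   # likely a scaling or efficiency problem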

Redeploying infrastructure components can alleviate general issues with OSes or hardware, but teams face more complex problems that involve multiple systems working together. IT ops should monitor not just the components of a deployment, but how they connect to each other and whether inefficiency creeps into those connections.
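A basic version of that connection-level view is a timed probe across the hop between two components, as in the sketch below. The endpoint URL and latency budget are hypothetical; in practice, a distributed tracing tool captures these spans automatically.

# Sketch: time the hop between two components, not just the components
# themselves. The URL and threshold are hypothetical.
import time
import urllib.request

def probe(url: str, budget_ms: float = 250.0) -> None:
    start = time.perf_counter()
    try:
        urllib.request.urlopen(url, timeout=5).read(1)
        elapsed_ms = (time.perf_counter() - start) * 1000
        status = "OK" if elapsed_ms <= budget_ms else "SLOW LINK"
        print(f"{url}: {elapsed_ms:.0f} ms [{status}]")
    except OSError as err:
        print(f"{url}: unreachable ({err})")

probe("http://app-tier.internal/health")   # hypothetical service URL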

Developers have an important role to play in IT workload management as well. When software is corrupted or issues arise, will deploying a new VM or container to replace the current one solve the issue? Or is the application design resistant to scaling? Is a piece of the application code broken, in which case the developers must address the issue and then deploy again? Not all workload issues are created equal, and support teams must know the cause before they can fix it. Ops can reset a server's configuration countless times, but that will not stop a runaway line of code.
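One way to formalize that triage is a simple mapping from suspected cause to remediation, as in the hypothetical sketch below. The categories and actions are illustrative, but they capture why a redeploy only helps when the fault lives in the instance rather than in the code.

# Sketch: route a workload issue to the right fix based on its cause.
# Categories and actions are illustrative assumptions.
REMEDIATION = {
    "config_drift": "redeploy a fresh instance from the known-good image",
    "resource_exhaustion": "scale out, then revisit capacity planning",
    "application_bug": "escalate to developers; redeploying won't help",
}

def triage(cause: str) -> str:
    return REMEDIATION.get(cause, "gather more diagnostics before acting")

for cause in ("config_drift", "application_bug", "unknown"):
    print(f"{cause}: {triage(cause)}")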
