Let application architecture and design take the stage in modern IT

As the software-defined data center and DevOps methodologies hog the limelight, don't let application infrastructure design fall into the shadows.

Enterprise IT consists of a skilled team of sys admins, network engineers and others -- all working to keep business-critical systems up and running. These efforts keep businesses agile and competitive, but they often pull focus away from application architecture and design.

Modern systems are complex; the premise that software or applications will just work often means that little to no attention is paid to architecture and design. This lack of infrastructure design and engineering is one of the problems with the modern software-focused data center.

Application architecture and design have been pushed out of IT's purview as the software-defined data center and DevOps take center stage. This presents a real problem for application infrastructure design. Before virtualization took over the data center, IT teams had to design systems for the business because the hardware they ran on was expensive. But because VMs and similar resources appear to be free, many administrators shifted focus from design to the deployment and operational phases.

In the early days of virtualization, resources were difficult to work with and expensive. An IT administrator would ask about an application's requirements and specifications to determine which resources were necessary, and an application owner might over-allocate resources, such as storage space, by asking for more than the application needed. That has changed; administrators now are simply expected to keep the application running.

With virtualization, the turnaround time for server application delivery went from weeks to minutes. The time needed to evaluate new application architecture and design simply took a back seat -- and that causes problems that aren't limited to application setup. If IT staff can't do the work on the front end, it leads to wasted resources on the back end. If research into what the application needs in the real world, under true operating conditions, doesn't occur in testing -- or at least in simulations -- the business gambles on what should work rather than what will work based on hard numbers and performance data.

This can lead IT teams to under-allocate resources for an application or over-allocate them, which creates waste. Those wasted resources tend to come up in discussions later, when the cost of replacing an ever-expanding application infrastructure causes sticker shock. The result is a rush to clean up and recover resources after the fact, which is time-consuming and can cause an outage if done incorrectly.
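One way to ground the over- and under-allocation discussion in hard numbers is to compare what each VM was allocated against its observed peak utilization. The sketch below is a hypothetical illustration -- the VM names, vCPU figures and thresholds are assumptions, not data from any real environment:

```python
# Hypothetical right-sizing check: compare each VM's allocated vCPUs
# against its observed peak usage to flag over- and under-allocation.
# VM names, numbers and thresholds below are illustrative assumptions.

def classify_allocation(allocated, peak_used, low=0.4, high=0.9):
    """Return 'over-allocated', 'under-allocated' or 'right-sized'."""
    ratio = peak_used / allocated
    if ratio < low:
        return "over-allocated"   # paying for capacity the app never uses
    if ratio > high:
        return "under-allocated"  # little headroom; risk of contention
    return "right-sized"

# (allocated vCPUs at deployment, peak vCPUs observed in monitoring)
vms = {
    "app01": (8, 1.6),   # peak 20% of allocation
    "db01":  (4, 3.8),   # peak 95% of allocation
    "web01": (2, 1.2),   # peak 60% of allocation
}

for name, (alloc, peak) in vms.items():
    print(f"{name}: {classify_allocation(alloc, peak)}")
```

Run against real monitoring data instead of guesses, a report like this turns the cleanup argument from opinion into evidence before the sticker shock arrives.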

Plan now to save yourself headaches later

Build in a specific period of time between the request for new infrastructure and its deployment -- and involve management. The request process should accommodate application requirement planning, evaluation and assessment, and it should ask why the infrastructure is needed and how it should be designed. This is not an approval process; it's a design process that removes potential cleanup from the back end.

Keep in mind: This will increase lead times for resources. The business side must accept and champion the change; otherwise, people will continue to request resources on tight schedules that skip the important application architecture and design process -- and that hurts the business in the long run.
