Virtualization has made application hosting and server utilization easier than ever, but the simplicity ends there. A software-defined data center environment will pick up where virtualization left off.
IT personnel no longer need to justify purchasing new bare metal servers when they spin up a new service or application within the data center network. However, adding new services to a network tends to ripple into other areas.
How we provision new applications today
The traditional data center is siloed by function into computing, networking and storage.
One organization may have multiple Web servers, a Simple Mail Transfer Protocol (SMTP) server, and a series of servers for the domain name system, each within the computing silo. Teams of system administrators who specialize in different types of servers work in this silo.
The storage silo has a considerable amount of overlap with the computing silo. For example, a Web server may process a large volume of monetary transactions daily, but rather than keep the purchasing information and other necessary files on the Web server itself, the organization offloads that data to a storage device the Web server queries as needed. This saves space on the server's hard drive.
Networking configuration, deployment and maintenance involve not only the routing and switching infrastructure, but also the cabling and rack placement inherent in the networking discipline. All of this falls to the networking team, another silo in the data center.
Today's siloed data center paradigm requires a lot of manual processes. When, for example, the development team chooses to install a new Web application, the networking silo still must account for a new Web server, even if it's virtualized and deployed on an existing host machine. Furthermore, all logs associated with the new application must be stored somewhere, most likely on a storage device, which involves the storage team. How the logs are sent to the storage device will necessitate further involvement from the networking team.
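The cross-team handoffs described above can be sketched as a simple task list. The team and task names below are illustrative only, not drawn from any real ticketing system:

```python
# Sketch of the manual, cross-silo handoffs involved in deploying one
# new Web application in a traditional data center. Team and task names
# are hypothetical examples.
provisioning_tasks = [
    ("systems",    "Spin up a virtual Web server on an existing host"),
    ("networking", "Assign IP addressing and firewall rules for the new server"),
    ("storage",    "Allocate a volume to hold the application's logs"),
    ("networking", "Open a path from the Web server to the log storage device"),
]

# Each distinct team is a separate handoff -- and a separate delay.
teams_involved = {team for team, _ in provisioning_tasks}
print(sorted(teams_involved))  # ['networking', 'storage', 'systems']
```

Note that the networking team appears twice: once for the server itself and again for the log traffic, which is exactly the kind of back-and-forth that makes siloed provisioning slow.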
Provisioning in a software-defined environment
Software-defined data center (SDDC) products abstract away many of these layers of complexity. A concept still in its relative infancy, the software-defined data center lets administrators view all of that complexity from a single interface. For example, Cisco and Hewlett-Packard Co. have both developed products that, theoretically, can be dropped into an existing data center to create an overall view of exactly what exists from a networking, software and storage perspective. This is akin to a snapshot of the data center environment that can be taken at any given time.
To provision the development team's new Web server in a software-defined environment, the administrator would log into the SDDC tool's interface, deploy the requested server and let the tool automatically handle all of the back-end work. This is a profound development that will have far-reaching implications if existing vendor products prove viable.
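As a sketch only, the single-interface workflow might look like the following. The `SDDCClient` class, its method and its parameters are hypothetical stand-ins, not any vendor's actual API:

```python
# Hypothetical illustration of SDDC-style provisioning: the administrator
# declares what is wanted, and the platform handles the compute, network
# and storage steps behind one interface. No real vendor API is shown.

class SDDCClient:
    """Stand-in for a software-defined data center control interface."""

    def provision_web_server(self, name, cpu=2, memory_gb=4, log_volume_gb=50):
        # In a real product, each step would be an automated back-end call
        # rather than a ticket to a separate team.
        return [
            f"create VM '{name}' ({cpu} vCPU, {memory_gb} GB RAM)",
            f"attach network and assign firewall rules for '{name}'",
            f"allocate {log_volume_gb} GB log volume and mount it on '{name}'",
        ]

client = SDDCClient()
for step in client.provision_web_server("dev-web-01"):
    print(step)
```

The point of the sketch is the shape of the workflow: one request from one interface replaces three separate handoffs to the computing, networking and storage silos.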
Am I out of a job?
There's a legitimate fear that dismantling data center silos will make many existing jobs obsolete, but many IT leaders are unsure how accurate and reliable the new software-defined options are. Many administrators would prefer to wait for the technology to mature before fully committing to it in their data centers.
Data center personnel should stay cognizant of the latest software-defined product developments, and if possible, attempt to test vendor offerings within their current environment.
If software-defined products prove suitable for the data center and reduce your team's manual workload, lobby for investing in SDDC. Fighting against back-end automation doesn't save data center personnel's jobs. Investigating new ways to accomplish tasks keeps staff up with the times rather than victims of them.