Is there a real answer for how "software" can define "data center" underneath the software-defined hype?
Vendors bombard IT pros with the claim that whatever they are selling is a "software-defined solution." Each solution purports to define what "software-defined" means for the category that vendor serves. It's all very clever individually, but it doesn't make much sense collectively.
We suffered through something similar with cloud washing, in which every bit of IT magically became critical for every cloud adoption journey. But at least in all that cloudiness, there was some truth. We all at least think we know what cloud means. The future cloud is likely a hybrid in which most IT solutions still play a role. But this rush to claim the software-defined high ground is turning increasingly bizarre. Even though VMware seems to be leading the pack with their Software-Defined Data Center (SDDC) concept, no one seems to agree on what software-defined actually means. The term is in danger of becoming meaningless.
Defining 'software-defined'
Before the phrase gets discredited completely, let's look at what it could mean, starting with software-defined networking (SDN). In the networking space, the fundamental shift that SDN brought was to enable IT not only to dynamically and programmatically define and shape logical network layers, but also to manipulate the underlying physical network by remotely controlling switches (and other components).
Once infrastructure becomes remotely programmable, and thus essentially definable through software, it gains a new dynamic agility. Networking changes no longer bring whole systems to a grinding halt while staff manually move cables and reconfigure switches and host bus adapters one by one. Instead of an all-hands-on-deck weekend effort to migrate from static state A to static state B, SDN lets networks be redefined remotely, on the fly.
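To make the idea concrete, the kind of on-the-fly redefinition described above usually boils down to pushing declarative rules to a controller over a REST API, which then reprograms the physical switches. The sketch below is purely illustrative: the controller URL, flow-rule fields and endpoint are invented for this example, not any specific vendor's API.

```python
import json
import urllib.request

# Hypothetical SDN controller endpoint -- not any real product's API.
CONTROLLER = "http://sdn-controller.example:8181/flows"

def make_flow_rule(switch_id, src_vlan, dst_port, priority=100):
    """Build a declarative flow rule: traffic on src_vlan at switch_id
    gets forwarded out dst_port. All field names are illustrative."""
    return {
        "switch": switch_id,
        "match": {"vlan": src_vlan},
        "action": {"forward": dst_port},
        "priority": priority,
    }

def push_rule(rule):
    """POST the rule to the controller; the controller reprograms the
    physical switch, so no one touches cables or console ports."""
    req = urllib.request.Request(
        CONTROLLER,
        data=json.dumps(rule).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Redefine part of the network in one call instead of a weekend migration:
rule = make_flow_rule("core-sw-01", src_vlan=42, dst_port=7)
print(json.dumps(rule))
```

The point isn't the particular schema; it's that the change is expressed as data sent to software, so any authorized program, not just a human at a console, can reshape the network.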
This remote programmability brings third-party intelligence and optimization into the picture (a potential use for all that machine-generated big data you're piling up). Virtualization was a good first step toward data center agility, abstracting physical infrastructure into an aggregate resource pool from which workloads carve out logical chunks of capacity. With programmable capabilities that might reach into the resource's software or even physical instantiation, resources can be elastically configured in important architectural dimensions including connectivity, security, performance and data protection schemes.
Software-defined, or software-definable?
Many features that used to come embedded in hardware and firmware are now available as software, often running in virtual machine instances as virtual appliances.
Software emulations are common for internal test and dev, but with abundant cheap CPU and memory available from commodity servers and increasingly effective hypervisors, even software-based systems have become performant, reliable and cost-efficient enough for production usage. For example, traditional storage arrays previously sold only as hardware kits were really just software programs running on OEM generic servers. With a bit of repackaging, they now also ship as virtualized arrays. Of course, products written specifically for the virtual environment are designed to perform better, as HP's StoreVirtual VSA, EMC's ScaleIO, VMware's Virtual SAN and others are proving.
VMware's take on software-defined tends to mean that all that critical non-server IT infrastructure (i.e., network, storage and security) could be implemented in software, with lots of ensuing benefits to efficiency, automation, agility and service quality. In a popular version of SDDC, a complete data center could be spun up with all necessary resources implemented virtually in software, hosted solely in a virtualized compute environment.
Simply being written in software shouldn't qualify as "software-defined"; the term should also apply to the overall resource served (e.g., networking or storage). Just as there are network switches for SDN, appropriately designed hardware and firmware solutions should exist for software-definable infrastructure. In other words, a well-designed physically assembled pool of modular (possibly proprietary and/or highly specialized) resource units could be elastically provisioned, dynamically partitioned and configured programmatically.
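A toy model may help picture what "elastically provisioned, dynamically partitioned and configured programmatically" means in practice. The class below is a deliberately simplified sketch (the resource names and methods are invented for illustration, not drawn from any product):

```python
class ResourcePool:
    """Toy model of a software-definable pool of modular resource units
    that can be carved up and re-partitioned entirely through code."""

    def __init__(self, total_units):
        self.free = total_units   # unallocated physical capacity
        self.partitions = {}      # logical partitions carved from the pool

    def provision(self, name, units):
        """Carve a logical partition out of the physical pool."""
        if units > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= units
        self.partitions[name] = units

    def resize(self, name, units):
        """Elastically grow or shrink a partition without downtime."""
        delta = units - self.partitions[name]
        if delta > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= delta
        self.partitions[name] = units

    def release(self, name):
        """Return a partition's capacity to the pool."""
        self.free += self.partitions.pop(name)

pool = ResourcePool(total_units=100)
pool.provision("analytics-tier", 40)
pool.resize("analytics-tier", 60)   # grow on demand
pool.release("analytics-tier")      # capacity returns to the pool
```

Whether the units underneath are commodity servers or highly specialized proprietary modules is irrelevant to the caller; what makes the pool software-definable is that every operation is a programmatic call.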
Along these lines, VMware appears to be aiming at a more nuanced software-defined future in which traditional resources would just be fronted by a software-defined management control plane (like the anticipated Virtual Volumes for legacy storage arrays).
In contrast, there are software products that don't have the open, remote programmability to contribute to a software-defined data center vision. If a given solution can't be composed into a larger software-defined ecosystem, or optimized by a third-party intelligence with a broader perspective, then it really is more of the same old stuff.
Software-defined, not converged, scattered, smothered and chopped
Interestingly, many vendors of software-implemented products admit to shipping more hardware-based appliance versions than straight software licenses. Converged versions are simply pre-installed on certified hardware, much like the traditional hardware solutions they replace. Many folks still just want to open a box and plug in something that already works rather than mess with integration. Or maybe, for their money, they still want racks of glowing vendor-specific faceplate bezels to illuminate data center tours.

Fundamentally, software-defined solutions need to be remotely and dynamically programmable. That raises the question: Who (or what) will do the programming? It could be convenient for IT to provision a whole data center's worth of infrastructure on the fly, but how many on-demand production data centers do we expect an IT group to create in a month? The bigger, ongoing payback will be in dynamically optimizing the infrastructure that each application requires: The eventual drivers in our future software-defined world will be the applications.
In fact, DevOps may just be a passing phase until infrastructure provisioning, configuration and optimization are controlled directly by applications. This would complete a circle started when code was first tightly coupled to infrastructure, then slowly abstracted away through higher-level languages, operating systems and virtualization. Software-defined might bring it all back around through application infrastructure-awareness and self-directed management. The software in "software-defined" may be your applications taking a direct vested interest in how they are hosted.
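What would an application "taking a direct vested interest in how it is hosted" look like? One plausible shape is the application shipping a declarative statement of its own infrastructure requirements, which software-defined resources then try to satisfy. The sketch below is hypothetical; there is no standard schema, and every field name here is invented for illustration.

```python
# A toy sketch of an application declaring its own hosting requirements.
# All field names are hypothetical -- no standard schema exists.
app_policy = {
    "app": "order-processing",
    "compute": {"vcpus": 8, "memory_gb": 32},
    "storage": {"capacity_gb": 500, "min_iops": 5000, "protection": "replica-2"},
    "network": {"min_bandwidth_gbps": 10, "isolation": "dedicated-vlan"},
}

def unmet_requirements(policy, offered):
    """Compare the app's declared needs against what the software-defined
    infrastructure currently offers; return the requirements that fall short."""
    gaps = []
    for domain, needs in policy.items():
        if domain == "app":
            continue
        for key, want in needs.items():
            have = offered.get(domain, {}).get(key)
            if isinstance(want, (int, float)):
                if have is None or have < want:
                    gaps.append(f"{domain}.{key}")
            elif have != want:
                gaps.append(f"{domain}.{key}")
    return gaps

offered = {
    "compute": {"vcpus": 16, "memory_gb": 64},
    "storage": {"capacity_gb": 500, "min_iops": 3000, "protection": "replica-2"},
    "network": {"min_bandwidth_gbps": 10, "isolation": "dedicated-vlan"},
}
print(unmet_requirements(app_policy, offered))  # storage falls short on IOPS
```

In a fully software-defined world, the gaps returned here wouldn't go into a ticket queue; the application (or a manager acting on its behalf) would call provisioning APIs to close them directly.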
What it means in today's data center
I've heard from various folks that software-defined solutions need to have sets of often conflicting qualities: simplicity (do one thing) yet agility (do many things); open APIs (on a separate control plane) for external management yet complete self-management; management through set policies; getting cheaper as they scale; saving money through elasticity; stretching investments through maximum utilization; accelerating opportunities through maximizing utility; being platform-agnostic yet built explicitly on commodity hardware; embedding specialized capabilities; coming pre-converged; looking homogeneous; deploying on bare metal; being cloud-friendly; being customizable; and a few other naughty-sounding words.
It is all terribly confusing. The bottom line: If you consistently work to increase the automatability of your infrastructure, you should be on the right track to redefine data center resources, no matter how software-defined is eventually defined.
About the author
Mike Matchett is a senior analyst and consultant at Taneja Group.