Moore's Law and advances in software technology have a way of making what was once unthinkable -- a database floating on a virtual machine -- into a reality.
Enterprise data centers routinely carve a single multicore server into 20, 40 or more VMs -- a density that reshapes infrastructure design and IT application management decisions for legacy software. Resource-intensive, mission-critical workloads can run on VMs, thanks to this high-performance hardware and advanced virtualization management systems. Many organizations now run the vast majority of their workloads virtualized.
"The market has matured rapidly over the last few years, with many organizations having server virtualization rates that exceed 75%, illustrating the high level of penetration," according to Michael Warrilow, research director at Gartner. Core business apps -- enterprise resource management, customer relationship management, financial and human resources systems, structured data management and analytics software -- are workloads most likely to be deployed on inherently virtualized hyper-converged systems, found IDC analysts in a storage user demand survey.
Administrators can simplify and automate IT application management via portable, reusable VM images and orchestration systems, such as VMware vMotion, Citrix XenMotion, Scalr and Red Hat CloudForms. Cloudifying enterprise data centers breathes new life into these old but requisite applications, and it saves the IT department money.
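As a rough illustration of the placement logic such orchestration systems automate, consider a minimal scheduler sketch. The host names and capacity figures are hypothetical, and real platforms weigh memory, storage, affinity rules and licensing constraints as well -- this only shows the basic "find a host with spare capacity" decision:

```python
# Hypothetical sketch of a VM placement decision, the kind of choice an
# orchestration platform automates. Host names and capacities are invented.

def pick_target_host(hosts, vm_cpu_demand):
    """Return the host with the most spare CPU that can fit the VM, or None."""
    candidates = [h for h in hosts if h["cpu_free"] >= vm_cpu_demand]
    if not candidates:
        return None  # no host can accept the VM; a real system would alert
    return max(candidates, key=lambda h: h["cpu_free"])

hosts = [
    {"name": "esx-01", "cpu_free": 4},
    {"name": "esx-02", "cpu_free": 12},
    {"name": "esx-03", "cpu_free": 8},
]

target = pick_target_host(hosts, vm_cpu_demand=6)
print(target["name"])  # esx-02: the host with the most spare capacity that fits
```

In production, this decision runs continuously against live telemetry, which is why portable VM images matter: the same image can land on whichever host the scheduler selects.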
Lower costs from easier operations
Sharing underutilized hardware across multiple applications reduces Capex investments -- a core tenet of virtualization -- while virtualization management software increases IT automation and decreases Opex investments. Organizations can save from 32% to 49% in Capex by moving from a traditional hardware-dependent data center to a fully virtualized and software automated one, according to a study by the Taneja Group.
Operational savings are harder to quantify, because they depend on an organization's existing processes, staff skills and application mix. However, consolidating system management and automating routine admin processes yields more efficient IT operations. Virtualizing legacy applications puts them on the same central management platform that IT uses for other applications. It provides a single console for monitoring resource usage, error logs, data backup and replication processes and security compliance.
Case studies show that organizations moving to cloud infrastructure -- a more elastic version of virtualization -- see double-digit decreases in Opex. For example, when Florida Crystals moved its SAP systems to a public cloud, it improved performance by an average of 30%, while saving 34% on compute costs and 47% on storage and backup. In another study, a small manufacturer expects this change in IT application management strategy to save up to 50% of overall IT administrative costs, with consolidated resource pools, reduced administrative and maintenance costs, and improved server and application availability.
Will it virtualize?
The hardware abstraction and system isolation provided by modern hypervisors mean that there is seldom any technical reason an application can't be virtualized. Public cloud offerings such as Amazon Web Services, Microsoft Azure and Google Cloud Platform all can run enterprise databases such as Oracle, SQL Server and SAP HANA on multi-tenant infrastructure.
The impediment to migrating legacy applications is often software support and licensing. For example, Oracle notoriously had an expensive and arcane licensing model that made it infeasible or extremely costly to operate the company's databases on VMs. However, as virtualization and public cloud matured, the company adapted to customer demands. Oracle fully supports the virtualization of Oracle databases on VMware's ESXi, for example, and VMware Support accepts accountability for any Oracle-related issues on the virtualization layer.
Similarly, the licensing models for most enterprise software offerings now recognize virtualized deployments and allow for workload migration within an infrastructure, as long as the user is licensed for each of the nodes in a particular cluster. Check each end-user licensing agreement for details, since some applications are more flexible and may allow movement to any system in the IT estate as long as the total number of running instances doesn't exceed the licensed amount. For example, Microsoft's Azure license rules now allow customers with Software Assurance to move workloads anywhere within a private cloud or to public Azure instances without purchasing new licenses or incurring a migration fee.
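The per-cluster rule described above can be sketched as a simple compliance check. The node names, entitlement counts and the function itself are hypothetical -- real license terms vary by vendor and must be read from the actual agreement:

```python
# Hypothetical sketch of a license-mobility check: a migration is allowed
# only if every node in the destination cluster is licensed and the total
# running instance count stays within the entitlement. All names and
# counts here are invented for illustration.

def migration_allowed(dest_cluster_nodes, licensed_nodes,
                      running_instances, licensed_instances):
    nodes_covered = set(dest_cluster_nodes) <= set(licensed_nodes)
    within_count = running_instances + 1 <= licensed_instances
    return nodes_covered and within_count

licensed_nodes = {"node-a", "node-b", "node-c"}

# Destination cluster fully licensed, instance count within entitlement:
print(migration_allowed(["node-a", "node-b"], licensed_nodes, 3, 5))  # True
# Destination cluster includes an unlicensed node:
print(migration_allowed(["node-a", "node-d"], licensed_nodes, 3, 5))  # False
```

The more flexible agreements mentioned above would drop the `nodes_covered` test and enforce only the instance-count limit across the whole IT estate.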
Not so fast
There remain certain IT application management scenarios where dedicated servers are preferred:
- Custom applications that require direct access to an accelerator card, I/O interface, memory or other custom hardware adapter.
- CPU- or I/O-intensive workloads that need predictable performance and would consume nearly all the resources of a typical server under normal conditions.
- Latency-sensitive applications such as those used for high-frequency trading, gaming, media processing and streaming.
- Legacy applications that require a CPU instruction set or an OS not supported by x86 hypervisors.
Even in these cases, organizations are wise to take a page from cloud providers, such as SoftLayer and CenturyLink, which offer bare-metal servers by integrating server provisioning with an infrastructure management system. This enables the IT team to automate deployment and decommissioning, enable rapid, self-service provisioning and consolidate system monitoring.
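The lifecycle such a bare-metal management system automates can be sketched as a simple state machine. The class, host names and states below are invented for illustration; real implementations drive out-of-band interfaces such as IPMI or Redfish and record state in an inventory database rather than in memory:

```python
# Hypothetical sketch of a bare-metal lifecycle: request -> provision ->
# in use -> decommission. All names and states are invented; this only
# illustrates the self-service workflow described above.

class BareMetalPool:
    def __init__(self, hostnames):
        # Track each physical server's state in a simple inventory.
        self.state = {h: "available" for h in hostnames}

    def provision(self, owner):
        """Self-service: hand the requester the first free machine."""
        for host, st in self.state.items():
            if st == "available":
                self.state[host] = f"in-use:{owner}"
                return host
        return None  # pool exhausted; a real system would queue or expand

    def decommission(self, host):
        # Real flows would also wipe disks and reimage before reuse.
        self.state[host] = "available"

pool = BareMetalPool(["bm-01", "bm-02"])
h = pool.provision("analytics-team")
print(h, pool.state[h])  # bm-01 in-use:analytics-team
pool.decommission(h)
print(pool.state[h])     # available
```

The point is not the data structure but the workflow: once provisioning and decommissioning are API calls, dedicated servers gain much of the operational agility of VMs.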