
Scale-out architecture and new data protection capabilities in 2016

What are the next big things for the data center in 2016? Applications will chart the course to better data protection and demand more resources from scale-out architecture.


January was a time to make obvious predictions and short-lived resolutions. Now is the time for intelligent analysis of the shark-infested waters of high tech. The new year is an auspicious time for new startups to come out of the shadows. But what is just shiny and new, and what will really impact data centers?

From application-focused resource management to scale-out architecture, here are a few emerging trends that will surely impact the data center.

Data protection awakens

It's the return (or long-awaited chapter 7) of data protection software that can reduce or eliminate most business continuity risk. In this time of petabyte-scale data lakes, proliferating databases and global 24x7 operations, it's no longer sufficient to hold onto the full-backup-to-tape routine. The backup window is shrinking -- even disappearing -- thanks to applications that can't take much, or any, time off. Data is getting too big to dump in one massive image and restore from traditional backups. Go ahead, try to restore a Hadoop cluster from backups. In 2016, look for new data protection capabilities like those offered by Talena that directly address big data stores, array-based features like HPE 3PAR's "flat backup" that steers snapshots directly into StoreOnce, and Oracle's Zero Data Loss Recovery Appliance, which makes protecting all those big, 24x7 Oracle databases dead simple.
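To make the shift concrete, here is a minimal Python sketch -- purely illustrative, not any vendor's actual mechanism -- contrasting a full-image dump with a changed-block, "incremental forever" approach, which is the general idea behind steering snapshots into a backup target instead of re-shipping everything:

```python
# Illustrative sketch only -- not any vendor's API. It contrasts a full-image
# dump with a changed-block approach that ships only what changed since the
# last protection run.
import hashlib

BLOCK_SIZE = 4  # tiny blocks so the example stays readable

def split_blocks(data: bytes):
    """Carve a byte string into fixed-size blocks."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def full_backup(data: bytes):
    """Traditional approach: copy every block, every time."""
    return split_blocks(data)

def incremental_backup(data: bytes, last_hashes: dict):
    """Copy only blocks whose content changed since the last run."""
    changed = {}
    for idx, block in enumerate(split_blocks(data)):
        digest = hashlib.sha256(block).hexdigest()
        if last_hashes.get(idx) != digest:
            changed[idx] = block       # ship just this block
            last_hashes[idx] = digest  # remember it for next time
    return changed

if __name__ == "__main__":
    volume = b"ACCOUNTS2015Q4DATA"
    seen = {}
    print("first run ships", len(incremental_backup(volume, seen)), "blocks")
    volume = b"ACCOUNTS2016Q1DATA"  # a small in-place update
    print("next run ships", len(incremental_backup(volume, seen)), "blocks")
    print("a full dump would ship", len(full_backup(volume)), "blocks every time")
```

At petabyte scale, the second run's handful of changed blocks is the difference between a backup window that fits and one that never ends.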

Scale-out convergence

Scale-up monolithic architectures are so 2005. Now we have distributed, scale-out, parallel networked designs for everything, including servers (cloud/virtualization/big data), processes (container-based microservices), storage (billion-file object stores) and even memory grids. The problem is that most of our applications can't take advantage of the resources of scale-out architecture; they tend to like simple, centralized resources. The good news is that there are increasing layers of support that can map legacy applications onto newer IT architectures. For example, parallel file systems such as Lustre and GPFS are maturing into enterprise data center storage options (e.g. IBM Spectrum Scale). Containers are increasingly able to host large and stateful applications, including databases, as well as software-defined resources like storage. New object stores capable of holding billions of objects (e.g. Qumulo) blur the line between what used to be tier 2 archives and today's data-aware, almost tier 1 object storage.
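The mechanics behind that kind of scale-out placement are worth a quick look. Here's a generic consistent-hashing sketch in Python -- a common technique in distributed stores, not how Qumulo or any particular product actually implements placement -- showing how objects spread across nodes and how little data has to move when you add one:

```python
# Generic consistent-hashing sketch: objects hash onto a ring of nodes, so
# adding a node relocates only a small fraction of keys instead of
# reshuffling the whole namespace.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=100):
        self._ring = []  # sorted list of (hash, node) points
        for node in nodes:
            self.add_node(node, vnodes)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str, vnodes=100):
        # Each node gets many virtual points on the ring for smoother balance.
        for i in range(vnodes):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def locate(self, object_key: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's hash.
        h = self._hash(object_key)
        idx = bisect.bisect(self._ring, (h, ""))
        return self._ring[idx % len(self._ring)][1]

if __name__ == "__main__":
    ring = HashRing(["node-a", "node-b", "node-c"])
    keys = [f"object-{i}" for i in range(10_000)]
    before = {k: ring.locate(k) for k in keys}
    ring.add_node("node-d")  # scale out by one node
    moved = sum(1 for k in keys if ring.locate(k) != before[k])
    print(f"{moved / len(keys):.0%} of objects moved after adding a node")
```

Roughly a quarter of the keys relocate when a fourth node joins, which is what lets these systems grow to billions of objects without forklift rebalancing.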

This shift to scale-out architecture isn't necessarily a matter of literal hardware, but it should be considered at every level of your stack. Consider the mainframe. Today's mainframe is technically an awesomely large hyper-converged container host -- the biggest iron running massive numbers of the littlest workloads. In this new world, think about which layers are best optimized as aggregated pools, which as a centralized, consolidated resource, and which as truly distributed "internet of things"-style nodes or hybrid cloud mash-ups. Only then will it be possible to make the underlying infrastructure completely transparent to applications and allow IT to optimize application hosting dynamically.

Applications get to drive

Whether you believe in human DevOps, autonomic IT infrastructure or the invisible intelligence inside your cloud provider, applications need to dictate the resources and QoS they require -- and they will do so more dynamically. Infrastructure is already getting smarter about responding to dynamic changes in per-application QoS. VMware provides some great examples of dynamically flexible infrastructure with software-defined resources like NSX networking and Virtual SAN storage, which apply QoS policies on a per-VM basis.

Containers also promise to define the resources that each component of an application dynamically requires. Of course, what's still missing here is the meta-intelligence to help applications assure their own service levels. Look out for big data-based management systems that combine infrastructure and application performance management views for active operational guidance.
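What "applications get to drive" might look like in practice: a hypothetical Python sketch in which each component declares its resource and QoS needs and a simple placement routine matches them against pools. The names and policy fields are illustrative only, not any orchestrator's or VMware's real API:

```python
# Hypothetical sketch of application-declared resources and QoS. The point is
# that the application states what it needs and the platform decides where
# (and whether) it can run.
from dataclasses import dataclass

@dataclass
class QoSRequest:
    component: str
    cpu_cores: float
    memory_gb: float
    min_iops: int

@dataclass
class ResourcePool:
    name: str
    free_cores: float
    free_memory_gb: float
    iops_ceiling: int

    def can_host(self, req: QoSRequest) -> bool:
        return (self.free_cores >= req.cpu_cores
                and self.free_memory_gb >= req.memory_gb
                and self.iops_ceiling >= req.min_iops)

def place(requests, pools):
    """Greedy placement: each component lands on the first pool that fits."""
    placements = {}
    for req in requests:
        pool = next((p for p in pools if p.can_host(req)), None)
        if pool is None:
            placements[req.component] = "unschedulable"
            continue
        pool.free_cores -= req.cpu_cores
        pool.free_memory_gb -= req.memory_gb
        placements[req.component] = pool.name
    return placements

if __name__ == "__main__":
    app = [QoSRequest("web", 2, 4, 500),
           QoSRequest("db", 8, 32, 20_000)]
    pools = [ResourcePool("general", 16, 64, 5_000),
             ResourcePool("all-flash", 32, 256, 100_000)]
    print(place(app, pools))  # db's IOPS floor steers it to the all-flash pool
```

The missing meta-intelligence is whatever watches actual service levels and rewrites those declared requests on the fly.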

Lifetime data value curve lengthens

The value of data used to drop off as it aged, but with active archives and big data, most data offers value long past its active operational use. Data lakes, built-in analytics and cost-effective scale-out storage are changing the fundamental data value curve. This is the year to look for the hidden value in your data. Don't forget to consider new internal (e.g. internet of things) and external (partner, supply chain, third-party) data sources in combination.

Each of these trends deserves some deeper examination -- and an open mind. While it's tempting to stick with what we know, it pays to stay open to new ideas. Some folks might question how smart our infrastructure needs to be, but the real question is how smart it can be.

