

Will container virtualization be the biggest data center trend of 2016?

With the rapid maturation of container virtualization, IT administrators should be prepared to adopt and manage containers earlier than they expected.

This article can also be found in the Premium Editorial Download: Modern Infrastructure: The 2016 MI Impact Awards.

It's hard to predict what the biggest thing to hit the data center will be in 2016. Big data? Hyper-convergence? Hybrid cloud? I've decided that this is the year containers will arrive in a big way -- much earlier and faster than many expect, catching many IT shops by surprise.

Unlike technologies such as big data that require vision and forward investment, containers are a natural next step for application packaging, deployment and hosting, one that doesn't require a massive shift in mindset. It's simply quicker and easier to develop and deploy an application in a container than it is to build a virtual appliance. Containerized architectures also offer compelling operational and financial benefits: cheaper or free licensing, more efficient use of physical resources, better scalability and, ultimately, better service reliability. Looking ahead, container virtualization will help organizations take better advantage of hybrid or cross-cloud environments.
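To make the "quicker and easier" packaging claim concrete, here is a minimal sketch of containerizing a small app with Docker. Everything in it (the `app.py` file, the `myapp:1.0` tag) is invented for illustration, not taken from the article -- compare the handful of lines below with the OS install, patching and hardening a virtual appliance build demands:

```shell
# Illustrative sketch only: package a tiny Python app as a Docker image.
# File names and tags (app.py, myapp:1.0) are made up for this example.
cat > Dockerfile <<'EOF'
FROM python:3-slim
COPY app.py /app.py
CMD ["python", "/app.py"]
EOF

docker build -t myapp:1.0 .          # builds in seconds against a cached base image
docker run -d --name myapp myapp:1.0 # deploy is a single command
```

The whole "appliance" is three lines of Dockerfile plus two commands, which is the packaging shortcut the paragraph above is describing.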

Server virtualization was also a great idea when it first came out, with significant advantages over physical hosting, but it still took many years to mature (remember how long it was before anyone hosted an important database in a VM?). The same has been true for private and hybrid clouds, new storage technologies and even big data. But even though container virtualization is just out of the gate, it has already traveled farther down the maturity road by following the path server virtualization laid out. You can also get a jumpstart by using trusted hypervisor-based products such as VMware vSphere Integrated Containers to shepherd in containers while the native container world polishes up its rougher edges. Because containers are sleeker and slimmer than VMs (they are essentially just processes), they will slip into the data center even if IT isn't paying attention -- and even if IT doesn't want them yet.

Containers were originally created to host stateless, microservices-based application layers, but the latest Docker releases show that containers are destined to host far more than microservices. For example, with Flocker plug-ins providing persistent storage, you can containerize just about any application. Stir in one of several software-defined networking options and you have a scale-out container ship! As a result, big data platforms, relational databases and software-defined storage solutions are already running as containers.
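As a sketch of that persistent-storage idea -- using Docker's built-in named volumes rather than Flocker, purely for brevity, and with the volume and container names (`pgdata`, `db`) invented for illustration -- a stateful database can be containerized like this:

```shell
# Sketch, not a definitive recipe: durable state for a containerized database.
# 'pgdata' and 'db' are invented names. A Flocker setup would swap in its
# volume driver (docker volume create -d flocker ...) for cluster-portable state.
docker volume create pgdata
docker run -d --name db \
    -v pgdata:/var/lib/postgresql/data \
    postgres
```

Because the volume outlives the container, removing `db` and starting a fresh container against the same volume picks up the data where it left off -- the property that makes databases and other stateful workloads feasible in containers.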

And unlike with hyper-converged architectures, containers' fundamentally fluid design means nothing is really going to lock you in. In fact, if you run SDS and SDN and containers, you might claim to be "super" hyper-converged.

It will take years before we containerize everything, and there are some thorny challenges to work through before container architectures become fully general-purpose platforms. Chief among them: how to guarantee application performance service levels when an application consists of many (tens? thousands?) containers, some of which might be shared with other apps in a complex web of dependencies. We'll need new management solutions to visualize where problems or contention have crept in, as well as big-data-powered predictive automation to remediate issues and optimize performance and cost. And because containerized applications are extremely fluid -- they can easily migrate across physical, virtual and cloud servers -- we'll need tools to dynamically arbitrate and migrate containers across infrastructures.

When will all this happen? I predict that IT organizations will likely be supporting some kind of container "ship" in production within the next six months. Vendors are already racing to see who can put together the best converged container "distro" and hyper-converged scale-out platform to support them. Application vendors are quickly rolling out containerized versions of their wares. Container virtualization is coming quickly!

Container architectures might help IT worry less about what's in a container and concentrate more on running the best possible "ship." But as with server virtualization, you will eventually want to map across the container "abstraction" from application through to infrastructure. To get there, start looking for ways to gain the total visibility you will need for troubleshooting, resource planning and service assurance in the impending containerized data center.

This was last published in January 2016



Join the conversation

2 comments


It’s certainly a contender. Another contender is Amazon’s EFS (if it ever makes it out of preview).
Hi mcorum.

EFS certainly could/would be a disruptive cloud storage service -- maybe killing off NetApp and EMC entirely. But when I think through the fragility of file systems at the massive size, scale and multi-tenancy that AWS would need to support in production, I suspect that EFS is likely still missing some really difficult bits.

Maybe I'm grumpy because I spent hours this weekend recovering from a failed external drive on our house Mac while trying to finish off our taxes. EFS would be a lovely service...
