Predicting the storage issues businesses will face in 2014

Will VDI take off in 2014? (Image credit: iStock/Spectral-Design)

Now that 2013 is receding in the rear view mirror, it's probably a good time to map out some of the things we're likely to see in the world of enterprise storage as we progress further into this year.

We spoke to Kieran Harty, CTO and co-founder of Tintri, about some of the most pressing themes and challenges as we enter 2014.

TechRadar Pro: What effect will flash have on the market in 2014?

KH: There is very little doubt that flash will make serious inroads and intelligent software will become more important in the storage market in 2014. Most of our customers will be exploiting the benefits of flash performance economically via hybrid storage systems.

The focus will extend beyond flash hardware to intelligent software on top of flash to drive seamless integration with the virtualisation and application layers, allowing administrators to focus on managing VMs and application data, rather than just storage.
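As a rough illustration of what that "intelligent software" layer does in a hybrid system, the sketch below keeps recently accessed blocks in a small flash tier and serves colder blocks from disk. It is a minimal sketch with invented names, using simple LRU where real systems apply far richer heat metrics; it is not any vendor's actual implementation.

```python
from collections import OrderedDict

class HybridStore:
    """Toy hybrid flash/disk tier: hot blocks live in a small flash
    cache (LRU); every block is also kept on the backing disk."""

    def __init__(self, flash_blocks):
        self.flash = OrderedDict()        # block_id -> data (flash tier)
        self.flash_blocks = flash_blocks  # flash capacity, in blocks
        self.disk = {}                    # block_id -> data (disk tier)

    def write(self, block_id, data):
        self.disk[block_id] = data        # disk always holds a copy
        self._promote(block_id, data)     # new writes are likely hot

    def read(self, block_id):
        if block_id in self.flash:        # flash hit: fast path
            self.flash.move_to_end(block_id)
            return self.flash[block_id]
        data = self.disk[block_id]        # flash miss: slow path
        self._promote(block_id, data)     # recently read => likely hot
        return data

    def _promote(self, block_id, data):
        self.flash[block_id] = data
        self.flash.move_to_end(block_id)
        while len(self.flash) > self.flash_blocks:
            self.flash.popitem(last=False)  # evict the coldest block
```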

TRP: Will VDI finally lift off?

KH: Although Virtual Desktop Infrastructure (VDI) has been talked about for many years, only about 10 percent of desktops have been virtualised to date. The availability of cost-effective storage and greater adoption of mobile devices will help to increase this figure, and VDI implementation will accelerate in 2014, especially in verticals such as financial services, healthcare, government and education.

TRP: To cloud or not to cloud?

KH: While a lot of attention has been devoted to public cloud offerings from Amazon, Google and Microsoft, private clouds (dedicated clouds at customer sites or at service providers) have been growing at a fast rate in the background.

For many companies, a private cloud is the next phase of their adoption of virtualisation, providing an environment that delivers additional flexibility, self-service and reporting. Unlike in public cloud environments such as Amazon Web Services, applications don't need to be rewritten to handle infrastructure failures. Private cloud adoption is also expected to accelerate in production and development/test environments.

TRP: What can customers do in 2014 to be more efficient with their storage resources?

KH: With increased consolidation of physical infrastructure and the move to virtualised environments, more applications are sharing the same server, storage and networking resources. But sharing resources can lead to poor or unpredictable levels of service for individual applications.

While the hypervisor provides efficient Quality of Service (QoS) mechanisms for computing, the practice of over-provisioning to approximate QoS for storage and networking will become a problem for customers concerned they are wasting resources. As a result, enterprise customers and service providers will place a major focus on QoS mechanisms for storage and networking this year.
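To see why over-provisioning is expensive, consider a back-of-the-envelope calculation; the workload figures below are invented purely for illustration. Without per-application QoS, shared storage has to be sized for the sum of every application's peak demand, whereas with QoS caps in place it can be sized closer to the sum of sustained demand plus burst headroom.

```python
# Illustrative sizing arithmetic with invented workload numbers.
apps = {
    # name: (sustained IOPS, peak IOPS)
    "oltp-db":  (4_000, 20_000),
    "exchange": (1_500,  8_000),
    "vdi-pool": (2_500, 15_000),
    "file-srv": (500,    3_000),
}

# Without QoS: any app may burst at any time, so the array must be
# sized for the sum of all peaks -- the over-provisioning trap.
no_qos_iops = sum(peak for _, peak in apps.values())

# With per-app QoS caps: each app is guaranteed its sustained rate,
# bursts are throttled, and shared headroom absorbs transient spikes.
HEADROOM = 1.3  # assumed 30% burst headroom
qos_iops = HEADROOM * sum(sustained for sustained, _ in apps.values())

print(f"Provision without QoS: {no_qos_iops:,.0f} IOPS")
print(f"Provision with QoS:    {qos_iops:,.0f} IOPS")
# With these made-up numbers, roughly a 4x difference.
```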

TRP: What can we expect from storage and networking in 2014?

KH: Computing efficiency has improved by an order of magnitude, and virtual machines are far simpler to manage than physical ones, so the share of virtualised workloads will rise to more than 70 percent in 2014. While storage and networking haven't seen the same benefits as computing, the situation is changing.

The big drivers for IT will be improving storage efficiency, simplifying storage management for virtualised environments and improving the automation of data management. Application-centric storage will help to reduce Capex and management costs significantly. In the networking world, software defined networking (SDN) – conceptually similar to application-centric storage – will work in parallel to increase efficiency and reduce complexity.

TRP: How will virtualisation challenge storage in 2014?

KH: Virtualisation has a problem. It's raced ahead of the underlying datacenter infrastructure, and now the industry is trying to adapt. Many companies we speak with have aggressive plans to virtualise nearly all of their servers within the next year or two, but legacy storage makes this an increasingly expensive and complex proposition. Companies want to go all-in on virtualisation, but at what cost?

As workloads change, traditional datacenter infrastructure needs to adapt to the new requirements. Today's general-purpose storage systems are based on decades-old architectures, yet physical servers and applications are being virtualised and mechanical disk drives are being replaced by SSDs.

I/O patterns change significantly under virtualisation, and they are also obscured by it. General-purpose storage systems are built around LUNs or volumes, which map poorly to VMs. This creates three fundamental, interrelated problems: over-provisioning, complexity and performance management.
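This obscuring is often called the I/O blender effect: each VM may issue perfectly sequential I/O, but once the hypervisor interleaves many VMs onto one shared LUN, the stream the array sees looks random. The small simulation below (with invented stream sizes and offsets) makes that concrete.

```python
def sequential_stream(start, count):
    """One VM's I/O: consecutive block offsets (fully sequential)."""
    return [start + i for i in range(count)]

# Four VMs, each writing sequentially to its own region of the LUN.
vms = [sequential_stream(base, 100) for base in (0, 10_000, 20_000, 30_000)]

# The hypervisor interleaves their requests onto the shared LUN.
blended = [off for group in zip(*vms) for off in group]

def seq_fraction(offsets):
    """Fraction of requests that directly follow the previous offset."""
    hits = sum(1 for a, b in zip(offsets, offsets[1:]) if b == a + 1)
    return hits / (len(offsets) - 1)

for i, vm in enumerate(vms):
    print(f"VM{i}: {seq_fraction(vm):.0%} sequential")   # 100% each
print(f"LUN : {seq_fraction(blended):.0%} sequential")   # ~0% -- looks random
```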

TRP: Why should organisations become VM-Aware? And what does that entail?

KH: Virtualisation demands a different kind of storage. It needs storage that understands the I/O patterns of a virtual environment and automatically manages quality of service (QoS) for each VM, not for each LUN or volume. Operating at the VM level also enables data management operations to occur all the way down to a specific application. This is what defines VM-Aware storage.
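One plausible mechanism behind per-VM QoS is a token bucket kept for each VM: every VM gets a guaranteed IOPS refill rate and a burst allowance, so a noisy neighbour on the same datastore cannot starve the others. The sketch below uses hypothetical names and shows one way such a scheme could work; it is not any product's actual code.

```python
import time

class VmQos:
    """Per-VM token-bucket IOPS limiter -- an illustrative sketch of
    one way VM-aware QoS could work, not a real product's code."""

    def __init__(self):
        self.buckets = {}  # vm_name -> [tokens, rate, burst, last_refill]

    def set_policy(self, vm, iops_rate, burst):
        """Guarantee `iops_rate` IOPS, allowing bursts up to `burst` I/Os."""
        self.buckets[vm] = [burst, iops_rate, burst, time.monotonic()]

    def admit(self, vm):
        """Return True if this VM's next I/O may proceed now."""
        bucket = self.buckets[vm]
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        bucket[0] = min(bucket[2], bucket[0] + (now - bucket[3]) * bucket[1])
        bucket[3] = now
        if bucket[0] >= 1:
            bucket[0] -= 1   # spend one token per I/O
            return True
        return False         # throttle: the VM exceeded its allowance

qos = VmQos()
qos.set_policy("exchange-vm", iops_rate=2_000, burst=500)
qos.set_policy("vdi-desktop-042", iops_rate=300, burst=100)
if qos.admit("exchange-vm"):
    pass  # issue the I/O; otherwise queue it and retry later
```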

There are different ways to approach the problem, with varying degrees of "VM-Awareness". VMware understands the problems with legacy storage and is working to make existing general purpose storage more VM-Aware. Ultimately, though, this will be a slow, incremental process that adds further layers of abstraction, complexity and inefficiency to decades-old storage architectures designed before the advent of virtual machines and flash storage. We believe the best way to support virtualisation is from the ground up: let the software define the storage.

Désiré Athow
Managing Editor, TechRadar Pro

Désiré has been musing and writing about technology during a career spanning four decades. He dabbled in website builders and web hosting when DHTML and frames were in vogue, and started writing about the impact of technology on society just before the start of the Y2K hysteria at the turn of the millennium.