Software-defined storage market: How and where the technology fits

Date: Dec 20, 2013

The software-defined storage market is growing, but will the hype last? According to Randy Kerns, a partner at Boulder, Colo.-based Evaluator Group Inc., software-defined storage can spread storage features such as snapshots or RAID across multiple arrays, giving it a big value proposition. And while Kerns says there's still a demand for traditional storage arrays, the concept of software-defined storage is here to stay. In this TechTalk from TechTarget's Storage Decisions conference in New York, Kerns delves into why he believes the software-defined storage market is evolving, and what it means for other technologies in the storage world.

Do software-defined storage systems provide all the features that you typically get with storage arrays like RAID, snapshots and replication?

Randy Kerns: Let's break this apart. Data services like snapshots and replication are really about data protection. Those services originated in storage systems, where they proved very valuable, and you typically paid extra for them.

Each of those was implemented differently in each storage system and required special software and special management. Software-defined storage, or virtualization, is layered on top, and those data services are typically abstracted independent of which elements sit behind it. So now I can administer and control them generically rather than specifically for each type, and they may not be an extra-charge item that I have to deal with in every array.

Putting that layer in devalues those functions in the storage system to a great extent: it devalues them, homogenizes them, and gives you the opportunity to automate them. The RAID function, by contrast, is a device protection function -- RAID exists to protect against a failing drive so you can use more commodity-type disk drives.

RAID always carried some performance trade-offs, and as drive capacities grow, the RAID function tends to become less valuable because of the probability of a second failure while you're still rebuilding from the first. Vendors tried to compensate for that with RAID 6, which tolerates two drive failures. What we're seeing now is a shift toward information dispersal algorithms and forward error correction.
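Kerns's point about secondary failures can be made concrete with a back-of-the-envelope calculation. The numbers below are illustrative assumptions only (drive counts, rebuild times, and MTBF are made up), and the model naively assumes independent drives with exponentially distributed failure times -- but it shows why longer rebuilds of larger drives raise the risk that RAID 5 was designed to cover.

```python
import math

def second_failure_probability(surviving_drives, rebuild_hours, mtbf_hours):
    """Rough probability that at least one more drive fails while a
    RAID rebuild is in progress, assuming independent drives with
    exponential failure times (a deliberate simplification)."""
    expected_failures = surviving_drives * rebuild_hours / mtbf_hours
    return 1 - math.exp(-expected_failures)

# Illustrative only: an 8-drive RAID 5 group has 7 survivors during a
# rebuild. Compare a 24-hour rebuild of a small drive with a 120-hour
# rebuild of a much larger one, at an assumed 1,000,000-hour MTBF.
small_drive_risk = second_failure_probability(7, 24, 1_000_000)
large_drive_risk = second_failure_probability(7, 120, 1_000_000)
```

The longer the rebuild window, the higher the exposure -- which is why RAID 6 (two-failure tolerance) and, beyond it, erasure-coded dispersal became attractive as capacities grew.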

Most of the forward error correction is being done with erasure codes, though there are a number of different types. Unfortunately, some people lump all of it under the term erasure code, and that's not accurate. It's really forward error correction combined with information dispersal, and that gives you two opportunities. It gives you a way to protect against individual element failures, and it also gives you the ability to disperse data geographically and then reconstruct it from fragments brought back from different geographies. That way, it becomes a much greater scaling opportunity, much more flexible, and that's the way we're proceeding in the storage world. I would separate the two things -- the data services and the data protection -- into different areas.

There seems to be more emphasis lately on network and server-based storage, and the software-defined storage market seems to play into that. Is there any place left for traditional storage systems?

Kerns: Yes. What you're referring to has been focused primarily on smaller businesses and remote offices that can't afford a large network-attached storage system. The idea is, 'Hey, I have a server with so many disks -- let's federate different servers together.' What that does is add another layer of overhead and software.

So it doesn't have the same potential performance as an embedded storage system. If you're doing file sharing or volume sharing across those servers, you've got a whole additional set of lock mechanisms and coordination traffic going over your standard network. You become dependent on that network to access data in the sharing environment, whereas a traditional storage array has that sharing functionality built in and delivers much higher performance.

It's a matter of scale: in smaller environments, it's a somewhat cheaper way to get started. Larger environments using traditional arrays avoid the performance and complexity impacts that can come with implementing those federated environments.

Is software-defined storage something real that will have a lasting presence in the marketplace, or is it just a fad that was brought on by server virtualization?

Kerns: I think it's a label on an evolutionary aspect that we've been going through, and I think that label is as good as any. I think it started with the phrase software-defined data center and then, 'Oh, that's a good term.'

Now you have software-defined networking and software-defined storage and I've heard other adventuresome marketing people doing software-defined everything. It's been quite the joke for the most part, so it's a label. But the concept of abstracting things and managing at different levels and dealing with automation and scale, I think it's going to continue. It's just a matter of how it's done.
