Software-defined storage market: How and where the technology fits

Date: Dec 20, 2013

The software-defined storage market is growing, but will the hype last? According to Randy Kerns, a partner at Boulder, Colo.-based Evaluator Group Inc., software-defined storage can spread storage features such as snapshots or RAID across multiple arrays -- a significant value proposition. And while Kerns says there is still demand for traditional storage arrays, the concept of software-defined storage is here to stay. In this TechTalk from TechTarget's Storage Decisions conference in New York, Kerns delves into how the software-defined storage market is evolving and what it means for other technologies in the storage world.

Do software-defined storage systems provide all the features that you typically get with storage arrays like RAID, snapshots and replication?

Randy Kerns: Let's break this apart. Data services like snapshots and replication are really about data protection. Those services originated in storage systems, they've proven very valuable, and you typically paid extra for them.

Each storage system implemented those services differently and required its own special software and special management. Software-defined storage, or virtualization, is being laid on top, and those data services are typically abstracted independent of which elements sit behind it. Now I can administer and control them generically rather than separately for each array type, and they may no longer be an extra-charge item I have to deal with in every array.
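To make that abstraction concrete, here is a minimal Python sketch of the idea Kerns describes: one generic snapshot call layered over dissimilar arrays. Every class and method name here is hypothetical, invented for illustration rather than taken from any vendor's API.

```python
# Minimal sketch of a software-defined layer exposing one generic snapshot
# interface over heterogeneous arrays. All names are hypothetical.
from abc import ABC, abstractmethod


class ArrayBackend(ABC):
    """Vendor-specific driver; each array exposes snapshots differently."""

    @abstractmethod
    def create_snapshot(self, volume: str) -> str: ...


class VendorAArray(ArrayBackend):
    def create_snapshot(self, volume: str) -> str:
        # A real driver would call vendor A's management API here.
        return f"vendorA-snap:{volume}"


class VendorBArray(ArrayBackend):
    def create_snapshot(self, volume: str) -> str:
        # A real driver would call vendor B's CLI or REST endpoint here.
        return f"vendorB-snap:{volume}"


class StorageVirtualizer:
    """The software-defined layer: one call, any array behind it."""

    def __init__(self, backends: dict[str, ArrayBackend]):
        self.backends = backends

    def snapshot(self, array: str, volume: str) -> str:
        # The caller never sees which vendor implementation runs.
        return self.backends[array].create_snapshot(volume)


virtualizer = StorageVirtualizer({"a": VendorAArray(), "b": VendorBArray()})
print(virtualizer.snapshot("a", "vol01"))  # vendorA-snap:vol01
print(virtualizer.snapshot("b", "vol01"))  # vendorB-snap:vol01
```

The point of the sketch is the administrative angle Kerns raises: the snapshot is managed once, generically, instead of once per array type.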

Putting that layer in devalues the function in the storage system to a great extent -- it devalues it, homogenizes it, and gives you the opportunity to automate it. RAID, on the other hand, is a device protection function: it protects against a failing drive so you can use more commodity-type disk drives.

RAID has always carried some performance trade-offs, and with today's larger-capacity drives it becomes less valuable because of the probability of a second failure while you're still rebuilding from the first. Vendors tried to compensate with RAID 6, if you will, which tolerates two drive failures. What we're seeing now is a shift toward information dispersal algorithms and forward error correction.

Most of the forward error correction is being done with erasure codes, but there are a number of different types. Unfortunately, some people call the whole approach 'erasure coding,' and that's not quite accurate. It's really forward error correction combined with information dispersal, and that gives you two opportunities. It gives you a way to protect against individual element failures, and it gives you the ability to disperse data geographically and then protect it by bringing it back from different geographies. That becomes a greater scaling opportunity, much more flexible, and that's the way we're proceeding in the storage world. I would separate the two things -- data services and data protection -- into different areas.
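As a rough illustration of the device-protection math Kerns contrasts here, the Python sketch below shows the RAID 5-style XOR parity primitive rebuilding a single lost stripe. Real erasure codes such as Reed-Solomon generalize the same idea so that any m of n fragments can be lost, which is what enables the geographic dispersal he mentions. The helper and the data are toy examples, not any product's implementation.

```python
# Toy illustration of device-level protection. RAID 5-style XOR parity can
# rebuild exactly one lost element; erasure codes generalize this so any m
# of n fragments can be reconstructed.
from functools import reduce


def xor_blocks(blocks: list[bytes]) -> bytes:
    """Bytewise XOR of equal-length blocks (the RAID parity primitive)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))


data = [b"AAAA", b"BBBB", b"CCCC"]   # stripes on three drives
parity = xor_blocks(data)            # parity stripe stored on a fourth drive

# Drive 2 fails; rebuild its stripe from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt stripe:", rebuilt)    # b'BBBB'
```

A second simultaneous failure defeats this scheme, which is exactly the large-drive rebuild risk Kerns describes; RAID 6 adds a second parity calculation, and erasure codes go further still.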

There seems to be more emphasis lately on network and server-based storage, and the software-defined storage market seems to play into that. Is there any place left for traditional storage systems?

Kerns: Yes, and what you're referring to has primarily been focused on smaller businesses and remote offices that can't afford a large network-attached storage system. The idea is, 'Hey, I have a server with so many disks -- let's federate different servers together.' What that does is add another layer of overhead and software.

So it doesn't have the same potential performance as an embedded storage system. If you're doing file sharing or volume sharing across those servers, you've got a whole other set of lock mechanisms that travel over your standard network. You become a slave to that network to access data in the sharing environment, whereas a traditional storage array has that sharing functionality built in and delivers much higher performance.

It's a matter of scale. In smaller environments, federation is a slightly cheaper way to get started. Larger environments avoid the performance and complexity impacts you take on when you implement those federated setups.
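A back-of-the-envelope sketch of the overhead Kerns describes: every shared write in a federated setup pays network round trips to a lock manager before and after the I/O itself, while an embedded array handles locking internally. The latency figures below are assumptions chosen for illustration, not measurements.

```python
# Illustrative latency model for a single shared write. All figures are
# assumed values, not benchmarks.

LOCAL_LOCK_US = 5     # in-array lock, handled internally (assumed)
NETWORK_RTT_US = 500  # LAN round trip to a distributed lock manager (assumed)
IO_US = 200           # the write itself (assumed)


def embedded_array_write() -> int:
    # Lock and I/O both happen inside the array.
    return LOCAL_LOCK_US + IO_US


def federated_write() -> int:
    # Acquire remote lock, do the write, release remote lock.
    return NETWORK_RTT_US + IO_US + NETWORK_RTT_US


print("embedded array:", embedded_array_write(), "us per shared write")
print("federated servers:", federated_write(), "us per shared write")
```

Even with generous assumptions, the federated path is dominated by the network, which is the 'slave to that network' point above.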

Is software-defined storage something real that will have a lasting presence in the marketplace, or is it just a fad that was brought on by server virtualization?

Kerns: I think it's a label on an evolution we've been going through, and that label is as good as any. It started with the phrase 'software-defined data center' and then, 'Oh, that's a good term.'

Now you have software-defined networking and software-defined storage, and I've heard adventuresome marketing people pitching software-defined everything. It's been quite the joke for the most part, so it's a label. But the concept of abstracting things, managing at different levels, and dealing with automation and scale is going to continue. It's just a matter of how it's done.

