New wave of virtualization

By Marc Staimer

SearchVirtualStorage.com

This article first appeared in "Storage" magazine's February issue. For more articles of this type, please visit www.storagemagazine.com

What you will learn from this tip: Why vendors and users alike are embracing second-wave storage virtualization products.

A new generation of virtualization products, which some call the second wave, is radically reshaping storage infrastructure design.

As storage environments grow more complex, many organizations are being pushed to the breaking point by the sheer volume of data to be stored and managed, as well as by the growing number of regulations governing how data must be stored, retrieved and protected. Traditional ways of managing storage are proving either too expensive or inadequate for the job.

Second-wave storage virtualization products address the cost and complexity related to six significant problems, and can usually be cost justified based on their ability to solve one or more of these problems:

  1. Managing the volume managers of multiple homogeneous or heterogeneous servers.
  2. Ongoing storage acquisition.
  3. Provisioning multiple homogeneous or heterogeneous storage arrays.
  4. Data protection for multiple homogeneous or heterogeneous storage arrays.
  5. Non-disruptive or minimally disruptive data migration.
  6. Providing a flexible foundation for information lifecycle management (ILM).

There are three reasons why the second wave of virtualization products is creating so much excitement. First, all of the tier-1 storage vendors (EMC Corp., Hitachi Data Systems and IBM Corp.) support the idea of virtualization and are providing products for the second wave. This is in direct contrast to their negativity toward first-wave products. The second impetus comes from a new enabling technology for block storage virtualization called Split Path Acceleration of Independent Data Streams (SPAID). SPAID eliminates most storage network block virtualization performance and scalability issues by splitting the control path (slow path) from the data path (fast path); a conceptual sketch of this split follows the next paragraph.

The third reason for the new wave is that this time, storage network block virtualization isn't an end unto itself. The primary lesson learned from first-wave products is that block storage virtualization is simply an enabling technology: it must be leveraged by storage applications to provide user value. That lesson has been assimilated, and the second wave is focused on complete solutions that solve real, urgent user problems.
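To make the split-path idea concrete, here is a minimal Python sketch. It is purely illustrative: every class, function and parameter name below is hypothetical, and real SPAID products ran the fast path in switch or port-processor hardware, not in software. The point is the division of labor: the control path owns the virtual-to-physical mapping and is consulted only on a mapping miss, while the data path forwards each I/O from a cached map entry with no metadata work in the hot loop.

```python
# Illustrative split-path (SPAID-style) block virtualization sketch.
# All names are hypothetical; real products ran the fast path in
# switch or port-processor hardware, not in Python.

def send_to_backend(array, lun, pba, buf):
    # Stub standing in for a frame sent to a back-end storage port.
    return (array, lun, pba, len(buf))

class ControlPath:
    """Slow path: owns the virtual-volume metadata and mapping policy."""
    def __init__(self):
        # (virtual volume, extent) -> (physical array, lun, base offset)
        self.map = {}

    def resolve(self, vvol, extent):
        # A real control path might allocate capacity or consult policy
        # here; this sketch just hands out a fixed physical location.
        return self.map.setdefault((vvol, extent),
                                   ("array0", "lun7", extent * 1024))

class DataPath:
    """Fast path: forwards every I/O using cached map entries only."""
    def __init__(self, control):
        self.control = control
        self.cache = {}

    def io(self, vvol, lba, buf):
        extent = lba // 1024
        key = (vvol, extent)
        if key not in self.cache:
            # Mapping miss: the only time the slow path is consulted.
            self.cache[key] = self.control.resolve(vvol, extent)
        array, lun, base = self.cache[key]
        # Hot loop: pure table lookup and forward, no metadata work.
        return send_to_backend(array, lun, base + lba % 1024, buf)

fast = DataPath(ControlPath())
fast.io("vol1", 2048, b"block")   # first I/O to the extent: one slow-path trip
fast.io("vol1", 2049, b"block")   # subsequent I/O stays on the fast path
```

Because the per-I/O work is a table lookup rather than a metadata transaction, throughput scales with the forwarding hardware instead of with the controller's CPU.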

Tier-1 storage vendors change their tune

IBM was the first of the tier-1 storage vendors to offer a second-wave product with its SAN Volume Controller (SVC), previously code-named "Loadstone." SVC uses an in-band storage network block virtualization approach. It bundles volume management with local and remote mirroring, replication and snapshot in a Lintel (IBM xSeries) storage area network (SAN) appliance. It's also available as a blade for Cisco MDS director-class SAN switches.

SVC doesn't leverage the new SPAID architectures. Nor does it address concerns with previous in-band products, like performance, scalability and reliability. So how does SVC differ from earlier products from DataCore, FalconStor and StorageApps? In a word: emphasis. SVC isn't sold or positioned as a virtualization engine. Instead, it's sold as a SAN-based volume manager and data protection storage appliance. IBM makes it clear this product is primarily a small- to medium-sized business (SMB) to small- to medium-sized enterprise (SME) product with a heavy emphasis on the "M." Interestingly, FalconStor and DataCore products are being similarly positioned.
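The performance concern with in-band designs is easy to see in a sketch. The following minimal Python illustration (all names hypothetical; this is not IBM's SVC code) shows why: every byte of every I/O flows through the appliance, which also performs the volume-management and data-protection work, such as synchronous mirroring, inline before the host sees an acknowledgment.

```python
# Illustrative in-band virtualization appliance; hypothetical names,
# not an actual SVC implementation.

def backend_write(array, lun, lba, data):
    # Stub standing in for a back-end Fibre Channel write.
    return True

class InBandAppliance:
    def __init__(self, mirror_targets):
        # Maps virtual volume -> list of (array, lun) mirror targets.
        self.mirror_targets = mirror_targets

    def write(self, vvol, lba, data):
        # The appliance sits in the data path: host bandwidth and
        # latency are bounded by this loop and this box's CPU/memory.
        acks = [backend_write(array, lun, lba, data)
                for array, lun in self.mirror_targets[vvol]]
        return all(acks)  # ack the host only after all mirrors land

appliance = InBandAppliance({"vol1": [("array0", "lun1"),
                                      ("array1", "lun1")]})
appliance.write("vol1", 4096, b"payload")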

SVC won't be IBM's only offering in the second wave. With the recent release of the TotalStorage DS8000 (see IBM's new arrays), IBM is using its POWER5 chip with Hypervisor. The Hypervisor lets IBM partition the POWER5 to run multiple copies of its OS, or another OS entirely, which will allow IBM to run SVC directly on its storage array in the future. This should address many of the concerns about in-band performance, scalability and reliability, and allow the array to scale beyond its own back end. IBM is also rumored to be working on an out-of-band, second-wave offering in partnership with Cisco and Incipient Inc., Waltham, Mass.

EMC's second-wave approach addresses performance, scalability and reliability by using a modified out-of-band technology in its as-yet-unreleased Storage Router product. Storage Router is designed to eliminate out-of-band, server-based agents by moving them into an intelligent switch (Brocade, Cisco or McData). This approach leverages SPAID architectures that split the writes at the switch. Storage Router also takes advantage of EMC's proven software for local and remote mirroring, replication, snapshot and data migration. Storage Router puts this highly regarded software into an appliance within the storage network fabric. The initial release of Storage Router is scheduled for the second quarter of 2005, but it probably won't have all of the planned functionality until a later release.
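A rough Python sketch of the write-splitting idea (hypothetical names throughout; EMC has not published Storage Router internals): the intelligent switch duplicates each host write to both the primary target and the replica target in the fabric, so the server itself runs no agent.

```python
# Illustrative switch-resident write splitting; hypothetical names,
# not EMC's actual design.

def fabric_send(target, lba, data):
    # Stub standing in for forwarding a frame to a fabric target.
    pass

class IntelligentSwitchPort:
    def __init__(self, primary, replica):
        self.primary = primary   # (array, lun) the host thinks it owns
        self.replica = replica   # replication target added by the switch

    def on_write(self, lba, data):
        # The split happens in the fabric: the host sends one write,
        # the switch forwards two, and no server-side agent is needed.
        fabric_send(self.primary, lba, data)
        fabric_send(self.replica, lba, data)

port = IntelligentSwitchPort(("array0", "lun3"), ("array9", "lun3"))
port.on_write(0, b"block")
```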

Hitachi's second-wave storage network block virtualization offering is an optional feature of its high-end storage array, the TagmaStore Universal Storage Platform (see HDS reinvents high-end arrays). The virtualization is embedded into TagmaStore's controller architecture and is extended to other external storage systems (from Hitachi and other vendors) by connecting to them over Fibre Channel. The external storage systems see TagmaStore as just another server. TagmaStore assigns the external storage (LUNs) to its own host storage domain and logical address space. The server applications are connected directly to a cache image in TagmaStore.
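As a rough illustration of that mapping (hypothetical names; this is not Hitachi firmware), the controller keeps a table that assigns each externally discovered LUN an address in its own logical space. Host I/O is satisfied against the controller's cache image, and the external array is treated as just another back end reached over Fibre Channel:

```python
# Illustrative external-LUN virtualization table; hypothetical names,
# not Hitachi's implementation.

def external_read(array, lun, lba):
    # Stub standing in for a Fibre Channel read to the external array,
    # which sees this controller as just another server.
    return b"\x00" * 512

class VirtualizingController:
    def __init__(self):
        self.extmap = {}  # internal LDEV id -> (external array, external LUN)
        self.cache = {}   # (ldev, lba) -> cached block

    def import_external_lun(self, ldev, ext_array, ext_lun):
        # Assign the external LUN an address in the controller's own
        # logical space; hosts will only ever see the internal LDEV.
        self.extmap[ldev] = (ext_array, ext_lun)

    def read(self, ldev, lba):
        if (ldev, lba) in self.cache:   # hosts talk to the cache image
            return self.cache[(ldev, lba)]
        ext_array, ext_lun = self.extmap[ldev]
        block = external_read(ext_array, ext_lun, lba)
        self.cache[(ldev, lba)] = block
        return block

ctrl = VirtualizingController()
ctrl.import_external_lun("ldev:00:01", "third-party-array", "lun5")
ctrl.read("ldev:00:01", 0)
```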

Once external storage is virtualized within the TagmaStore array, additional TagmaStore storage capabilities can be applied to that storage (with an ensuing release planned for the first half of 2005). Capabilities include high-speed global cache, ShadowImage In-System Replication, TrueCopy Remote Replication, volume migration, Universal Replicator and Data Retention Utility.

TagmaStore virtualization doesn't require any appliances, intelligent switches or switch-based application blades. It leverages the powerful TagmaStore controller architecture to provide the required performance, scalability and reliability. In pragmatic terms, TagmaStore relies on faster and more plentiful processing power, plus more cache, to overcome the limitations of in-band block virtualization. Whereas EMC is using a modified out-of-band approach to eliminate out-of-band limitations, Hitachi is going with a modified in-band approach to do the same thing.

Read the rest of this tip.

For more information:

Tip: Four steps for evaluating storage virtualization products

Tip: Is storage virtualization here at last?

Tip: Four trends changing virtualization

 


About the author: Marc Staimer is the president of Dragon Slayer Consulting.

08 Mar 2005
