

Virtualization Strategy

Is storage virtualization here at last?

By  Alan Earls

SearchVirtualStorage.com

What you will learn from this tip: How new methods of transferring data at the file and block level are offering simpler and more flexible SAN management.

Storage administrators who've been waiting for virtualization technology that minimizes SAN complexity might not have to wait much longer.

A recent report, authored by David Freund of Nashua, N.H.-based analyst firm Illuminata, Inc. and expected to be the first in a series, focuses on the IBM TotalStorage SAN File System (SFS). SFS is not new, but version 2.1, rolled out at the end of June, supports SAN devices not only from IBM but also from EMC Corp., Hewlett-Packard Co. and Hitachi Data Systems. IBM says the software is designed to avoid limits on the amount of storage a SAN can support and should let customers better leverage their existing hardware through software virtualization.

The big advantages of SFS, according to Freund's report, include its lack of the "management complexity or rigid homogeneity normally associated with clusters." SFS also employs separate, dedicated servers to track metadata such as data locations and access rights. The result is scalability and resilience without burdening SFS clients.

Freund says there are a number of other promising approaches that aim to accomplish much the same thing, notably from Veritas Software Corp. In addition, there are several approaches based on established cluster file systems: "Some are living, some are dying and some -- like Digital-Compaq-HP OpenVMS -- are on life support," he adds.

Freund says IBM is a rarity in that it provides both block- and file-level storage access.

In the long run, says Freund, IT may need to rethink its approach to storing information. "If you go back to the 1950s, the file was the earliest form of virtualization and its purpose was to keep someone from accidentally overwriting data," he says. "Today we are still doing that."

Arun Taneja, principal analyst with the Taneja Group, says he is optimistic about the new methods (such as SFS) of transferring data at the file level, even though, he points out, it is still very early in the game. "I do think this sort of thing will become more of the norm as we move forward," he says. However, he adds, "How well it can be exploited won't be clear until we have large-scale deployment."

For more information:

Tip: Four trends changing virtualization

Advice: Why and how to virtualize inside a switch

Tip: Three kinds of storage virtualization

About the author: Alan Earls is a freelance writer in Franklin, Mass. 

05 Oct 2004
