What you will learn from this tip: How new methods of transferring data at the file and block level are offering simpler and more flexible SAN management.
Storage administrators who've been waiting for virtualization technology that minimizes the complexity of the SAN may not have to wait much longer.
A recent report, authored by David Freund of Nashua, N.H.-based analyst firm Illuminata, Inc. and expected to be the first in a series, focuses on the IBM TotalStorage SAN File System (SFS). SFS is not new, but version 2.1, rolled out at the end of June, supports SAN devices not only from IBM, but also from EMC Corp., Hewlett-Packard Co. and Hitachi Data Systems. IBM says the software is designed to avoid limits on the amount of storage a SAN can support, and should allow customers to better leverage their existing hardware through software virtualization.
The big advantages of SFS, according to Freund's report, include its lack of "management complexity or rigid homogeneity normally associated with clusters." SFS also employs separate, dedicated servers to keep track of data, access rights, etc. The result is scalability and resilience without burdening SFS clients.
Freund says a number of other promising approaches aim to accomplish much the same thing, notably from Veritas Software Corp. In addition, there are several approaches based on established cluster file systems: "Some are living, some are dying and some -- like Digital-Compaq-HP OpenVMS -- are on life support," he adds.
Freund says IBM is a rarity in that it provides both block- and file-level storage access.
In the long run, says Freund, IT may need to rethink its approach to storing information. "If you go back to the 1950s, the file was the earliest form of virtualization and its purpose was to keep someone from accidentally overwriting data," he says. "Today we are still doing that."
Arun Taneja, principal analyst with the Taneja Group, says he is optimistic about the new methods (such as SFS) of transferring data at the file level, even though, he points out, it is still very early in the game. "I do think this sort of thing will become more of the norm as we move forward," he says. However, he adds, "How well it can be exploited won't be clear until we have large-scale deployment."
About the author: Alan Earls is a freelance writer in Franklin, Mass.