Over the past several weeks, three top-tier storage companies -- VERITAS, IBM and EMC -- have announced new storage virtualization initiatives and products. EMC announced plans to port its virtualization technologies to Cisco's MDS 9000 Fibre Channel switch. IBM released additional details on its much-anticipated storage virtualization lineup, which will include volume management, a virtualization appliance ("SAN-in-a-can") and a SAN file system. Finally, VERITAS unveiled plans for a SAN volume manager to be offered both on hosts and on switching platforms from Cisco and Brocade.
To a certain extent, storage virtualization has been the Rodney Dangerfield of technologies, hyped for years as a core foundation of storage management yet getting little respect in practice. The appliance-based approach driven by a number of startups has had minimal success in achieving wide-scale adoption. And volume and file management, the mainstay of many data centers, has generally flown under the radar of much of this hype, even though it fits squarely into the category of virtualization: it logically organizes physical resources.
So what's changing? The endgame. IT organizations have been conservative about capital purchases over the last several years, with many IT managers focusing on how to better maintain what is already in place. Now the game is about making sure that whatever tools, hardware or other infrastructure you put in place can be cost-justified, provide a return on investment and help shift more of the operational focus toward offering storage as a service or utility. And right in the middle of this (guess what?) is storage virtualization, along with server virtualization technologies (which we will not address in this column).
But now this technology has the potential to help pave the road to the storage utility. Essentially, it provides a middleware layer that serves as a stepping stone from virtualizing the storage environment, to automating it, to eventually offering storage as a service or utility. Virtualization will play a key role in storage provisioning, automation tools, storage resource management (SRM) and policy-based/data lifecycle management. The rest of this column looks at the delivery mechanisms vendors are using to provide storage virtualization, as well as the key questions to consider when evaluating the ever-evolving virtualization choices.
Today there are many choices in how storage virtualization can be delivered:
- Server/host: software that resides on the host and communicates with the storage network and arrays on the fabric.
- Appliance: dedicated appliances that sit in the network and act as either in-band or out-of-band software layers that virtualize the storage behind the systems.
- Storage array: virtualization software that is part of the core infrastructure management tools that reside on the array, with agents that sit on host systems.
- Switch: a new category being driven via partnerships between storage networking vendors and storage array/storage management vendors (this is still a work in progress with products slated to arrive by 2004).
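Whichever of these layers it lives in, the core job of block-level virtualization is the same: present hosts with a virtual volume while hiding which physical LUNs actually hold the data. The following minimal sketch (not any vendor's product; the class names, array names and sizes are invented for illustration) shows that mapping idea:

```python
# Illustrative model of a virtualization mapping layer. A virtual volume
# is stitched together from extents on physical LUNs that may live on
# different arrays; the host sees a single device.

class PhysicalLun:
    def __init__(self, array, lun_id, size_gb):
        self.array = array        # e.g. a hypothetical "array-A"
        self.lun_id = lun_id
        self.size_gb = size_gb

class VirtualVolume:
    """A host-visible volume built from an ordered list of physical extents."""
    def __init__(self, name):
        self.name = name
        self.extents = []

    def add_extent(self, lun):
        self.extents.append(lun)

    @property
    def size_gb(self):
        # The host-visible capacity is the sum of the underlying extents.
        return sum(lun.size_gb for lun in self.extents)

# A 300 GB virtual volume spanning two arrays -- the host sees one device.
vol = VirtualVolume("oracle_data")
vol.add_extent(PhysicalLun("array-A", 7, 100))
vol.add_extent(PhysicalLun("array-B", 3, 200))
print(vol.name, vol.size_gb)  # oracle_data 300
```

The only real difference between the four delivery mechanisms above is where this mapping table lives and executes: on the host, in an appliance, on the array controller or in the switch.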
Given the numerous ways storage virtualization can be delivered, storage networks will likely be virtualized by zones in the coming years: some parts of the network will need virtualization sooner than others, depending on the mission-critical nature of the applications and the complexity of management. Customers will also likely roll the technology out gradually. Keep in mind that many of today's storage virtualization tools handle a number of extended features, such as virtual copy, replication, capacity on demand, LUN masking and some levels of storage provisioning. Mileage varies greatly here, as does the definition of what counts as heterogeneous.
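To make one of those extended features concrete: "virtual copy" (snapshot) features are commonly implemented with copy-on-write, so the copy is created instantly and blocks are only duplicated when the live volume first overwrites them. The sketch below is a generic illustration of that technique, not any particular product's implementation; the block numbers and data are invented:

```python
# Copy-on-write snapshot: preserve a block's original contents the first
# time the source volume overwrites it, so the snapshot keeps a frozen view.

class Snapshot:
    def __init__(self, source):
        self.source = source      # live volume: dict of block -> data
        self.preserved = {}       # original data saved before overwrite

    def read(self, block):
        # The snapshot sees preserved data if the block has changed,
        # otherwise it reads straight through to the live volume.
        return self.preserved.get(block, self.source.get(block))

    def before_write(self, block):
        # The virtualization layer calls this before a block is first
        # overwritten on the live volume.
        if block not in self.preserved and block in self.source:
            self.preserved[block] = self.source[block]

volume = {0: "jan", 1: "feb"}
snap = Snapshot(volume)

snap.before_write(1)          # intercept the write...
volume[1] = "mar"             # ...then let it proceed
print(snap.read(1))           # feb -- snapshot still sees the old data
print(volume[1])              # mar -- live volume sees the new data
```

This is why such features are cheap to create but consume more capacity as the live volume diverges from the snapshot, one of the places where "mileage varies" between products.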
Each of the above methods has advantages and drawbacks. Server-based virtualization is easy to deploy, but not always considered enterprise-class. Appliance-based virtualization can provide a broader bird's-eye view of the storage available in the network, but can involve a more complex setup. Storage-array-based virtualization gives the most in-depth view of blocks and volumes on a specific array, but tends not to be heterogeneous in approach. Switch-based virtualization promises to make adding new hosts to the storage network and provisioning storage easier, but fundamental questions remain about the performance ramifications of this approach on the storage network.
The best way to evaluate options is to ask a series of questions that will narrow the number of choices:
1. Do you have a corporate standard for storage management tools, and does the chosen storage management vendor offer virtualization tools today?
2. Is there a preferred way of virtualizing the storage environment? (In other words, does one method better fit your requirements based on deployment, ease of use and enterprise feature requirements?)
3. What features are important to your storage operations? (Volume management, data path management, snapshot, replication or other features?)
4. Is this integrated with other management tools from the same vendor or other vendors?
5. How cost-effective will this tool be? Is it priced in a way that is advantageous, or will its pricing advantage decline as you cross additional server, port or terabyte licensing thresholds?
6. What platforms, configurations and storage devices are supported? It is crucial that you know the supported configurations before you buy.
About the author: Jamie Gruener is a SearchStorage.com expert and the primary analyst focused on the server and storage markets for the Yankee Group, an industry analyst firm in Boston, Mass. Jamie's coverage area includes storage management, storage best practices, storage systems, storage networking and server technologies.