This is the third of a four-part series on block-based storage virtualization technology. In the first story, we explain why IT departments would want to implement storage virtualization. In the second, we explain how to implement it at the server level. In this story, we explain how storage virtualization software is implemented in the storage array. In the last, we explain how it's implemented at the network appliance level.
Virtualization within storage arrays is an evolution of server-based storage virtualization. While it’s required for the operation of modular, scale-out architectures, virtualization has also become an essential feature for the efficient provisioning and management of most array-based storage implementations, from the enterprise on down.
Storage virtualization was at one point a major selling feature of enterprise disk arrays, providing the ability to easily provision, expand and reallocate storage. Storage consolidation was one of the primary value propositions put forth by the first SAN vendors, and it was enabled to a large extent by the virtualization software these systems included.
The technology has become a common feature on most arrays, either as a charged-for option or, increasingly, as a standard part of their storage management software suite. Indeed, it would be hard to imagine using any kind of shared, consolidated storage infrastructure without a good virtualization capability.
Storage virtualization is also an essential part of scale-out storage and similar grid-based, clustered architectures. These modular products typically include a controller within each node, allowing them to scale processing power as they expand storage capacity. They rely on virtualization to present these physically separate storage nodes as a unified pool.
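To make the pooling idea concrete, here is a minimal sketch of how a scale-out system might map blocks of a single logical volume across separate nodes. The node names, stripe size and round-robin layout are illustrative assumptions, not any particular vendor's design.

```python
# Hypothetical sketch: map logical blocks of a unified pool onto
# physically separate storage nodes. All names and parameters are
# assumptions for illustration.

NODES = ["node-a", "node-b", "node-c"]   # assumed three-node cluster
STRIPE_BLOCKS = 256                      # assumed blocks per stripe unit

def locate(logical_block: int) -> tuple:
    """Map a logical block number to (node, physical block on that node)."""
    stripe = logical_block // STRIPE_BLOCKS
    node = NODES[stripe % len(NODES)]    # stripes rotate round-robin across nodes
    physical = (stripe // len(NODES)) * STRIPE_BLOCKS + logical_block % STRIPE_BLOCKS
    return node, physical

# The host sees one contiguous volume; consecutive stripes land on
# different nodes, so capacity and controller work scale together.
print(locate(0))     # first stripe falls on node-a
print(locate(256))   # the next stripe falls on node-b
```

The point of the sketch is that the mapping layer, not the host, decides where data physically lives, which is what lets the cluster present its nodes as one pool.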
Array-level storage virtualization software is also required for the automated storage tiering that many current arrays feature. These products leverage strong front-end virtualization to parse and scatter data blocks across the tiers they have set up, moving them around based on policies established by the user.
A majority of storage arrays include some kind of virtualization technology, usually as a primary feature for storage provisioning but also as part of the storage services they offer. When considering virtualization as a tool to enable multivendor (heterogeneous) storage consolidation, array-based systems aren’t as common as network-based systems. That said, there are certainly heterogeneous array-based virtualization solutions available. Two of these are from Hitachi Data Systems and NetApp.
Hitachi was the first major disk array vendor to break from the industry's homogeneous tradition with its Universal Storage Platform (USP) several years ago. HDS put a powerful virtualization engine into the storage controller, allowing Tier 1 companies to consolidate the disparate arrays accumulating in their data centers, although these external arrays were connected as essentially "dumb" storage. Hitachi's VSP systems support up to 247 PB of storage and include the kinds of features one would expect from a Tier 1 solution, such as thin provisioning of internal and externally attached storage.
NetApp V-Series Open Storage Controller
NetApp's V-Series Open Storage Controller is essentially a NetApp storage controller that's been configured to support third-party storage arrays. This in-band solution connects to a Fibre Channel SAN on the back end and consolidates available LUNs from existing storage arrays. If needed, those LUNs can remain provisioned to their existing hosts. The Open Storage Controller pools them into NetApp LUNs for block or file provisioning, just as a regular NetApp filer would, and provides NetApp storage services such as snapshots and replication.
When to use and how to choose
If the driver for virtualization is consolidating existing arrays and improving overall provisioning and management, an array-based storage virtualization solution may be a good option. Obviously, you would need to be in the market for a storage system. Prospective buyers would probably also need to be at the upper end of the market, and it wouldn't hurt if they already had a relationship with a vendor offering one of the products in this space.
If the reason to look at virtualization is to support something like off-site replication, data migration or a storage tiering project, a network-based virtualization solution can be more flexible and probably more affordable for companies closer to the midmarket. We’ll cover this in our next tip.
Eric Slack is a senior analyst with Storage Switzerland.