What data storage virtualization looks like today
What you'll learn: When implementing new technology, such as software-defined storage, users must consider access points, application programming interfaces and initial formatting to achieve the best performance and capacity results.
When implemented properly, software-defined storage establishes a hardware-independent and workload-agnostic storage application layer between applications and physical storage resources. As with any technology, there are right and wrong ways to implement this software-defined storage abstraction layer.
One approach is to establish the storage virtualization layer by working with the application programming interfaces (APIs) of storage hardware, leveraging the "hooks" hardware vendors provide to their on-board, controller-based software for constructing volumes and associating volumes with services provided on the array as value-added software. The problem with this approach is that the cost of keeping pace with changes in multiple vendors' underlying kit is high, and that cost is reflected in the price of the software. If a hardware vendor changes its firmware or software, a software-defined storage vendor must catch up to those changes, potentially inconveniencing consumers in the process.
Similarly, if new technology appears in the market, consumers may not be able to leverage it until the storage hypervisor vendor adds it to its list of supported products. Support issues may also result from a hardware vendor dropping out of the storage business or being acquired by a company that does not share its APIs with the storage hypervisor vendor. Bottom line: Wrangling API connections to many storage platforms is like herding cats, making this strategy foolhardy.
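To make the maintenance burden concrete, here is a minimal sketch of the adapter-per-vendor pattern such a product implies. All class and method names are hypothetical, and the vendor calls are simulated; the point is that every vendor API needs its own adapter, which must be updated whenever that vendor's firmware or API changes.

```python
from abc import ABC, abstractmethod

class ArrayAdapter(ABC):
    """One adapter per vendor; each firmware release risks breaking it."""
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> dict:
        ...

class VendorAAdapter(ArrayAdapter):
    def create_volume(self, name, size_gb):
        # Would call Vendor A's controller API; simulated here.
        return {"vendor": "A", "name": name, "size_gb": size_gb}

class VendorBAdapter(ArrayAdapter):
    def create_volume(self, name, size_gb):
        # Vendor B's API counts in MB -- the kind of drift that must be
        # tracked release by release for every supported platform.
        return {"vendor": "B", "name": name, "size_mb": size_gb * 1024}

class VirtualizationLayer:
    """Routes provisioning requests to the right vendor adapter."""
    def __init__(self, adapters):
        self.adapters = adapters  # maps array IDs to vendor adapters

    def provision(self, array_id, name, size_gb):
        return self.adapters[array_id].create_volume(name, size_gb)
```

Each new array model, firmware revision or acquisition adds another adapter to keep in sync, which is exactly the herding-cats problem described above.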
A better option is to leverage the mount points of storage gear as the point of virtualization. Rather than connecting individually to each hardware platform -- thereby becoming beholden to storage vendors for access to their APIs -- storage pros can leverage the connections vendors must already make to a market share-leading server operating system (everyone needs to make their kit compatible with the Microsoft Windows Server OS). Virtualizing storage at mount points is just as effective as virtualizing it at storage hardware APIs, and it is much less prone to interruption.
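A rough sketch of the mount-point approach, with the mounts simulated as plain dictionaries (in a real deployment, the capacity figures would come from the OS, for example via `shutil.disk_usage` on each path). The function name and data shapes are illustrative assumptions, not any product's API.

```python
def pool_from_mounts(mounts):
    """Aggregate capacity exposed at OS mount points into a single pool.

    `mounts` maps a mount path to its reported capacity. Because the
    virtualization layer talks to the OS mount, it never needs the
    array vendor's proprietary API.
    """
    pool = {"total_gb": 0, "members": []}
    for path, info in sorted(mounts.items()):
        pool["total_gb"] += info["capacity_gb"]
        pool["members"].append(path)
    return pool
```

Note that adding a new array is just adding a new mount: no vendor-specific integration work is required.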
Once the connections to the physical infrastructure are established, most storage virtualization products require the virtual controller to take control of the capacity exposed by the mount points. Much as a volume must be formatted with a file system before use, storage virtualization products typically require a process by which they take over the capacity exposed via physical storage mounts. This can take some time, since it may require writing zeros to every bit location on each volume or drive exposed by the infrastructure. The result is a pool of storage that can be managed efficiently and parsed out as virtual volumes with associated data protection services and performance characteristics. Multiple pools of such volumes can be established, providing the foundation for tiered storage.
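The pool-and-carve model described above can be sketched as follows. This is a simplified illustration under assumed names (no real product exposes exactly this interface): capacity taken over from physical mounts becomes a pool, tagged with a tier, from which virtual volumes are carved.

```python
class StoragePool:
    """Capacity taken over from physical mounts, carved into virtual volumes."""

    def __init__(self, tier, capacity_gb):
        self.tier = tier              # e.g. "flash" or "nearline", for tiering
        self.capacity_gb = capacity_gb
        self.allocated_gb = 0
        self.volumes = {}             # virtual volume name -> size in GB

    def carve_volume(self, name, size_gb):
        """Allocate a virtual volume from the pool's remaining capacity."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        self.volumes[name] = size_gb

    def free_gb(self):
        return self.capacity_gb - self.allocated_gb
```

Creating several pools with different `tier` labels (flash, SAS, nearline) is the foundation for the tiered storage the article mentions.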
Formatting the virtualized or software-defined storage environment the first time may take a while. It will usually require migrating data off each array that is to be virtualized and pooled, then back onto virtual volumes. Done incrementally, one application or one business process at a time, the task can usually be accomplished in a methodical and reliable way. Vendors usually provide wizards to aid in the process.
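The incremental, one-application-at-a-time discipline can be expressed as a short loop. This is a generic sketch of the process, not a vendor wizard: `copy_fn` and `verify_fn` are placeholder callables standing in for whatever data-movement and validation steps a given migration actually uses.

```python
def migrate_incrementally(apps, copy_fn, verify_fn):
    """Migrate one application's data at a time, verifying each step.

    Returns the list of successfully migrated apps and the first app
    that failed verification (None if everything succeeded), so a
    failure stops the process before it touches the next workload.
    """
    migrated = []
    for app in apps:
        copy_fn(app)                 # move data onto a virtual volume
        if not verify_fn(app):       # validate before moving on
            return migrated, app
        migrated.append(app)
    return migrated, None
```

Stopping at the first failed verification is what makes the process methodical: at any point, every migrated workload is known-good.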
Be sure to look for software-defined storage products that are not linked exclusively (or even mainly) to a particular vendor's hardware or server virtualization software. In the business of software-defined infrastructure, agnosticism is highly prized, both as a matter of architectural freedom and of cost containment. Consider a product that supports implementation both as a centralized server and as a federated resource manager. The centralized server, supporting clustered failover with one or more peers, enables ease of management and control even in a complex SAN infrastructure. The federated deployment capability helps satisfy requirements where virtualized storage services need to be delivered close to the application workload, as is the case with most server-side solid-state deployment models. The best software offerings support both centralized and federated implementations, with centralized configuration management tools that span multiple software deployments.
Start small and expand as you gain confidence. The good news is that a properly designed software-defined storage implementation should deliver an application performance improvement of two to four times, simply as a function of solid-state I/O queuing that occurs ahead of the spinning-rust infrastructure. This is not magic -- it is the same thing that happens when caching memory is placed ahead of any network- or fabric-attached storage device (for example, the Performance Acceleration Module or flash controllers on NetApp filers).
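The queuing effect behind that speedup can be illustrated with a toy write-back cache. This is a conceptual sketch only (real implementations must also handle persistence, ordering and failure of the fast tier): writes are acknowledged by the fast tier immediately and destaged to the slow backing store later.

```python
class WriteBackCache:
    """Toy fast tier: acknowledges writes immediately, destages in batches."""

    def __init__(self, backing_store):
        self.backing = backing_store   # stands in for the spinning-disk tier
        self.dirty = {}                # writes queued ahead of the slow tier

    def write(self, key, value):
        self.dirty[key] = value        # acknowledged at memory/flash speed

    def read(self, key):
        # Serve from the fast tier first, fall back to the slow tier.
        return self.dirty.get(key, self.backing.get(key))

    def flush(self):
        """Destage queued writes to the backing store; returns the count."""
        n = len(self.dirty)
        self.backing.update(self.dirty)
        self.dirty.clear()
        return n
```

The application sees flash-speed acknowledgments even though the data ultimately lands on disk, which is the same effect the article attributes to solid-state I/O queuing ahead of the spinning-rust infrastructure.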
In addition to performance improvements, you can entrust data protection and capacity management services to the software-defined storage layer rather than purchasing expensive annual contracts for value-added software on individual arrays. At the end of the day, this may more than pay for your investment in software-defined storage technology.