Microsoft to drop Hyper-V shared storage requirement with Windows Server 2012

In Windows Server 2012, Microsoft will abandon Hyper-V shared storage as a prerequisite for advanced functionality. Read up on this as well as other storage-related changes to Hyper-V 3.0.

Windows Server 2012, which is expected to ship later this year, will offer numerous improvements both in regard to Hyper-V and to storage.

Perhaps the most important storage-related change in Hyper-V 3.0, the version that will ship with Windows Server 2012, relates to shared storage. Although prior versions of Hyper-V didn’t require shared storage, advanced functionality such as live migration and failover clustering was impossible without it (and therefore financially out of reach for many SMBs). And implementing Hyper-V shared storage required either a SAN or a NAS system.

In Hyper-V 3.0, it will be possible to use live migration or failover clustering in conjunction with direct-attached storage (DAS), eliminating the need for a SAN or NAS system. Even so, Hyper-V 3.0 will generally yield better performance if you rely on shared storage.
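
To illustrate, here is a minimal sketch of what a "shared nothing" live migration between two standalone hosts might look like using the Hyper-V PowerShell module. The host names (HV01, HV02), VM name and destination path are placeholders, and the authentication settings shown are just one possible configuration:

  # Run on each Hyper-V host to allow live migrations without a cluster or SAN
  Enable-VMMigration
  Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
  Set-VMHost -UseAnyNetworkForMigration $true

  # From HV01: move the VM and its virtual hard disks to local storage on HV02
  Move-VM -Name "VM01" -DestinationHost "HV02" -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"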

Storage improvements in Windows Server 2012

Windows Server 2012 will deliver operating system-level services that SMBs can use to build their own storage platform. The most important of these services are data deduplication, iSCSI Target Server and Server for NFS, shown in the screen shot below.

Screen shot: Windows Server 2012 will offer a variety of storage options.
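
Assuming the feature names stay as they appear in the beta (they can be verified with Get-WindowsFeature), all three services could be added from PowerShell in a single line:

  # Add the deduplication, iSCSI target and NFS server components
  Install-WindowsFeature -Name FS-Data-Deduplication, FS-iSCSITarget-Server, FS-NFS-Service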

The iSCSI Target Server option is significant because it will allow organizations to easily turn a storage volume or virtual hard disk within a Windows server into an iSCSI target that can be used for virtual machine storage. More importantly, because the storage is being hosted on a Windows server, administrators will be able to use the same tools to manage the storage as they use to manage their Windows servers.
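
As a rough sketch, carving a VHD-backed LUN out of local disk and presenting it to a Hyper-V host might look like the following. The path, target name and initiator IQN are illustrative, and the parameter names reflect the Windows Server 2012 beta cmdlets:

  # Create a virtual hard disk to serve as the LUN
  New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\VMStore01.vhd" -Size 200GB

  # Define an iSCSI target and restrict it to the Hyper-V host's initiator
  New-IscsiServerTarget -TargetName "HyperV-Target" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hv01.contoso.com"

  # Map the virtual disk to the target so the host can mount it
  Add-IscsiVirtualDiskTargetMapping -TargetName "HyperV-Target" -Path "E:\iSCSIVirtualDisks\VMStore01.vhd"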

The Server for NFS option will allow NFS shares to be created on Windows servers. This will be handy for organizations that rely heavily on Unix or Linux systems because it will let data stored on Windows servers be accessed through the same file-sharing protocol (NFS) that those Unix/Linux servers already use.
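
A hedged example of publishing an NFS export from PowerShell, assuming the NFS cmdlets ship as they appear in the beta (the share name, path and client name are placeholders):

  # Export a folder over NFS and grant a Linux host read/write access
  New-NfsShare -Name "Exports" -Path "E:\NfsExports"
  Grant-NfsSharePermission -Name "Exports" -ClientName "lnx01" -ClientType "host" -Permission "readwrite"

  # On the Linux side, the export would then be mounted with something like:
  #   mount -t nfs winserver:/Exports /mnt/exports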

Another storage-related feature worth mentioning is Multipath I/O. SANs are typically built with redundant, interconnected fabrics. That redundancy provides fault tolerance in the event of a cable or adapter failure, and in some cases it can also provide a performance boost because multiple communications paths can be used simultaneously. The Windows Server 2012 Multipath I/O feature will give the operating system similar performance-enhancing and fault-tolerance capabilities. The best part is that using Multipath I/O will not require the purchase of specialized hardware; NICs from multiple vendors will be able to be combined into a single multipath solution.
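
For example, adding the feature and letting the built-in Microsoft DSM claim iSCSI-attached disks might look like this (a reboot may be required before the paths are claimed):

  # Install Multipath I/O and automatically claim iSCSI disks with the Microsoft DSM
  Install-WindowsFeature -Name Multipath-IO
  Enable-MSDSMAutomaticClaim -BusType iSCSI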

Yet another storage feature that should go a long way toward improving the Hyper-V experience is native deduplication. The Windows operating system will allow for native, block-level deduplication on NTFS volumes. (Deduplication is not supported on ReFS, the Resilient File System that is new to Windows Server 2012.) This is a big deal because virtual machine storage is by nature highly redundant: each virtual machine might contain the same operating system files, and some virtual machines might run the same applications. Deduplicating data on the underlying storage means that virtual machines will consume less physical storage, which, in turn, could make solid-state storage more practical as the sole repository for virtual hard disks on a host server.
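
As an illustrative sketch (the drive letter is a placeholder, and deduplicating volumes that hold running virtual machines is not an officially supported scenario), enabling deduplication on a data volume and checking the savings could look like this:

  # Turn on deduplication for the volume, run an optimization pass and report savings
  Install-WindowsFeature -Name FS-Data-Deduplication
  Enable-DedupVolume -Volume "D:"
  Start-DedupJob -Volume "D:" -Type Optimization
  Get-DedupStatus -Volume "D:"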

What about large organizations?

As you can see, Microsoft has laid the groundwork for Windows servers to be used as storage appliances. However, most large organizations have already invested in storage solutions. Even for companies that already have Hyper-V shared storage, Microsoft will offer at least two features that will enhance Hyper-V’s use of SAN storage.

One such feature is “virtual Fibre Channel,” enabled in the screen shot below under Add Hardware. Virtual Fibre Channel will allow virtual machines to connect directly to Fibre Channel storage. Microsoft notes, however, that virtual Fibre Channel cannot be used for a virtual machine’s boot/system disk; that disk must still be attached to a virtual IDE controller.

Screen shot: Windows Server 2012 will support the use of “virtual Fibre Channel.”
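
A rough sketch of wiring this up from PowerShell, assuming the beta cmdlet names hold (the SAN name, VM name and worldwide names are placeholders):

  # Define a virtual SAN tied to the host's physical Fibre Channel ports,
  # then give a virtual machine a synthetic Fibre Channel adapter on that SAN
  New-VMSan -Name "Production-SAN" -WorldWideNodeName "C003FF0000FFFF00" -WorldWidePortName "C003FF5778E50002"
  Add-VMFibreChannelHba -VMName "SQL-VM" -SanName "Production-SAN"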

The other feature that will prove useful to large organizations is Windows Server 2012’s Offloaded Data Transfer (ODX), which admins will be able to take advantage of in Hyper-V. The idea behind this feature is that when data needs to be moved between two LUNs, it is faster to move it at the SAN level than to copy it up to the Windows host and then back down to the destination. The ODX feature will offload data transfers between LUNs so that the entire process can occur at the hardware level.
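
ODX kicks in automatically when the underlying array supports it; the switch Microsoft documents for turning it off is a registry value. A quick sanity check might look like this (0, or a missing value, means offload is allowed; 1 disables it):

  # Check whether offloaded data transfers are disabled on this host
  Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "FilterSupportedFeaturesMode" -ErrorAction SilentlyContinue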

Brien Posey is a freelance technical writer who has received Microsoft’s MVP award six times. He has served as CIO for a national chain of hospitals and health care companies, and as a network administrator for the U.S. Department of Defense at Fort Knox, Ky.

This was first published in June 2012
