
Hyper-V 3.0's virtual Fibre Channel to open door for more workloads

In Hyper-V 3.0, expected for release when Windows Server 2012 becomes generally available in early September, Microsoft will introduce a staggering number of new features. One feature that will undoubtedly benefit administrators who have been tasked with managing storage is virtual Fibre Channel. Simply put, virtual Fibre Channel allows virtual machines (VMs) to connect directly to Fibre Channel-based storage.

Virtual Fibre Channel is important for a few reasons. For starters, the technology will make it practical to virtualize some servers that have not traditionally been considered good candidates for virtualization. In the past, if a server's workload required the server to connect directly to Fibre Channel-based storage, that workload could not easily be virtualized. (Without virtual Fibre Channel, a direct connection can be achieved via SCSI pass-through, but this approach has various inefficiencies and can undermine failover capabilities.) With Hyper-V 3.0 and virtual Fibre Channel, this will change.

Virtual Fibre Channel will also allow administrators to become more creative with their virtualization infrastructure. For example, administrators will be able to build Fibre Channel-based clusters at the guest level. The technology will even make it possible to create virtual SANs.
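
To make the virtual SAN idea more concrete, the sketch below models a virtual SAN conceptually as a named grouping of physical Fibre Channel host bus adapter ports on the Hyper-V host, to which a VM's virtual Fibre Channel adapters can later be connected. This is an illustration only; the class and attribute names are made up and are not part of any Hyper-V API.

    # Illustrative model only -- not a Hyper-V API. A virtual SAN is treated
    # here simply as a named group of physical Fibre Channel host bus adapter
    # ports on the Hyper-V host; VMs attach virtual Fibre Channel adapters to
    # the named group rather than to a specific physical port.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PhysicalFcPort:
        wwpn: str                      # World Wide Port Name of the physical port

    @dataclass
    class VirtualSan:
        name: str
        ports: List[PhysicalFcPort] = field(default_factory=list)

        def add_port(self, port: PhysicalFcPort) -> None:
            self.ports.append(port)

    # Example: group two physical ports into a virtual SAN named "ProdSAN"
    san = VirtualSan("ProdSAN")
    san.add_port(PhysicalFcPort("50:01:43:80:12:34:56:78"))
    san.add_port(PhysicalFcPort("50:01:43:80:12:34:56:79"))
    print(f"{san.name} contains {len(san.ports)} physical ports")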

Virtual Fibre Channel requirements and limitations

Operating system support for virtual Fibre Channel is surprisingly flexible. Although the technology is a Hyper-V 3.0/Windows Server 2012 feature, it is backward compatible with VMs that are running Windows Server 2008 and 2008 R2. Of course, the virtualization host does have to run Hyper-V 3.0. That said, as with any new technology, there are requirements and limitations associated with the use of virtual Fibre Channel.

Most of the limitations exist at the hardware level. For example, it is not enough for your host server to be equipped with one or more Fibre Channel host bus adapters. The physical host bus adapters must support virtual Fibre Channel. This eliminates the possibility of using legacy host bus adapters.

In addition, the physical ports within the host bus adapters must be set up in a topology that supports N_Port ID Virtualization (NPIV), because Microsoft's virtual Fibre Channel implementation is built around NPIV mapping. (This limitation applies only to ports that are being used for virtual Fibre Channel -- not to other Fibre Channel ports that may exist within the server.) The host bus adapter ports that are being used for virtual Fibre Channel must be connected to an NPIV-enabled SAN. Furthermore, the SAN must be configured so that physical SAN storage is presented to the VMs as a series of logical units (LUNs).

In NPIV mapping, a series of virtual N_Port IDs is mapped to a single N_Port on a physical Fibre Channel host bus adapter. Whenever an administrator powers on a VM that is configured to use virtual Fibre Channel, Hyper-V provisions that VM with a virtual port that is mapped to a physical N_Port. Whenever the VM is powered down, the virtual port is removed from the VM until the next time it is powered on.
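
As a rough illustration of that mapping, the following sketch shows a single physical N_Port handing out virtual N_Port IDs as VMs power on and reclaiming them at power-off. It is purely conceptual; the names and the way virtual WWPNs are generated here are invented for the example and do not reflect Hyper-V's actual interfaces.

    # Conceptual sketch of NPIV mapping -- not Hyper-V code. One physical
    # N_Port hands out virtual N_Port IDs (virtual WWPNs) to VMs when they
    # power on and removes them again when the VMs power off.
    from itertools import count

    class PhysicalNPort:
        def __init__(self, wwpn: str):
            self.wwpn = wwpn                    # the physical port's identity
            self.virtual_ports = {}             # vm_name -> virtual WWPN
            self._next_id = count(1)

        def power_on(self, vm_name: str) -> str:
            """Provision a virtual N_Port ID for the VM and map it to this port."""
            virtual_wwpn = f"{self.wwpn}:v{next(self._next_id):02d}"
            self.virtual_ports[vm_name] = virtual_wwpn
            return virtual_wwpn

        def power_off(self, vm_name: str) -> None:
            """Remove the VM's virtual port until the next power-on."""
            self.virtual_ports.pop(vm_name, None)

    # Example: two VMs share one physical N_Port through separate virtual IDs
    port = PhysicalNPort("50:01:43:80:aa:bb:cc:01")
    print(port.power_on("SQL-VM"))      # e.g. 50:01:43:80:aa:bb:cc:01:v01
    print(port.power_on("Exchange-VM")) # e.g. 50:01:43:80:aa:bb:cc:01:v02
    port.power_off("SQL-VM")            # virtual port is released at power-off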

Additional considerations

One of the most important considerations with virtual Fibre Channel is how it affects your ability to live migrate a VM to another host.

Live migration is fully supported for use with VMs that are configured to use virtual Fibre Channel, but you must ensure that each host machine is equipped with the required host bus adapter(s).

Hyper-V 3.0 actually makes it possible to live migrate VMs without losing virtual Fibre Channel connectivity in the process. This is because Hyper-V assigns two sets of World Wide Names to each virtual Fibre Channel adapter. The live migration process works by alternating between the two sets so that the target host establishes storage connectivity before the VM is migrated. This approach not only guarantees virtual Fibre Channel connectivity from the target host, but also ensures that there will be no storage connectivity disruption during the live migration.
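
The sketch below walks through that handoff at a conceptual level. It assumes, as described above, that each virtual Fibre Channel adapter carries two alternate World Wide Name sets; the class, function and WWN values are illustrative only and do not reflect Hyper-V's internal implementation.

    # Conceptual sketch of the live-migration handoff -- not Hyper-V code.
    # The virtual adapter carries two WWN sets; the target host logs in to
    # the SAN with the standby set before the VM moves, so storage
    # connectivity never drops during the migration.
    from dataclasses import dataclass

    @dataclass
    class VirtualFcAdapter:
        wwn_set_a: str
        wwn_set_b: str
        active_set: str = "A"

        def active_wwn(self) -> str:
            return self.wwn_set_a if self.active_set == "A" else self.wwn_set_b

        def standby_wwn(self) -> str:
            return self.wwn_set_b if self.active_set == "A" else self.wwn_set_a

    def live_migrate(adapter: VirtualFcAdapter) -> None:
        # 1. Target host establishes a SAN login with the standby WWN set
        #    while the source host keeps using the active set.
        print(f"target host logs in with {adapter.standby_wwn()}")
        # 2. The VM is moved; the standby set becomes the active set.
        adapter.active_set = "B" if adapter.active_set == "A" else "A"
        # 3. The next migration alternates back to the other set.
        print(f"VM now running against {adapter.active_wwn()}")

    adapter = VirtualFcAdapter("C0:03:FF:00:00:00:00:01", "C0:03:FF:00:00:00:00:02")
    live_migrate(adapter)   # storage stays reachable throughout the move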

 

This was first published in August 2012

