vStorage APIs for Multipathing: How the APIs work

While using vStorage APIs for Multipathing improves storage efficiency, improper configuration and compatibility problems can affect their operation. Find out how the APIs work.

With vSphere 4.0, VMware developed its various vStorage APIs to enable third-party vendors to directly integrate their storage hardware and applications with vSphere. One of those sets of APIs, the vStorage APIs for Multipathing (VAMP), helps to intelligently control path selection from storage adapters in a host to storage devices. Multipathing allows a host to connect to a storage device over multiple paths, for redundancy and load balancing. This functionality remains unchanged in vSphere 5.

Multipathing may seem simple in concept, but it is actually quite complex, and there are a lot of factors that can affect its operation and caveats to pay attention to.

For instance, while leveraging the vStorage APIs for Multipathing can improve storage efficiency, if they're not configured properly, efficiency might decrease instead. And not all storage devices support the vStorage APIs for Multipathing (you can verify whether yours does by checking the VMware Compatibility Guide for storage devices). You may also need to update the firmware on your storage device before you can use them. In addition, many vSphere installations work just fine without the vStorage APIs for Multipathing. If you do decide to use the APIs, test performance before and after making the change to confirm you are benefiting from the more advanced multipathing.

Storage and virtual server architecture

Let’s discuss how a host is connected to LUNs in a typical VMware environment and where the vStorage APIs for Multipathing come into play.

A typical vSphere host will have two storage controllers that connect to two different storage switches, each of which connects to a separate controller on a storage device, as depicted below.

[Figure: A typical multipath configuration with two host adapters, two storage switches and two storage controllers. Source: VMware]

This configuration allows for maximum redundancy: any one component could fail and you would still have a path available to your storage device. And because multiple paths are available, they can be used for more than just failover; they can also be used to balance I/O across the redundant components that make up the paths from the host to the storage device.

Paths are defined by the following convention: controller:target:LUN:partition. An example would be "vmhba0:1:3:1." The "vmhba0" portion of the path is the name/ID of a controller in the host (if a host has two controllers, they might be named "vmhba0" and "vmhba1"). The target is the ID of the storage processor in the storage device; most storage devices have two of them for redundancy. The third part of the path, the LUN ID, is the unique ID assigned to each LUN configured on the storage device. Finally, the partition ID is simply the number assigned to a partition on a LUN and is not commonly used. In the configuration depicted above, Host A would have four paths available to LUN 3 of the storage device: vmhba0:1:3, vmhba0:2:3, vmhba1:1:3 and vmhba1:2:3.
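If you want to see the paths a host actually has, you can list them from an ESXi console or the vSphere CLI. This is an illustrative sketch rather than part of the original article; the syntax differs by version, and note that vSphere 5 runtime path names also include a channel number (for example, vmhba0:C0:T1:L3):

  # vSphere 4.x
  esxcfg-mpath -l

  # vSphere 5.x
  esxcli storage core path list

Each entry shows the runtime name of the path, the adapter and target it runs through, and the device (LUN) it leads to.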

How the APIs work

vSphere uses a special layer in the VMkernel called the Pluggable Storage Architecture (PSA), a modular framework that coordinates multipathing operations. The PSA is designed as a base into which storage plug-ins can be snapped. There are two main types of plug-ins that can connect to the PSA: VMware's Native Multipathing Plug-in (NMP) and third-party vendors' Multipathing Plug-ins (MPPs). The NMP, a generic plug-in module that supports any storage device listed in VMware's Compatibility Guide, is essentially the management layer for the two types of sub-plug-ins that sit under it: Storage Array Type Plug-ins (SATPs) and Path Selection Plug-ins (PSPs). These components make up the vStorage APIs for Multipathing.

SATPs monitor the health and state of each physical path and can activate inactive paths when needed. Every storage device is different, so vSphere includes an SATP for each third-party storage device that it supports, containing information on how to manage paths on that particular device. vSphere also has some non-vendor-specific, generic SATPs that can be used if a vendor does not have one for its array; these cover common array behaviors such as Active/Active (A/A), Active/Passive (A/P) and Asymmetric Logical Unit Access (ALUA).

SATPs are the brawn that connects to a physical path; PSPs, meanwhile, are the brains deciding which physical path to take.
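To see which SATPs and PSPs are actually installed on a host, esxcli can list them. This is a small illustration, not part of the original article; the command namespace moved in vSphere 5:

  # vSphere 4.x
  esxcli nmp satp list
  esxcli nmp psp list

  # vSphere 5.x
  esxcli storage nmp satp list
  esxcli storage nmp psp list

The SATP list also shows the default PSP each SATP hands out to the devices it claims.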

Assuming the vStorage APIs for Multipathing are not in use, the default policies that can be used to route I/O are as follows (an example of assigning one of them to a device follows the list):

  • Most Recently Used (MRU) continues to use the same path until a failure with the path occurs. Once the failed path is restored, it continues to use the existing path and does not switch back to the path that had failed.
  • Fixed Path (FP) continues to use the same path until a failure with the path occurs. Once the failed path is restored, it switches back to the path that had failed.
  • Round Robin (RR) will alternate I/O on each path in a round-robin fashion to spread the load across multiple components.
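For example, to move a single LUN from its current policy to Round Robin, you can set the PSP on that device with esxcli. The device identifier below (naa.xxxxxxxx) is a placeholder for your own LUN's identifier, and the internal names of the three policies are VMW_PSP_MRU, VMW_PSP_FIXED and VMW_PSP_RR:

  # vSphere 4.x
  esxcli nmp device setpolicy --device naa.xxxxxxxx --psp VMW_PSP_RR

  # vSphere 5.x
  esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR

As always, confirm with your array vendor that Round Robin is supported for your device before changing the policy.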

The vStorage APIs for Multipathing add intelligence on top of these default policies. SATPs are global in nature; you would use only one per storage device. PSPs, on the other hand, can be set individually on each LUN as desired. The NMP, SATPs and PSPs all work together to handle the delivery of I/O from a VM to a storage device in the following sequence (a command for checking which SATP and PSP each device is using appears after the list).

1. NMP talks to the PSP that is assigned to the storage device.
2. PSP chooses a physical path to send the I/O down.
3. NMP sends the I/O down the path that the PSP has chosen.
4. If an I/O error occurs, NMP tells the SATP about it.
5. SATP looks at the error and activates a new path if necessary.
6. PSP is called to select a new path for the I/O.
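To confirm which SATP and PSP the NMP has actually assigned to each device, you can list the devices the NMP has claimed. A small sketch, with the usual version split:

  # vSphere 4.x
  esxcli nmp device list

  # vSphere 5.x
  esxcli storage nmp device list

The output shows, per device, the Storage Array Type (the SATP) and the Path Selection Policy (the PSP) currently in effect.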

Within the PSA, in addition to the VMware-supplied NMP, third-party MPPs can also be used, either to replace the default NMP or to run alongside it. Third-party MPPs have the advantage of being developed by a vendor specifically for its storage devices, so they can handle path management operations more intelligently than the VMware NMP. That means more efficient load balancing, which translates to better I/O bandwidth, as well as better failover path selection. MPPs can take complete control of path failover and load-balancing operations for the devices they claim.
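If you want to check whether a third-party MPP is registered on a host alongside the NMP, you can list the multipathing plug-ins the PSA has loaded (vSphere 5 syntax shown; treat this as an illustrative sketch rather than a vendor's documented procedure):

  esxcli storage core plugin list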

The relationships among the various components of the PSA are depicted below.

[Figure: The relationships among the PSA, NMP, SATPs, PSPs and third-party MPPs. Source: Eric Siebert, based on info provided by VMware]

What the APIs look like in vSphere Client

Paths can be viewed and managed in vSphere via the vSphere Client by selecting the Storage Adapters view under the Configuration tab of a host. Here you can see all of your storage adapters and the storage devices that they are connected to. The Owner column shows which module owns the connection to the storage device; “NMP” indicates it’s the default VMware NMP module. Otherwise, vendor-specific MPP modules will show in the Owner column if they are available and configured.

You can right-click on a disk and select Manage Paths to view all the I/O paths and see whether they are active or passive. The words "I/O" in the Status column indicate that I/O is being sent on that path. You can also see which SATP and PSP policies are in use and change the PSP if needed.

Paths can also be managed by selecting Storage under the Configuration tab, selecting a datastore, choosing Properties and clicking the Manage Paths button.

The esxcli command, which is available in the vSphere CLI or the vSphere Management Assistant, can also be used to view and manage SATP and PSP policies. While the PSP can be changed using the vSphere Client, to change the SATP you need to use the esxcli command. The SATP is normally chosen automatically based on the characteristics of the storage device it is connected to, but you can change it to a vendor-specific one if it is available. Check with your storage device vendor about its level of support for multipathing in vSphere and what you need to do to enable it. VMware has published a SAN Configuration Guide that provides information on how to set up and manage MPPs.
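As a hedged example of the kind of esxcli operations involved (the SATP name VMW_SATP_EXAMPLE and the vendor string are placeholders; use whatever your array vendor actually documents), a vSphere 5 host can be told to claim an array with a particular SATP and to hand out a different default PSP for that SATP:

  # Associate devices from a given vendor with a specific SATP
  esxcli storage nmp satp rule add --satp VMW_SATP_EXAMPLE --vendor "VendorName"

  # Change the default PSP that this SATP assigns to the devices it claims
  esxcli storage nmp satp set --satp VMW_SATP_EXAMPLE --default-psp VMW_PSP_RR

New claim rules typically apply to devices as they are reclaimed, so a rescan or reboot may be needed before existing LUNs pick up the change.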

Words of caution

As mentioned above, if multipathing is set up incorrectly, efficiency could drop. For Active/Passive arrays, a LUN can be owned by only one storage controller at a time, and path thrashing (whereby LUN ownership is ping-ponged between storage controllers) can occur, which can greatly reduce performance. Make sure you follow all the steps necessary to prepare your storage device for multipathing, and make sure that your hosts are properly configured as well. After implementation, if you don't see any I/O gain, something may not be configured properly, or your I/O patterns or hardware configuration may simply not benefit from multipathing. You can also do some tweaking of vSphere to help improve multipathing; each storage vendor should have recommendations for its storage devices.

Eric Siebert is a VMware expert and has written a number of articles for SearchVirtualStorage.com.

This was first published in August 2011
