
Storage functionality gets a big boost in vSphere 5

With vSphere 4’s vStorage APIs, released in 2009, VMware made strides toward improving the way its platform interacted with storage resources. But the company for the most part paid scant attention to storage management from within vCenter Server. vSphere 5, which VMware plans to release in the third quarter, is set to change that. The latest version contains many improvements, both big and small, that make it an exciting release for storage. In this tip we will survey the new vSphere storage enhancements.

Before we drill down into the new functionality, though, it’s important to note that because some of these new vSphere storage features rely on specific functionality built into storage arrays, you will need to make sure your storage array supports them before you can use them. Most storage vendors do not provide immediate support for new vSphere features and APIs across all their storage models, so be sure to check with your vendor to find out when the features you need will be supported.

VMware realizes how critical storage is to vSphere and has worked hard to provide more seamless management, enhanced capabilities and maximum performance around storage. The new storage features in vSphere 5 help further tighten the bond between vSphere and its critical storage devices, making this release an attractive upgrade.

Storage DRS

In what is perhaps the most notable storage improvement in vSphere 5, VMware has expanded its Distributed Resource Scheduler (DRS) to include storage. In vSphere 4, when DRS balances VM workloads across hosts, it takes only CPU and memory usage into account and ignores storage resource usage. Storage I/O Control allows you to prioritize and limit I/O on data stores, but it doesn’t allow you to redistribute it. Storage DRS in vSphere 5 fills that gap by selecting the best placement for a VM based on available disk space and current I/O load. Beyond initial placement, it also load balances between data stores using Storage vMotion, based on storage space utilization, I/O metrics and latency. Anti-affinity rules can also be created to keep certain virtual disks on separate data stores. Data store clusters (also known as storage pods) aggregate multiple storage resources so Storage DRS can manage them at the cluster level, much as DRS manages compute resources and policies in a host cluster.
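
To illustrate the kind of decision Storage DRS automates, here is a minimal Python sketch of initial-placement logic: it filters data stores in a cluster by free space and latency, honors a simple anti-affinity rule, and picks the least-loaded candidate. The thresholds, names and data structures are illustrative assumptions, not VMware’s actual implementation.

```python
# Illustrative model of Storage DRS initial placement -- thresholds and
# data structures are assumptions, not VMware's implementation.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    capacity_gb: float
    used_gb: float
    latency_ms: float                  # observed I/O latency
    vmdks: set = None                  # virtual disks already placed here

    @property
    def utilization(self) -> float:
        return self.used_gb / self.capacity_gb

def place_vmdk(needed_gb: float, cluster: list,
               anti_affinity: set = frozenset(),
               max_util: float = 0.80, max_latency_ms: float = 15.0):
    """Pick the best data store in the cluster for a new virtual disk."""
    candidates = [
        ds for ds in cluster
        if ds.capacity_gb - ds.used_gb >= needed_gb       # enough free space
        and ds.utilization <= max_util                    # space threshold
        and ds.latency_ms <= max_latency_ms               # I/O threshold
        and not (anti_affinity & (ds.vmdks or set()))     # keep rivals apart
    ]
    # Prefer the candidate with the lowest combined space/I-O pressure.
    return min(candidates, key=lambda ds: (ds.utilization, ds.latency_ms),
               default=None)

cluster = [
    Datastore("ds1", 1000, 900, 5.0),                 # too full
    Datastore("ds2", 1000, 400, 3.0, {"web-a.vmdk"}), # hosts the rival disk
    Datastore("ds3", 1000, 500, 8.0),
]
best = place_vmdk(50, cluster, anti_affinity={"web-a.vmdk"})
print(best.name if best else "no placement")          # ds3
```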

Storage Profiles

vSphere 5’s Storage Profiles enable virtual machine storage provisioning to be independent of the specific storage resources available in an environment. You can define virtual machine placement rules in terms of storage characteristics and then monitor a VM’s storage placement against those rules. Storage Profiles ensure that a particular VM remains on a class of storage that meets its performance requirements; if a VM is provisioned on a class of storage that doesn’t meet them, it is flagged as noncompliant and its performance can suffer.
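
Conceptually, the compliance check reduces to comparing what a VM’s data store offers against what its profile demands. The Python sketch below models that comparison; the capability names and the set-based test are illustrative assumptions, not VMware’s implementation.

```python
# Hypothetical model of a storage-profile compliance check; capability
# names and the comparison are illustrative, not VMware's implementation.
def is_compliant(required: set, offered: set) -> bool:
    """A VM is compliant when its data store offers every capability
    its assigned storage profile requires."""
    return required <= offered

gold_profile = {"ssd", "replicated", "raid10"}
current_datastore = {"ssd", "replicated", "raid10", "dedupe"}
print(is_compliant(gold_profile, current_datastore))  # True: VM may stay
print(is_compliant(gold_profile, {"sata"}))           # False: noncompliant
```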

vStorage APIs for Storage Awareness

Through a new set of APIs, the vStorage APIs for Storage Awareness (VASA), vSphere 5 is aware of a data store’s class of storage. The APIs enable vSphere to read the performance characteristics of a storage device so it can determine whether a VM is compliant with a Storage Profile. They also make it much easier to select the appropriate disk for virtual machine placement, which is useful when certain storage array capabilities or characteristics would benefit a particular VM. VASA allows storage arrays to integrate with vCenter Server for management functionality via server-side plug-ins, giving a vSphere administrator more in-depth knowledge of the topology, capabilities and state of the physical storage devices available to the cluster. In addition to underpinning Storage Profiles, these APIs are a key enabler for Storage DRS, supplying the array information it needs to work optimally with storage arrays.
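
As a mental model, think of each VASA report as a small structured record per device covering the three things the paragraph above mentions: topology/identity, capabilities and state. The field names below are illustrative assumptions for explanation, not the actual VASA schema.

```python
# Illustrative stand-in for the information a VASA provider reports to
# vCenter; the field names are assumptions, not the real VASA schema.
from dataclasses import dataclass, field

@dataclass
class StorageDeviceReport:
    device_id: str
    storage_class: str                              # e.g. "gold-ssd"
    capabilities: set = field(default_factory=set)  # thin, replication, ...
    healthy: bool = True                            # device state

report = StorageDeviceReport(
    device_id="naa.600508b1001c0001",               # hypothetical device
    storage_class="gold-ssd",
    capabilities={"thin", "replication", "snapshots"},
)
# Consumers such as Storage Profiles and Storage DRS can now reason about
# placement without vendor-specific knowledge of the array.
```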

VMFS5 LUN size limit

The new version of VMware’s proprietary Virtual Machine File System, Version 5 (in keeping with vSphere 5), includes performance improvements, but the biggest change is in scalability: The 2 TB LUN limit in vSphere 4 has finally been increased, to 64 TB. In vSphere 4, you choose from among 1 MB, 2 MB, 4 MB and 8 MB block sizes when creating a VMFS data store. The block size dictates the size limit for a single virtual disk: 1 MB allows 256 GB, 2 MB allows 512 GB, 4 MB allows 1 TB, and 8 MB allows 2 TB. The default block size in vSphere 4 is 1 MB, and once set it cannot be changed without deleting the VMFS data store and re-creating it. This caused problems, as many people used the default block size and later learned that they couldn’t create virtual disks greater than 256 GB. In vSphere 5, the block size choice goes away: VMFS5 volumes use a single 1 MB block size that supports virtual disks up to 2 TB. In other words, while the LUN size limit has been increased, the 2 TB limit on a single virtual disk has not.
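
The block-size-to-disk-size relationship is simple arithmetic; a quick sketch using the values from the paragraph above:

```python
# VMFS3 block size (MB) -> maximum size of a single virtual disk (GB),
# using the values from the paragraph above.
MAX_VMDK_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

for block_mb, limit_gb in MAX_VMDK_GB.items():
    print(f"{block_mb} MB blocks -> max virtual disk {limit_gb} GB")

# VMFS5 removes the choice: a single 1 MB block size, with virtual
# disks supported up to the unchanged 2 TB limit.
```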

For those upgrading to vSphere 5, moving existing data stores from VMFS3 to VMFS5 is seamless and non-destructive, but an upgraded volume preserves its existing block size. If you want the unified 1 MB block size, you have to delete the volume and re-create it; if the volume is already at 8 MB, there is probably not much advantage to doing so. When it comes to LUN size, however, you can grow your LUNs beyond 2 TB after upgrading to VMFS5 without any problems.

iSCSI UI support

In vSphere 5, VMware improved the vSphere Client interface used to configure both hardware and software iSCSI adapters. In previous versions, completely configuring iSCSI support meant visiting multiple areas of the client, which made for a complicated and confusing process. In vSphere 5 you can configure dependent hardware iSCSI and software iSCSI adapters, along with the network configuration and port binding, in a single dialog box. vSphere 5 also provides full SDK access, so iSCSI configuration can be scripted.

Storage I/O Control NFS support

Many new storage features support only block-based storage devices when they first ship in vSphere, and Storage I/O Control (SIOC) followed that pattern: It enables you to prioritize and limit I/O on data stores, but before Version 5 it did not support NFS data stores. In vSphere 5, SIOC has been extended to provide cluster-wide I/O shares and limits for NFS data stores.
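
Shares-based control can be pictured as proportional division of a congested data store’s throughput, with optional per-VM limits as hard caps. The sketch below models only that idea; the share values and the math are illustrative, and the real SIOC mechanism (latency-triggered throttling of host device queues) is more involved.

```python
# Illustrative model of shares-based I/O allocation on a congested data store.
# Real SIOC throttles host device queues when latency crosses a threshold;
# this sketch shows only the proportional-shares idea.
def allocate_iops(total_iops: int, vm_shares: dict,
                  vm_limits: dict | None = None) -> dict:
    vm_limits = vm_limits or {}
    total_shares = sum(vm_shares.values())
    alloc = {}
    for vm, shares in vm_shares.items():
        fair = total_iops * shares / total_shares    # proportional slice
        cap = vm_limits.get(vm, float("inf"))        # optional hard limit
        alloc[vm] = min(fair, cap)
    return alloc

print(allocate_iops(10_000,
                    {"db": 2000, "web": 1000, "batch": 500},
                    vm_limits={"batch": 800}))
# {'db': 5714.28..., 'web': 2857.14..., 'batch': 800}
```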

vStorage APIs for Array Integration: Thin provisioning

The vStorage APIs for Array Integration (VAAI), introduced in vSphere 4, offload several storage-intensive functions from vSphere to a storage array. In vSphere 5, VAAI has been enhanced to allow storage arrays that use thin provisioning to reclaim blocks when a virtual disk is deleted. Normally, when a virtual disk is deleted, its blocks still contain data, and the storage array has no way of knowing they have been freed. This new capability lets vSphere inform the array about the deleted blocks so it can reclaim the space and maximize space efficiency.
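
The mechanics amount to the host telling the array which block ranges are now dead so they can return to the free pool (in SCSI terms this is the UNMAP primitive). The hypothetical class below models only the bookkeeping, not the actual protocol.

```python
# Hypothetical model of thin-provisioning dead-space reclamation.
# In real deployments VAAI uses the SCSI UNMAP command for this;
# the class below only models the bookkeeping.
class ThinLun:
    def __init__(self, size_blocks: int):
        self.size_blocks = size_blocks
        self.allocated = set()          # blocks backed by physical storage

    def write(self, block: int):
        self.allocated.add(block)       # thin LUN allocates on first write

    def unmap(self, blocks: set):
        """Host tells the array these blocks are dead; array reclaims them."""
        self.allocated -= blocks

lun = ThinLun(size_blocks=1_000_000)
vmdk_blocks = {100, 101, 102, 103}
for b in vmdk_blocks:
    lun.write(b)
# Virtual disk deleted: without notification the array still backs the blocks.
lun.unmap(vmdk_blocks)                  # space returns to the free pool
print(len(lun.allocated))               # 0
```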

Swap to SSD

Using solid-state drives as a storage tier is increasing in popularity. vSphere 5 provides new forms of SSD handling and optimization. For instance, the VMkernel will automatically recognize and tag SSD devices that are local to VMware ESXi or are available on shared storage devices. In addition, the VMkernel scheduler has been modified to allow VM swap files to extend to local or network SSD devices, which minimizes the performance impact of using memory overcommitment. ESXi can auto-detect SSD drives on certain supported storage arrays; you can use the Storage Array Type Plug-ins (SATP) rules, which are part of the Pluggable Storage Architecture (PSA) framework, to tag devices that cannot be auto-detected.
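
Conceptually, each device ends up with an “is SSD” flag, set either by auto-detection or by an administrator-supplied rule, and swap placement prefers flagged devices. The sketch below is a hypothetical model of that flow, not ESXi’s logic.

```python
# Hypothetical model of SSD tagging and swap-file placement preference.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    auto_detected_ssd: bool = False
    manually_tagged_ssd: bool = False   # e.g. tagged via a SATP claim rule

    @property
    def is_ssd(self) -> bool:
        return self.auto_detected_ssd or self.manually_tagged_ssd

def pick_swap_device(devices: list) -> Device:
    """Prefer an SSD-backed device for VM swap; fall back to anything."""
    ssds = [d for d in devices if d.is_ssd]
    return (ssds or devices)[0]

devices = [Device("naa.sata01"), Device("naa.ssd01", auto_detected_ssd=True)]
print(pick_swap_device(devices).name)   # naa.ssd01
```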

Storage vMotion enhancements: Snapshots and mirror mode

In vSphere 4, a VM with active snapshots could not be moved to another data store using Storage vMotion. In vSphere 5, that limitation has been removed. This matters because, while Storage vMotion operations were uncommon in vSphere 4, they will be routine in vSphere 5: The new Storage DRS feature moves VMs between data stores on a regular basis as storage I/O loads are redistributed.

Another Storage vMotion improvement relates to how changes are tracked. In vSphere 4, VMware enhanced Storage vMotion by using the Changed Block Tracking (CBT) feature to track block changes while a Storage vMotion occurred, instead of relying on VM snapshots; once the copy process completed, the changed blocks were copied to the destination disk. In vSphere 5 the company improved Storage vMotion further by abandoning CBT in favor of a new mirror mode. Instead of keeping track of blocks that change during a Storage vMotion and copying them once it completes, Storage vMotion now performs a mirrored write: Any write that occurs during the process goes to both the source and the destination at the same time, and each disk must acknowledge the write so the two stay in sync. In vSphere 4, a VM generating heavy storage I/O could change blocks faster than CBT passes could copy them, in which case the Storage vMotion would eventually fail. The new process is more efficient, much faster and avoids that failure mode.
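
The contrast fits in a few lines: a CBT-style copy records writes that land during the bulk copy and re-copies them afterward, while mirror mode applies every in-flight write to both disks and requires both to acknowledge it. The sketch below is a simplified model of the two ideas, not VMware’s implementation.

```python
# Simplified contrast between CBT-style tracking and mirror-mode writes.
# A model of the idea only, not VMware's implementation.

def cbt_style_copy(src: dict, dst: dict, dirty_blocks: set):
    """Bulk-copy the disk, then re-copy blocks dirtied during the copy.
    If the guest dirties blocks faster than catch-up passes finish,
    the migration can fall behind -- the failure mode mirror mode removes."""
    dst.update(src)                       # initial bulk copy pass
    for block in dirty_blocks:            # catch-up pass over changed blocks
        dst[block] = src[block]

def mirror_mode_write(src: dict, dst: dict, block: int, data: bytes) -> bool:
    """During migration, each guest write lands on both disks, and both
    must acknowledge it, so source and destination never diverge."""
    src[block] = data
    dst[block] = data
    return block in src and block in dst  # both "acks" required

src = {1: b"a", 2: b"b"}
dst = dict(src)                           # bulk copy already done
mirror_mode_write(src, dst, 3, b"c")      # guest write mid-migration
print(src == dst)                         # True: disks stayed in sync
```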

Eric Siebert is a VMware expert and author of two books on virtualization.

This was first published in July 2011
