The release of VMware Inc.'s vSphere 5 brings many exciting new features and enhancements to the virtualization platform, especially when it comes to storage. Two of the biggest new features in that area are Storage Distributed Resource Scheduler (DRS) and Profile-Driven Storage, which provide some much-needed control over storage resources.
In previous versions of vSphere, Distributed Resource Scheduler balanced VM workloads based on CPU and memory resource utilization. Storage DRS extends this capability to storage, enabling intelligent VM initial placement and load balancing based on storage I/O and capacity conditions within a cluster. Profile-Driven Storage, for its part, ensures that VMs are placed on storage tiers based on service-level agreements (SLAs), availability, performance and capabilities of the underlying storage platform. In this tip, we’ll examine both Storage DRS and the storage profile functionality in detail.
Similar to the traditional DRS feature, Storage DRS uses a new type of cluster called a data store cluster, which is a collection of data stores aggregated into a single unit of consumption. By controlling all of the storage resources, Storage DRS enables intelligent initial placement of VM disk files, as well as the shifting of workloads from one storage resource to another when needed to ensure optimum performance and avoid I/O bottlenecks. In simpler terms, just as vMotion moves VMs from host to host, VMs can now be moved from data store to data store as well; the decision to move a VM from one data store to another is made by Storage DRS, which tells Storage vMotion to make the move.
Data store clusters are created by going to the Datastores and Datastore Clusters view in the Inventory section of the vSphere Client, right-clicking a data center and choosing the New Datastore Cluster option. This launches a wizard that allows you to configure the cluster and define automation settings.
At the first screen, name the cluster and choose whether to enable Storage DRS. Then choose an automation level for Storage DRS: either manual mode, where it makes recommendations but does not act on them, or fully automated mode, where VM disk files are moved automatically. Next, define the runtime rules for Storage DRS, which specify how it operates. You can choose whether to include I/O metrics as part of Storage DRS recommendations; if you do not include them, only utilized space is factored in. You can also define the thresholds for both utilized space and I/O latency. Utilized space can be set from 50% to 100%; this setting dictates the minimum level of consumed space that triggers action. The default is 80%, so if a data store has more than 20% free space, no action is taken. I/O latency can be set from 5 milliseconds (msec) to 50 msec; this dictates the minimum latency that must be observed before action is taken. The default is 15 msec, which means at least 15 msec of I/O latency must be occurring before Storage DRS acts.
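The threshold logic above can be sketched as a simple decision rule. This is an illustrative model only, not a real vSphere API: the function name and inputs are hypothetical, and the actual evaluation happens inside vCenter Server. It uses the default 80% space and 15 msec latency thresholds:

```python
# Hypothetical sketch of the Storage DRS runtime thresholds described above.
# Action is considered only when utilized space exceeds the space threshold,
# or (if I/O metrics are included) latency exceeds the latency threshold.

def needs_rebalance(used_pct, latency_ms,
                    space_threshold=80, latency_threshold=15,
                    include_io_metrics=True):
    """Return True if a data store exceeds a Storage DRS threshold."""
    if used_pct > space_threshold:
        return True
    if include_io_metrics and latency_ms > latency_threshold:
        return True
    return False

# A data store at 85% utilization triggers action regardless of latency:
print(needs_rebalance(85, 5))    # True
# At 70% used with 10 msec latency, neither threshold is crossed:
print(needs_rebalance(70, 10))   # False
# With I/O metrics excluded, only utilized space is factored in:
print(needs_rebalance(70, 40, include_io_metrics=False))  # False
```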
There are also advanced settings where you can define the minimum utilization difference that must exist between the source and destination data stores before action is taken. For example, if the threshold is set to 5 and the space used on the source data store is 82% while the destination is at 79%, the difference is only 3%, so Storage DRS will not recommend migrations from that source to that destination. You can also define how often vCenter Server re-evaluates I/O workloads; the default is eight hours, which is not very frequent, and the interval can be set in minutes, hours or days. Finally, there is an I/O imbalance setting that allows migrations only when the I/O load imbalance exceeds the threshold you set.
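The source/destination difference rule can be modeled the same way. Again, the helper function is hypothetical, but it reproduces the 82% vs. 79% example above:

```python
# Illustrative sketch of the advanced utilization-difference setting
# (hypothetical helper, not a real vSphere API). A migration is only
# recommended when the source exceeds the destination by the threshold.

def recommend_migration(source_used_pct, dest_used_pct, min_difference=5):
    """Recommend a move only if the utilization gap meets the threshold."""
    return (source_used_pct - dest_used_pct) >= min_difference

print(recommend_migration(82, 79))  # False: the 3% gap is below the threshold
print(recommend_migration(90, 70))  # True: a 20% gap justifies a migration
```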
Once you define your Storage DRS runtime settings, you then choose which hosts to include in the cluster as well as which data stores, and then you are ready to start using Storage DRS.
One important feature for controlling Storage DRS behavior is anti-affinity rules, which ensure that specific VMs or specific virtual disks do not end up on the same data store. There are certain situations when you want to keep VMs apart. For example, if you have two heavy-I/O database servers running on separate VMs, you may not want both workloads thrashing the same data store. Another is fault tolerance: if at least one of two VMs must remain running at all times, keeping them on separate data stores ensures that if one data store fails, the VM on the surviving data store continues to run.
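As a rough illustration, an anti-affinity rule can be thought of as a constraint over VM-to-data-store placements. The data shapes here are invented for the sketch; in vSphere these rules are configured per data store cluster:

```python
# Hypothetical model of anti-affinity checking: each rule is a set of VMs
# that must not share a data store.

def violates_anti_affinity(placement, rules):
    """placement maps VM name -> data store; rules is a list of VM sets."""
    for group in rules:
        stores = [placement[vm] for vm in group if vm in placement]
        # A duplicate data store within a group breaks the rule
        if len(stores) != len(set(stores)):
            return True
    return False

rules = [{"db1", "db2"}]  # keep the two heavy-I/O database VMs apart
print(violates_anti_affinity({"db1": "ds1", "db2": "ds2"}, rules))  # False
print(violates_anti_affinity({"db1": "ds1", "db2": "ds1"}, rules))  # True
```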
While Storage DRS is a great feature, be careful not to set it too aggressively. A Storage vMotion task is very I/O-intensive, and if too many occur at the same time, performance can suffer. This can be at least partly mitigated by using a storage array that supports copy offload for Storage vMotion (via the vStorage APIs for Array Integration, or VAAI), so host resources are not consumed during the copy. It is also best to run Storage DRS in manual mode before trying automated mode, so you can monitor the recommendations and get a feel for what will occur.
Profile-Driven Storage enables you to ensure that VMs are stored on devices that have specific characteristics, such as certain capacity, availability, performance and redundancy levels. For example, if you have a critical VM that requires high storage performance, you can make sure that it uses only Fibre Channel data stores instead of iSCSI data stores.
There are two components to this feature: storage capabilities and VM storage profiles. Storage capabilities can be either system-defined or user-defined. System-defined capabilities are automatically populated by storage arrays that support the new vStorage APIs for Storage Awareness (VASA). Using VASA, a storage array can tell vCenter Server its capabilities, such as which features it supports, performance characteristics, redundancy and capacity. vCenter Server assigns the system-defined storage capability to each data store that you create from that storage system. If a storage array does not support VASA, you can still manually define capabilities for a storage device and then associate those user-defined capabilities with a data store, including a data store that already has a system-defined capability. But a data store can have only one system-defined and one user-defined capability at a time.

Once you have storage capabilities defined, you then create VM storage profiles from among all the storage capabilities that are available. These profiles are used during provisioning, cloning and Storage vMotion to ensure that only those data stores or data store clusters that are compliant with the virtual machine storage profile are made available.
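Conceptually, matching a profile against data stores is a set-containment check: a data store qualifies only if it offers every capability the profile requires. The capability names below are made up for illustration; real system-defined capabilities are reported by the array via VASA:

```python
# Hypothetical model of capability matching for Profile-Driven Storage.

def datastore_capabilities(system_cap, user_cap=None):
    """A data store carries at most one system-defined and one
    user-defined capability at a time."""
    caps = {system_cap}
    if user_cap:
        caps.add(user_cap)
    return caps

def compatible_datastores(profile_caps, datastores):
    """Return the data stores that satisfy every capability in the profile,
    as happens during provisioning, cloning and Storage vMotion."""
    return [name for name, caps in datastores.items()
            if set(profile_caps) <= caps]

datastores = {
    "ds-fc-01": datastore_capabilities("Replication", "Gold Tier"),
    "ds-iscsi-01": datastore_capabilities("Thin Provisioning"),
}
print(compatible_datastores({"Replication", "Gold Tier"}, datastores))
# ['ds-fc-01']
```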
Once you have profiles created, you can assign them to a VM by right-clicking the VM, selecting the VM Storage Profile option and choosing Manage Profiles. You can also edit the settings of a VM; on the Profiles tab, you can assign profiles to the VM and its virtual disks (.vmdk). A VM's utility files (.vmx, .vmsd, .nvram, .log, etc.) and its virtual disks can have separate VM storage profiles. Once profiles are enabled and assigned to VMs, you can check whether a VM and its virtual disks reside on data stores that comply with the assigned VM storage profile. In the VM Storage Profiles section of the vSphere Client, you can configure, enable and check the compliance status of VMs. Noncompliant VMs can be migrated to a data store that satisfies the profile.
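The compliance check and remediation flow can be sketched as follows. The capability names and data structures are invented for illustration; in practice vCenter Server performs the check and Storage vMotion performs the move:

```python
# Hypothetical sketch of profile compliance checking and remediation.

def check_compliance(vms, datastores):
    """vms maps VM name -> (required capability set, current data store);
    datastores maps data store name -> capability set.
    Returns {vm: suggested compliant target} for noncompliant VMs."""
    remediation = {}
    for vm, (required, current) in vms.items():
        if required <= datastores[current]:
            continue  # VM already complies with its storage profile
        # Suggest the first data store that offers every required capability
        target = next((ds for ds, caps in datastores.items()
                       if required <= caps), None)
        remediation[vm] = target
    return remediation

datastores = {"gold-ds": {"Replication", "SSD"}, "bronze-ds": {"Thin"}}
vms = {"sql01": ({"Replication", "SSD"}, "bronze-ds"),
       "web01": ({"Thin"}, "bronze-ds")}
print(check_compliance(vms, datastores))  # {'sql01': 'gold-ds'}
```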
Eric Siebert is a VMware expert and author of two books on virtualization.