Storage is the slowest and most complex host resource, and when bottlenecks occur, they can bring your virtual machines (VMs) to a crawl. In a VMware environment, Storage I/O Control provides much-needed control of storage I/O and should be used to ensure that the performance of your critical VMs is not affected by VMs on other hosts when there is contention for I/O resources.
Storage I/O Control was introduced in vSphere 4.1, taking storage resource controls built into vSphere to a much broader level. In vSphere 5, Storage I/O Control has been enhanced with support for NFS data stores and clusterwide I/O shares.
Prior to vSphere 4.1, storage resource controls could be set on each host at the VM level using shares that provided priority access to storage resources. While this worked well enough for individual hosts, it is common for many hosts to share data stores, and since each host worked individually to control VM access to disk resources, VMs on one host could limit the disk resources available to VMs on other hosts.
The following example illustrates the problem:
- Host A has a number of noncritical VMs on Data Store 1, with disk shares set to Normal
- Host B runs a critical SQL Server VM that is also located on Data Store 1, with disk shares set to High
- A noncritical VM on Host A starts generating intense disk I/O due to a job that was kicked off; since Host A has no resource contention, the VM is given all the storage I/O resources it needs
- Data Store 1 starts experiencing a lot of demand for I/O resources from the VM on Host A
- Storage performance for the critical SQL VM on Host B starts to suffer as a result
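The steps above can be sketched in code. This is an illustration only, with hypothetical share values and capacities: each host divides I/O among its own VMs in isolation, so a Normal-shares VM on an uncontended host gets just as much of the shared data store as a High-shares VM on another host.

```python
# Hypothetical sketch of why per-host shares fail across hosts: each host
# allocates disk I/O proportionally by shares among *its own* VMs only.

def per_host_allocation(host_vms, host_capacity):
    """Divide a host's available I/O proportionally by shares among its VMs.

    host_vms: list of (vm_name, shares) tuples for one host.
    """
    total = sum(shares for _, shares in host_vms)
    return {name: host_capacity * shares / total for name, shares in host_vms}

# Host A: one noisy noncritical VM (Normal shares), no local contention.
host_a = per_host_allocation([("noisy-vm", 1000)], host_capacity=5000)
# Host B: one critical SQL Server VM (High shares), no local contention.
host_b = per_host_allocation([("sql-vm", 2000)], host_capacity=5000)

print(host_a["noisy-vm"], host_b["sql-vm"])   # 5000.0 5000.0
# Each host grants its lone VM everything it asks for, but the shared data
# store cannot serve both at full speed -- and the High shares on the SQL VM
# buy it no priority over the noisy VM on the other host.
```

The key point the sketch shows: because neither host sees local contention, the share values never come into play, and the data store is contended anyway.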
How Storage I/O Control works
Storage I/O Control solves this problem by enforcing storage resource controls at the data store level so all hosts and VMs in a cluster accessing a data store are taken into account when prioritizing VM access to storage resources. Therefore, a VM with Low or Normal shares will be throttled if higher-priority VMs on other hosts need more storage resources. Storage I/O Control can be enabled on each data store and, once enabled, uses a congestion threshold that measures latency in the storage subsystem. Once the threshold is reached, Storage I/O Control begins enforcing storage priorities on each host accessing the data store to ensure VMs with higher priority have the resources they need.
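The datastore-level behavior can be sketched the same way. This is a simplified model, not VMware's actual algorithm: once observed latency crosses the congestion threshold, capacity is divided by shares across all VMs on the data store, regardless of host.

```python
# Simplified model of SIOC: below the congestion threshold, VMs are not
# throttled; above it, the data store's capacity is split proportionally by
# shares across every VM from every host.

def datastore_allocation(all_vms, datastore_capacity, latency_ms, threshold_ms=30):
    """all_vms: list of (vm_name, shares) across every host using the data store.

    Returns None for a VM's allocation when it is unthrottled.
    """
    if latency_ms < threshold_ms:
        return {name: None for name, _ in all_vms}   # no congestion, no throttling
    total = sum(shares for _, shares in all_vms)
    return {name: datastore_capacity * shares / total for name, shares in all_vms}

vms = [("noisy-vm", 1000), ("sql-vm", 2000)]   # Normal vs. High shares
print(datastore_allocation(vms, 6000, latency_ms=35))
# → {'noisy-vm': 2000.0, 'sql-vm': 4000.0} -- the SQL VM now gets twice the I/O
```

Unlike the per-host example, the High-shares SQL VM is now prioritized even though the contending VM lives on a different host.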
The congestion threshold is set in milliseconds (msec) for each data store on which SIOC is enabled. The default congestion threshold is 30 msec and can be configured from 10 msec to 100 msec. In most cases the default value works just fine; before changing it, you need to understand the effects of the change. Setting the value higher results in higher aggregate data store throughput but weaker VM I/O control. A lower value results in stronger VM I/O control because share controls are enforced more often. For SIOC to function correctly, the threshold must be set to the same value on all data stores that share the same spindles on an array.
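The trade-off can be made concrete with a toy latency trace (hypothetical numbers): a lower threshold simply means share enforcement kicks in for a larger fraction of the time.

```python
# Illustrative only: the lower the congestion threshold, the more often SIOC
# enforces shares (stronger control, lower aggregate throughput).
latency_trace_ms = [12, 18, 25, 33, 41, 28, 15]   # hypothetical latency samples

def fraction_throttled(trace, threshold_ms):
    """Fraction of samples at or above the congestion threshold."""
    return sum(1 for latency in trace if latency >= threshold_ms) / len(trace)

print(fraction_throttled(latency_trace_ms, 30))   # default 30 msec: 2 of 7 samples
print(fraction_throttled(latency_trace_ms, 20))   # stricter 20 msec: 4 of 7 samples
```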
Storage I/O Control also adds another I/O control for VMs: A new setting allows you to set the maximum number of IOPS that a VM can generate. This limit operates independently of shares and basically sets a hard speed limit on a VM. One important thing to note is that the limit applies even if there is no contention and plenty of storage resources available; therefore, care must be taken when using this setting.
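The important difference from shares is that the limit is unconditional. A minimal sketch, with made-up numbers:

```python
# Sketch: an IOPS limit is a hard cap, applied whether or not there is any
# contention on the data store. Shares, by contrast, only matter under congestion.

def effective_iops(requested, limit=None):
    """limit=None means unlimited, which is the default for every VM."""
    return requested if limit is None else min(requested, limit)

print(effective_iops(8000))              # 8000 -- unlimited VM runs at full speed
print(effective_iops(8000, limit=500))   # 500  -- capped even on an idle array
```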
How to enable and configure Storage I/O Control
SIOC can be enabled and configured using the vSphere client by choosing the Datastores inventory object from the Home page.
Next, choose the data store that you want to enable SIOC on in the left pane and click the Configuration tab in the right pane. Then click the Properties link.
Put a checkmark next to “Enabled” under the Storage I/O Control area to enable it.
Once enabled, you can click the Advanced button to change the threshold from the default value if desired.
Now that SIOC is enabled, you can adjust the Share and IOPS settings on VMs as needed. By default, all VMs have Normal shares and unlimited IOPS so they all have equal access to resources. To change the default settings, select a VM and edit its settings. Then on the Resources tab, select Disk and you can change the Shares and IOPS values.
Once you have SIOC enabled, you can monitor it by selecting a data store and choosing the Performance tab and then selecting Performance from the View drop-down menu. This allows you to see average latency and aggregated IOPS for the data store as well as for individual VMs. Note that latency is displayed in microseconds; you can convert this to milliseconds by multiplying the value by 0.001 (for example, 8,000 microseconds equals 8 milliseconds).
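The unit conversion is simple arithmetic:

```python
# The performance chart reports latency in microseconds; multiply by 0.001
# to get milliseconds for comparison against the congestion threshold.

def usec_to_msec(usec):
    return usec * 0.001

print(usec_to_msec(8000))   # 8.0 -- i.e., 8,000 microseconds is 8 msec
```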
Here are some additional points to consider.
- Storage I/O Control is not enabled by default in vSphere 4.1 or 5.
- In vSphere 4.1, Storage I/O Control works only on block-based Virtual Machine File Systems (VMFS) data stores (iSCSI and Fibre Channel); NFS is not supported. In vSphere 5, however, SIOC has been enhanced with support for NFS data stores.
- SIOC is included only in the Enterprise Plus edition of vSphere.
- Data stores that are SIOC-enabled must be managed by a single vCenter Server system.
- vCenter Server and all hosts connected to the data store must be running vSphere 4.1 or greater.
- In both vSphere 4.1 and 5, Raw Device Mappings (RDMs) are not supported.
- SIOC does not support data stores with multiple extents.
- Before using SIOC on data stores that are backed by arrays with automated storage tiering capabilities, check the VMware Storage/SAN Compatibility Guide to verify whether your automated tiered storage array has been certified to be compatible with Storage I/O Control.
Eric Siebert is a VMware expert and has written a number of articles for SearchVirtualStorage.com.