All VDI infrastructures share common I/O characteristics that are far different from those of traditional virtualized servers. With virtual desktops, the guest operating systems, the applications and, in some cases, the user data are all the same. You have tens, hundreds or even thousands of virtual machines all doing something very similar. So, what happens if they all power on at the same time, creating a "boot storm"? Or if an antivirus scan kicks off across all desktops simultaneously? Or if a Windows update runs across all virtual machines at once? While these events commonly occur on physical desktops without issue or complaint, you can imagine the havoc they can wreak in a large virtual desktop environment that lacks the I/O capacity to handle them.
When sizing VDI environments, the question is always whether to size storage I/O capacity (measured in I/O operations per second, or IOPS) for the desktops' average utilization during a typical day or for the maximum possible I/O utilization.
Most VDI and storage architects end up sizing the storage I/O capacity somewhere between the two -- spending money on storage performance that is only needed for short-lived (though, in the case of boot storms, regular) events.
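To make the tradeoff concrete, here is the sizing arithmetic as a short sketch. The per-desktop IOPS figures are assumed example values chosen for illustration, not measurements from any particular environment:

```python
# Illustrative VDI storage sizing arithmetic. The per-desktop IOPS
# figures below are assumptions for the sake of example.

DESKTOPS = 500
STEADY_STATE_IOPS = 10   # assumed average IOPS per desktop during the day
BOOT_STORM_IOPS = 50     # assumed per-desktop IOPS during a boot storm

average_capacity = DESKTOPS * STEADY_STATE_IOPS   # size for a typical day
peak_capacity = DESKTOPS * BOOT_STORM_IOPS        # size for the worst case

# The compromise most architects land on falls between the two.
compromise = (average_capacity + peak_capacity) // 2

print(average_capacity, compromise, peak_capacity)  # 5000 15000 25000
```

The gap between the two endpoints is what you pay for: in this example, the compromise figure is three times the capacity the desktops need on an ordinary day.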
VMware has been pushing the envelope on what can be accomplished by its software, offering features traditionally delivered by hardware or dedicated third-party software. VMware View Storage Accelerator in VMware View 5.1 is one such feature. It addresses the storage I/O sizing problem by using software caching and deduplication to smooth out the I/O peaks caused by simultaneous virtual desktop events.
By using View Storage Accelerator, you should be able to size your VDI storage I/O capacity based on typical daily I/O utilization instead of peak utilization (or instead of a value between average and peak). This brings the following benefits:
- Cost savings. You no longer have to size VDI storage based on maximum I/O.
- Improved performance for end users. Boot times for VMs are faster and application performance is better.
- Storage performance improvements. In many cases, storage arrays are shared among many hosts, so by reducing I/O utilization of the VDI infrastructure, you improve performance for other servers and apps using that storage.
- Network bandwidth reduction. With many VDI infrastructures using iSCSI and NFS, the storage traffic is traversing the network (which is shared by many applications). By reducing I/O throughput, you improve performance for other applications.
According to VMware, enabling View Storage Accelerator yields roughly an 80% reduction in peak IOPS, a 45% reduction in average IOPS, a 65% reduction in peak throughput and a 25% reduction in average throughput -- though it's worth noting that these claims have not been validated by an independent body. Enabling View Storage Accelerator helps not only during peak events; because the virtual machine (VM) disk deduplication is always working to reduce I/O requests to the storage, it also lowers average I/O utilization.
How does View Storage Accelerator work?
View Storage Accelerator isn't actually new technology; it's essentially an updated version of a feature called "content-based read cache." It works by creating a digest file for every Virtual Machine Disk (VMDK), plus a global cache. The per-VMDK digest file maps each disk block number to a hash value, and the global cache -- a reserved area of memory on each ESXi host -- maps hash values to the actual block data, deduplicating cached blocks by content. When a VM issues a read request, the digest file (whose metadata is maintained in memory) supplies the hash value for the requested block, and the global cache is consulted to see whether a block with that hash is already cached. If it is, the data is served from memory; if not, it has to be retrieved from disk.
Because data being written will invalidate sections of the digest file, it is necessary to periodically recompute the digest (which can be done on a schedule).
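The read path and digest bookkeeping described above can be sketched as a simplified in-memory model. This is illustrative only -- the class and method names are my own, not VMware's, and the real implementation operates on VMDK blocks, not Python dictionaries:

```python
import hashlib

# Minimal sketch of a content-based read cache (illustrative, not
# VMware's actual implementation).

class ContentBasedReadCache:
    def __init__(self, disk):
        self.disk = disk     # simulated disk: block number -> bytes
        self.cache = {}      # global cache: content hash -> block data
        self.digest = {}     # per-disk digest: block number -> content hash
        self.recompute_digest()

    def _hash(self, data):
        return hashlib.sha1(data).hexdigest()

    def recompute_digest(self):
        # Periodic digest recomputation (run on a schedule in the product).
        self.digest = {blk: self._hash(d) for blk, d in self.disk.items()}

    def read(self, block):
        h = self.digest.get(block)
        if h is not None and h in self.cache:
            return self.cache[h]          # cache hit: served from memory
        data = self.disk[block]           # cache miss: go to disk
        if h is None:                     # digest entry was invalidated
            h = self._hash(data)
            self.digest[block] = h
        self.cache[h] = data              # identical blocks share one entry
        return data

    def write(self, block, data):
        self.disk[block] = data
        self.digest.pop(block, None)      # a write invalidates the digest


disk = {0: b"windows boot block", 1: b"windows boot block", 2: b"user data"}
cbrc = ContentBasedReadCache(disk)
cbrc.read(0)
cbrc.read(1)              # same content as block 0: deduplicated
print(len(cbrc.cache))    # 1 -- blocks 0 and 1 share a single cache entry
```

The deduplication is what makes boot storms cheap: hundreds of desktops reading the same Windows boot blocks all hit the one cached copy instead of the array.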
To turn on View Storage Accelerator, you enable a feature called "host caching" on each View connection server, and then enable View Storage Accelerator in the advanced storage options when you create a new virtual desktop pool in View.
Benefits and drawbacks of View Storage Accelerator
So what's the good and the bad of using View Storage Accelerator? On the plus side, View Storage Accelerator appears to tremendously reduce the I/O utilization peaks and significantly improve the general I/O performance of all View virtual desktops. On the flip side, you need View 5.1 to use the feature, and it works with VMware View on vSphere but not with other VDI platforms on vSphere, though perhaps that will change in the future.
David Davis is the author of the best-selling VMware vSphere video training library from TrainSignal.