How should I factor data deduplication technology into my virtual machines?
The interesting thing about dedupe technology for virtual machines (VMs) is that it can play a different role with VMs than it does in other contexts, such as backup.
Deduplication technology is commonly thought of as a method for improving overall storage capacity. In the case of backups, deduplication can be used to reduce backup media consumption. Similarly, performing deduplication prior to writing data to the cloud can reduce the total amount of data sent across the wire, thereby making better use of your WAN link's capacity.
When it comes to VMs, however, deduplication can be just as much about performance as it is about capacity. Dedupe technology is important for VMs because of the commonalities that exist between virtual machines.
Let's suppose that a particular organization has a host server running 10 VMs, all of which are running Windows Server 2012. Because each VM is running the same operating system, there is obviously a great deal of redundancy across the VMs, and deduplication can dramatically reduce their combined storage footprint.
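To make the idea concrete, here is a minimal sketch of block-level deduplication. Everything in it is hypothetical for illustration (the block size, the `dedupe` function, and the simulated VM disks are not part of any real product): identical blocks are detected by hashing and stored only once, so ten VM disks that share the same OS image consume little more space than one.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical block size; real systems vary


def dedupe(volumes):
    """Store each unique block once; return the block store and
    a per-volume layout of block hashes."""
    store = {}    # hash -> block data (unique blocks only)
    layouts = []  # per-volume list of block hashes
    for data in volumes:
        layout = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)  # identical blocks kept once
            layout.append(digest)
        layouts.append(layout)
    return store, layouts


# Ten simulated "VM disks": an identical OS image plus a little unique data.
os_image = b"windows-server-2012-system-files" * 1024
vms = [os_image + ("vm-%d" % n).encode() for n in range(10)]

store, layouts = dedupe(vms)
raw = sum(len(v) for v in vms)                  # logical footprint
deduped = sum(len(b) for b in store.values())   # physical footprint
print(raw, deduped)
```

In this toy run the redundant OS blocks collapse to a single stored copy, and only the small per-VM tail blocks remain unique, which is the same effect (in miniature) that dedupe has on a volume full of like-for-like virtual machines.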
At the same time, deduplication can also improve performance for those VMs. If the volume containing the VM files has been deduplicated, storage blocks will be shared among most -- if not all -- of the virtual machines. If those shared storage blocks are cached in memory, Windows can access them much more quickly than if the storage blocks resided solely on disk.
Caching storage blocks in memory can happen even on non-deduplicated systems. On a deduplicated host server, however, block caching can yield a much greater overall performance gain than would be possible if the OS had to cache storage blocks separately for each VM, because each cached storage block can conceivably benefit multiple virtual machines.
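The cache effect described above can be sketched in a few lines. This is purely illustrative (the block identifiers and the set-based cache are invented for the example): without dedupe, each VM's private copy of the same OS block must be read from disk and cached separately; with dedupe, the first VM's read warms the cache for all the rest.

```python
# Without dedupe: each of 10 VMs has its own copy of the same OS block,
# so each copy triggers its own cold read from disk.
cache = set()
disk_reads_without_dedupe = 0
for vm in range(10):
    block_id = ("vm-%d" % vm, "os-block-0")  # distinct block per VM
    if block_id not in cache:
        disk_reads_without_dedupe += 1       # cold read from disk
        cache.add(block_id)

# With dedupe: all 10 VMs reference one shared block, so only the
# first read goes to disk; the other nine are served from cache.
cache = set()
disk_reads_with_dedupe = 0
for vm in range(10):
    block_id = ("shared", "os-block-0")      # same block for every VM
    if block_id not in cache:
        disk_reads_with_dedupe += 1
        cache.add(block_id)

print(disk_reads_without_dedupe, disk_reads_with_dedupe)  # prints: 10 1
```

Ten cold reads collapse to one, which is the sense in which a single cached block can "benefit multiple virtual machines."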
This was first published in August 2013