Brian Madden discusses VDI IOPS, SSD, storageless VDI
Date: Nov 21, 2012
Storage plays a big part in any VDI implementation. For a successful deployment, there are big storage decisions to be made. In this Tech Talk, VDI expert Brian Madden addresses common issues regarding storage for desktop virtualization, including how to plan VDI IOPS for each desktop, how solid-state drives (SSDs) can figure into a VDI architecture and what storageless VDI is. Watch the video or read the transcript below.
How do you calculate the number of VDI IOPS you should plan for each desktop?
Brian Madden: Take the biggest number you can think of and double that. Next question [laughs]. IOPS is tough. [With] traditional laptops, you go to Best Buy and buy a $400 laptop with an old-fashioned, super-cheap magnetic spinning disk. That thing has 50 to 80 IOPS. … A lot of these vendors who are selling the software and trying to make sizing reports [will advise planning] for 7 or 10 IOPS per user.
Of course they are going to say that, because IOPS are expensive: the fewer IOPS you plan for, the cheaper your desktop virtualization solution is, and that helps them sell more product. They say things like, "Users don't really need more than 10 IOPS averaged over a day," which is maybe true, but the problem is if you look at the person who's got a laptop with 50 IOPS. Yes, they may be averaging 10 IOPS all day, but it's spiking. They're maxing out all 50 for a second while they're loading an application; then they're using none while they're sitting there picking their nose. Then they click Save, and it spikes again, and then they use none. If you took all those spikes of IOPS and flattened them out -- from spikes of 50 down to a flat 10 -- it is going to make everything take way longer.
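Madden's point about spikes versus averages can be sketched with some back-of-the-envelope arithmetic. The burst size below (500 I/Os to launch an application) is an invented number for illustration; only the 50 and 10 IOPS figures come from the discussion above.

```python
# Hypothetical numbers to illustrate why sizing to the daily average hurts:
# a desktop that averages 10 IOPS still bursts to its disk's full 50 IOPS
# while loading an application.

burst_ios = 500  # I/O operations issued during one app launch (assumed)

def burst_seconds(available_iops: float) -> float:
    """Time to complete the burst at a given IOPS ceiling."""
    return burst_ios / available_iops

local_disk = burst_seconds(50)  # cheap laptop's magnetic disk
flat_plan = burst_seconds(10)   # VDI sized to the 10 IOPS average

print(f"50 IOPS disk: {local_disk:.0f} s")  # 10 s
print(f"10 IOPS plan: {flat_plan:.0f} s")   # 50 s
```

Same total work, but capping each user at the flattened average makes every interactive burst take five times longer -- which is exactly the sluggishness users complain about.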
A lot of laptops and desktops are using SSDs. SSD has way more IOPS than magnetic [disk]. Think about it: Your cheap $400 Best Buy laptop supports 50 to 80 IOPS with a magnetic disk, and you're thinking, "No, that's not enough. I want SSD. It's going to be so fast and awesome." Then you get SSD and it is fast and awesome. If 50 to 80 was not enough for you, why are you doing your calculation with 10 [IOPS] as your baseline?
There are storage solutions out there that can do 200, 300 or 400 IOPS per desktop. That's what I would love to see. I want the IOPS to be as [high] as [possible]. I couldn't possibly imagine less than 50. That is a loaded question. It depends on the scenario, but I want hundreds.
Where in the desktop virtualization environment should storage administrators implement solid-state to be most effective?
Madden: First of all, solid-state means a lot of things; there are SSDs, there is dynamic RAM [DRAM] and there is caching. Let me turn the question around a little bit. … We want our users to be very fast. If I say, "I want hundreds of IOPS per user," no one can afford that (probably except for spies in the government). … If we were just buying stacks of SSDs or hard drives, we can't afford 200 per user, so we have to start to make some economies, some efficiencies of scale.
A lot of the storage vendors now have storage products -- some hardware-based and some software -- that work at the level of single, individual blocks. If multiple virtual machines (multiple desktops) are using the same block of data, they can consolidate it into one single block on the storage system.
When Windows boots up, even though you have your desktop and I have mine -- you have all your programs installed and I have all mine; you have pictures of kittens and I've got pictures of dinosaurs; they look really different -- it's still the same version of Windows, the same kernel and the same registry; there are a lot of identical blocks between both of our machines. If our storage system is smart, it can say, "This block, I can use for both of these two users and probably 10 of our co-workers, too." If that one block is shared among all of our virtual machines, we can put that block in cache -- a faster tier such as DRAM or SSD. And if we are serving this data block out of memory, for example, our IOPS are … thousands or millions, because it is all memory.
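The block-consolidation idea Madden describes is essentially content-addressed deduplication. Here is a minimal sketch, assuming 4 KB blocks and using toy byte strings as "disk images"; real products do this in hardware or in the storage stack, not in Python.

```python
import hashlib

BLOCK = 4096  # assumed block size

def blocks(image: bytes):
    """Split an image into fixed-size blocks."""
    return [image[i:i + BLOCK] for i in range(0, len(image), BLOCK)]

def dedupe(images):
    """Store each unique block once, keyed by its content hash.
    Each image becomes just a list of hashes pointing into the store."""
    store = {}
    layouts = []
    for img in images:
        layout = []
        for blk in blocks(img):
            h = hashlib.sha256(blk).hexdigest()
            store.setdefault(h, blk)  # identical blocks collapse to one copy
            layout.append(h)
        layouts.append(layout)
    return store, layouts

# Two desktops share the same three "Windows" blocks but differ in user data.
windows = b"".join(bytes([i]) * BLOCK for i in range(3))
desktop_a = windows + b"kittens".ljust(BLOCK, b"\0")
desktop_b = windows + b"dinosaurs".ljust(BLOCK, b"\0")

store, layouts = dedupe([desktop_a, desktop_b])
print(len(store))  # 5 unique blocks stored instead of 8
```

Because the shared blocks exist only once in the store, caching that one copy in DRAM serves reads for every desktop that points at it.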
What's interesting is that SSD by itself isn't really going to buy us anything. It gets faster, but … SSD is complicated. What I want to see is something where our storage system has some intelligence so it can start to consolidate blocks that are the same amongst all users. Realize that even though we have 100 users, each with a 20 GB hard drive image, if we can cache 2 GB of that, that gives us 90% of the blocks that all of our users need. … And then we throw that in cache or in SSD, and it's there. The actual technology … I don't care about that; it's SSD, quantum spin mechanics, it's DRAM, whatever. … I care about having the intelligence to consolidate blocks more than I care about the specific storage technology.
What is storageless VDI?
Madden: Awesome -- the answer is that it's awesome. [With] storageless VDI … the actual hosts -- the actual ESX, or [Citrix XenServer], or Hyper-V servers that host your disk images -- [have] no hard drives.
The general idea is that you take everything that the user boots, and put it into RAM. [To understand] storageless VDI -- and there are a few vendors who are doing different versions of this -- think of the old days, in DOS [disk operating system]. Remember that RAM disk, where you could actually create a disk out of memory? Imagine how just ridiculously, blazing fast [it would be] if you took your C drive and put it into RAM.
You buy a server; let's say your VDI server is going to host 100 desktops, and 192 GB of RAM is how you would spec it out. Instead of buying it with 192 GB of RAM, buy it with 256 GB, take that extra 64 GB of RAM and dedicate it to be a virtual disk in RAM, and then all the VMs can put their disk images (their disk blocks) inside that RAM. Remember … if a block is the same from machine to machine, it can be consolidated.
Now all your VMs can run, and they will run at full speed, but it is actually in memory. This is only while the machine is being used. The disks are stored permanently on some slow, traditional … back-end system. The idea is when a user boots up, it picks whatever server it should connect to, it boots up that VM, and then as it is booting up the VM, it's pulling all these blocks across the wire -- maybe that takes a minute or two to fill up this memory cache -- but then that VM is running from memory. You cannot even count the IOPS. It's almost like a short circuit; it's like a division-by-zero error. Your IOPS are a billion.
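The boot flow Madden describes -- blocks live permanently on slow shared storage, get pulled across the wire on first read, and are served from server RAM afterwards -- can be sketched as a simple read-through cache. The class names and the 10 ms back-end latency here are invented for illustration.

```python
import time

class SlowBackend:
    """Stand-in for the slow, traditional back-end array."""
    def __init__(self, blocks):
        self._blocks = blocks

    def read(self, block_id):
        time.sleep(0.01)  # simulated network/disk latency (assumed)
        return self._blocks[block_id]

class RamDisk:
    """Read-through cache in the host's spare RAM: first access pulls
    the block from the back end; later accesses run at memory speed."""
    def __init__(self, backend):
        self.backend = backend
        self.cache = {}

    def read(self, block_id):
        if block_id not in self.cache:            # first boot: fetch over the wire
            self.cache[block_id] = self.backend.read(block_id)
        return self.cache[block_id]               # thereafter: served from RAM

backend = SlowBackend({0: b"bootloader", 1: b"kernel"})
ram = RamDisk(backend)
ram.read(0)  # slow: pulled from the back end while the VM boots
ram.read(0)  # fast: served straight out of memory
```

This is why the first boot "takes a minute or two to fill up this memory cache," and why IOPS afterwards are effectively unmeasurable: reads never leave RAM.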