Many IT shops jumped into the world of server and desktop virtualization without a full understanding of how the new infrastructure would impact storage resources. As a result, many are being guided by vendors to take on a forklift upgrade or, in some cases, revert back to a direct-attached storage (DAS) model. Industry expert Jon Toigo examines these issues in an excerpt from his presentation “What’s Killing Server and Desktop Virtualization” at a Storage Decisions New York conference. In it, he describes the two tasks that every IT shop needs to do to fix the data storage problems that server virtualization caused.
View the video or read the transcript below.
Toigo: Server virtualization doesn’t change the genetics of the underlying infrastructure. In fact, in a worst-case scenario, it masks them from view. It makes it harder for you to manage the actual physicality of the universe in which your I/O is operating. You don’t have [much] visibility into your infrastructure. Now, part of the solution may be more virtualization. Do I hate virtualization? No. I hate stupid virtualization. I think there’s smart virtualization out there. Let’s start with storage. It’s the biggest sticking point because it costs the most. Why? Storage currently accounts for 33 to 70 cents of every dollar spent on IT hardware. [It’s a] big deal.
The most commonly cited factor in stalling virtualization projects is “We didn’t know what it was going to do to the storage infrastructure or how much it was going to cost to retrofit our environment.” Gartner now predicts that server virtualization will drive your storage acquisition requirements [up] by 600%. Think about that. [You’re going to need] six times what you’ve got deployed today … and this is within the next three years, just to accommodate the fact that you went with this bright, shiny, new virtualization model.
Is there a forklift upgrade in your future? … That’s usually what you’re encountering because even the vendors who sold you your SAN are now telling you you’re going to have to break up the SAN and go back to DAS. DAS is the only sure way to fix this virtualization problem.
Server virtualization makes the issues that have always existed in storage more pronounced, more costly and more dangerous, because if one app fails, they all fail. Basically, we have had three challenges in storage since the beginning of time: capacity management, performance management and data protection management. These are what I call the meta-management tasks. This is over and above the SRM level, where we’re just trying to keep the plumbing straight and make sure the disks are spinning; here we want to protect the data assets and allocate the capacity and the other resources appropriately.
Server virtualization introduces new requirements for capacity scaling that didn’t exist before. Why? Because you consolidate a whole bunch of workload on one server and you’re pointing at specific sets of disk spindles. Secondly, you’ve got a need to adapt to changing access profiles. If you do vMotion and you move the workloads somewhere else, are you moving the storage too? Are you moving the data too? Are you having to replicate the data everywhere? How much of your storage is going to be wasted on making replicated copies? How many replication processes are you going to have running, and how are you going to manage them all in a coherent way? With most mirroring schemes, we can’t even test a mirror unless we break it. You have to stop the mirror and do a consistency check on both sides in order to test it. That’s a … hassle. Nobody breaks their mirror. That’s seven years of bad luck if you can’t get it started again. And third, you need reliable data replication, all in a manageable way. … To fix this virtualization thing from a storage perspective, we just have to fix all the endemic problems that have always existed in storage. Not a problem! We’ll get right on that.
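The mirror-testing hassle Toigo describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual tooling: it assumes both sides of the mirror have been quiesced and are visible as volume image files, and simply compares a checksum of each side.

```python
import hashlib

def volume_digest(path, block_size=1 << 20):
    """Stream a volume image in 1 MB chunks and return its SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            h.update(chunk)
    return h.hexdigest()

def mirrors_consistent(primary_path, mirror_path):
    """Compare both sides of a quiesced mirror pair.

    Both sides must be idle for the comparison to mean anything --
    which is exactly why, as Toigo notes, nobody wants to break a
    production mirror just to test it.
    """
    return volume_digest(primary_path) == volume_digest(mirror_path)
```

The point of the sketch is the operational cost, not the code: the check only works while the mirror is stopped, so verification and availability are in direct tension.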
Some folks are looking for an easy way out. They say, “We’re disconnecting from the SAN, we’re going to break that down and just do DAS now.” That’s an interesting model. The problem is, you have all this replication stuff, and most of the hardware vendors that are out on the [Storage Decisions event] floor today, the only way they’ll replicate is if they’re talking to another box with their name on the bezel plate. Which means you’re locked into a vendor if you’re going to do hardware-based replication behind the scenes. Is that an issue? Maybe not. It depends on what your budget is. There aren’t software replication schemes that anybody’s really practicing right now. VMware’s going to fix that for us, you’ll see.
What I’d really like to see is for us to get past this era of heterogeneous rigs with isolated value-add on their controllers, talking through a very complicated plumbing infrastructure to a bunch of virtual servers that really are calling the shots, each requiring specialty device drivers.
One more thing: There’s no coherent management; every single box has its own proprietary management mechanism. So if you’ve got that heterogeneous environment, you’ve got to manage each box individually.
That’s the world that exists today, and all we did was throw all these virtual boxes up at the top. Now we’re radically changing workflow and we’re moving workload from box to box at will. That changes the dynamics of how we manage the infrastructure. All that stuff is hard coded—all the worldwide addresses and worldwide naming schemes and the IP addresses on your iSCSI boxes, etc. You’ve got to keep all that stuff straight, but it’s inflexible. It doesn’t change in accordance with what the demand is.
Storage virtualization may provide a solution. We abstract the high-level functions (capacity management, performance management, data protection management) away from the array controllers on the individual boxes, and we put it all into a virtual pool. Then we can create pools of storage that move with the guest machine. The physical boxes don’t move; they’re all still down below, hard-wired somewhere. But at least the virtual volume they’re addressing can move with the guest machine. That’s kind of a cool solution. It works pretty well and gives me a lot of value. But of course it doesn’t solve the underlying problem of managing the underlying infrastructure.

[With X-IO], I can manage petabytes of storage … because they’re using Web services and RESTful management. What does that mean? It means that their boxes talk like websites. And they’re actually getting to the point now where they friend each other. You drop another one of their boxes into the infrastructure … and it says, “Hi. You’re a Hyper ISE? I’m a Hyper ISE too. Want to be friends? Want to share your capacity? Hey, I’ve got some bandwidth you can use.” And they friend each other.

The idea is that you create an atomic infrastructure based on building blocks of storage: as you add storage, you grow and scale capacity at the lower level, it automatically places itself, and its plumbing becomes more accessible because it’s managed via REST. I can take an iPad, a smartphone, a desktop -- anything that supports a browser -- and look at how the storage is allocated. In fact, with a new Apple that I just saw at Coretex Developer, I can point my finger and drag some capacity over to a guest machine and drop it there. And boom, I’ve allocated storage to a guest machine. It’s pretty cool stuff. Take a look at it.
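Toigo’s point about boxes that “talk like websites” is that anything capable of issuing HTTP requests can interrogate and pool storage. A minimal sketch, assuming a hypothetical `/api/capacity` endpoint that returns JSON -- the URL path and field names are illustrative, not X-IO’s actual API:

```python
import json
from urllib.request import urlopen

def fetch_capacity(box_url):
    """Ask one storage box for its capacity report over plain HTTP.

    The /api/capacity path and the JSON schema are hypothetical --
    any browser-accessible REST endpoint works the same way, which
    is why an iPad or a smartphone can manage the pool.
    """
    with urlopen(f"{box_url}/api/capacity") as resp:
        return json.load(resp)

def pool_free_gb(reports):
    """Aggregate the free capacity reported by every box in the pool."""
    return sum(report["free_gb"] for report in reports)
```

Because the interface is just HTTP and JSON, adding a new box to the pool means adding one more URL to query, which is the “friending” behavior described above.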
Now, those are the basic things you can do to fix your storage issues: you virtualize for capacity management, [and] you rebuild with Web services-based management components, so you’re not dealing with individual boxes; you’re managing in a holistic way. And I don’t care if you’ve done virtual servers or not. Even if you’re not pursuing server virtualization, if you want to drive cost out of your infrastructure and improve its storage efficiency, do these two things. You should be doing them anyway. It’s simple: easy peasy.
Jon Toigo is CEO and managing principal of Toigo Partners International LLC, an independent consultancy and IT research and analysis firm.
This was first published in March 2012