Hitachi Data Systems was a storage virtualization pioneer back when storage virtualization meant managing different storage systems in one pool. The debate then was whether that should be done in the array, the network or through software. Now virtualization presents a different challenge to storage administrators: making storage work well with servers carved into virtual machines.
We spoke with Hitachi Data Systems (HDS) CTO Hu Yoshida about trends he is seeing in virtual data storage, such as virtual storage appliances (VSAs), converged stacks and why virtualization demands scale-up rather than scale-out storage.
Do you still think the best way to virtualize storage is with the controller in the array?
Hu Yoshida: When storage virtualization began, everybody was doing it with appliances sitting in the SAN. We took a different approach: We said, let's do it in the storage controller, because the controller is the target the initiators are already talking to.
Also, if you have an appliance sitting in the middle, you don't have end-to-end authentication. We're the legitimate target, and because we do the virtualization in the target, we can do end-to-end authentication. Appliances may be an easy and cheap way to approach it, but I don't think they provide a total solution.
What about virtual storage appliances, which use software and servers to virtualize storage?
Yoshida: VSAs provide some of the functions to create a virtual storage environment. But doing it with software takes [CPU] cycles, so the more things you add, the more the software has to work and the more cycles it takes, and you start to top out.
If you give it to the storage controller with more ASICs [application-specific integrated circuits] and more hardware-assist like we do, you can scale beyond what software can do. And software appliances need to cluster and communicate with other appliances, so you have that overhead.
Most virtual appliances are not complete storage management solutions. They may give the appearance of one storage space, but can you do snapshots, can you do tiering, can you do replication, can you do all these other things associated with storage management? Typically, they don't. Or they add another appliance to do the replication or another appliance to do other things. Soon you have a kludge of appliances instead of the integrated solution you're looking for.
How do virtual servers change the way SANs scale?
Yoshida: We think the virtual server environment really is a scale-up environment and not a scale-out environment.
Many people talk about this virtual environment with virtual servers creating a need to cluster storage together to be able to scale to meet the needs of these virtual servers. And they're scaling out storage, but it doesn't make sense to me. As I scale a server by adding more virtual machines [VMs], they're coming through the same Fibre Channel ports. That means the I/O load on storage will scale up. You need to have scale-up storage and not scale-out storage to meet the needs for the virtual environment.
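Yoshida's point can be illustrated with a back-of-the-envelope calculation. The sketch below is not from HDS; the VM counts, IOPS figures and port counts are all hypothetical assumptions, chosen only to show how consolidating VMs concentrates I/O on a host's fixed set of Fibre Channel ports:

```python
# Hypothetical illustration of the scale-up argument: every VM added to a
# host sends its I/O through that host's same Fibre Channel ports, so the
# per-port load on the storage array grows linearly with VM count.

def port_load(vms_per_host, iops_per_vm, fc_ports_per_host):
    """Aggregate IOPS arriving at each FC port of one virtualized host."""
    return vms_per_host * iops_per_vm / fc_ports_per_host

# Example: consolidating from 10 to 40 VMs on a host with 2 FC ports,
# each VM averaging 500 IOPS (all figures are assumptions).
before = port_load(10, 500, 2)  # 2500 IOPS per port
after = port_load(40, 500, 2)   # 10000 IOPS per port
```

Adding more hosts (scale-out) would spread load across more ports, but consolidating more VMs per host funnels it through the same ports, which is why the interview frames the demand as scale-up on the storage side.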
So how do you scale up rather than out?
Yoshida: The way we scale up is we add more cache blades, more processor blades, and more front- and back-end ASICs on our VSP [Virtual Storage Platform]. These virtual systems are becoming like mainframes, so they need mainframe-class enterprise storage, and that means multiple processors processing through a global cache. It's not a bunch of separate caches clustered together. That's where we are different.
You can't take a bunch of modular two-controller systems and meet the needs for a virtual server environment that is scaling up and adding more functions like VAAI [vStorage APIs for Array Integration]. You need to do dynamic tiering and replication on top of that, and that will kill those two-controller systems. You need to have something that can scale up and use 16 cores going against the cache image.
There are several approaches to building storage for heavily virtualized server environments. Some vendors offer VM-aware systems, others sell converged stacks of compute, storage and networking, and there is a class of "hyperconverged" stacks with built-in hypervisors. What approach does HDS take?
Yoshida: We have our Unified Compute Platform for virtualization workloads. We have our own blade servers and we combine them with our storage. A blade server can run as eight separate nodes, as an 8-way SMP [symmetric multiprocessor] or run LPARs [logical partitions], which no other x86 platform can do.
So we have our own blade servers, our own storage and our own orchestration software that can manage all this. We present that through a vCenter interface, and we don't need to go to our VSP interface. Most converged solutions, such as [VCE] Vblocks, come certified in one rack, and you can roll it in the door and spin it up, but when you go to manage it, you have to manage it through a Cisco server interface and maybe another interface for storage. Ours is one management, one orchestration through a vCenter interface. And our plan is to do the same with Microsoft and Linux hypervisors.
There was a time when a lot of talk around storage virtualization was about managing storage from different systems and different vendors in one pool. HDS enterprise arrays can virtualize other vendors' arrays behind them. Do your customers do that much?
Yoshida: They do that initially to migrate data. We have a lot of customers who start that way but eventually they will migrate over to our storage solutions. If we virtualize an EMC system, you still need EMC software to do the bin files and create the LUNs. Once they create LUNs, we can see those LUNs and virtualize them and do other things with them. But you have to configure the EMC system.
Eventually, customers want to manage the back end the way they manage the front end, so they migrate to our storage on the back end as well as on the front end. They run another vendor's storage in production until it depreciates, comes off maintenance and becomes too costly to maintain.