Networked storage is advantageous for any IT shop with a large farm of virtual servers, but your firm's optimal network storage options can vary based on the existing infrastructure, in-house skill set, application requirements and budget constraints.
A Fibre Channel (FC) SAN remains the system of choice for most large- and medium-sized enterprises with high performance demands, but less expensive iSCSI SANs and network-attached storage (NAS) often make more sense for companies short on data storage expertise.
"Frankly, they all work OK, and in the real world, selecting iSCSI, Fibre Channel or NAS isn't going to be the deciding factor in performance or availability or features at this point," said Stephen Foskett, an independent enterprise storage consultant based in Wooster, Ohio, noting that VMware Inc. supports each of these systems. "I always tell people they should use whatever they're comfortable with, because at the end of the day, there's no reason to choose one over the other."
"This is a vendor-by-vendor [storage] product comparison, but it's not fundamental to the protocol choice," said Robert Passmore, a research vice president at Stamford, Conn.-based Gartner Inc. He noted that one NAS vendor may have more useful features than a particular SAN vendor, or vice versa.
Some companies elect to run mixed-protocol storage with virtual servers, favoring NAS for their frequently changing test and development environments and high-performing Fibre Channel for their production systems, noted Dave Henry, senior technical marketing manager at EMC Corp.
Each network storage option might have merit in a given virtual server environment, and many of the pros and cons tend to be the same as they would be in the context of a nonvirtualized server environment.
Below is a sampling of the key advantages and disadvantages that IT shops might want to consider when weighing which type of networked storage to use with their virtual server environments.
NAS advantages

1) Ease of setup, operation and management. Companies that lack storage specialists don't have to learn new storage terminology or protocols with NAS. They can provision more easily, use their old, familiar network interface cards (NICs), cables and Ethernet switches, and opt for cheap Gigabit Ethernet.
"You don't have to know anything about LUNs. You don't have to worry about disk-head contention," said Marc Staimer, president at Dragon Slayer Consulting in Beaverton, Ore. "It's just incredibly simple to set up with VMware or XenServer; a little harder with Hyper-V, but still easier than iSCSI or Fibre Channel."
"When you create a volume for a virtual machine [VM] using the Virtual Machine File System [VMFS] in VMware, that volume gets stored as a file, and it magically appears as a file in the NFS file system on the NAS device," Gartner's Passmore said. "Because it's there, storage vendors like NetApp and EMC and others are able to do individual snapshots, individual replication, individual restores and so on for the VM even though what's been supplied to the ESX Server is only an overall volume."
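Because a VM on an NFS datastore is nothing more than a directory of ordinary files, per-VM operations reduce to per-directory operations. The toy Python sketch below (file names and layout are illustrative, not VMware's actual on-disk format) shows how a tool could enumerate a datastore and group files by VM:

```python
# Sketch (not VMware's implementation): on an NFS datastore each VM is
# a directory of plain files, so a NAS array can snapshot, replicate or
# restore one VM by operating on that directory alone.
import os
import tempfile
from collections import defaultdict

def vm_files(datastore_root):
    """Group the files on a datastore by the VM directory they live in."""
    vms = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(datastore_root):
        if dirpath == datastore_root:
            continue  # skip the datastore root itself
        vm_name = os.path.relpath(dirpath, datastore_root).split(os.sep)[0]
        vms[vm_name].extend(filenames)
    return dict(vms)

# Build a toy NFS datastore layout (VM names are illustrative only).
root = tempfile.mkdtemp()
for vm in ("web01", "db01"):
    os.makedirs(os.path.join(root, vm))
    for ext in (".vmx", ".vmdk", ".nvram"):
        open(os.path.join(root, vm, vm + ext), "w").close()

grouped = vm_files(root)
print(sorted(grouped))           # ['db01', 'web01']
print(sorted(grouped["web01"]))  # ['web01.nvram', 'web01.vmdk', 'web01.vmx']
```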
2) Straightforward expansion of file system, in comparison with VMFS on block storage. "Normally, VMFS isn't all that flexible as a file system," consultant Foskett said. "Let's say you created a 100 GB VMFS in VMware ESX and you need 110 GB. The only way to grow it is to add another LUN as another extent onto that VMFS, apart from migrating everything off and building a new one and migrating everything back on.
"With NFS, if you want to grow it, you just grow it," he continued. "You don't have to worry so much about the size."
VMFS currently has a 2 TB limit per extent, although extents can be concatenated into larger volumes.
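The arithmetic behind these limits is simple. This back-of-the-envelope Python sketch uses VMFS-3-era figures (a hair under 2 TB per extent, up to 32 extents per volume); exact limits vary by VMFS version:

```python
# Back-of-the-envelope math for the VMFS limits described above
# (VMFS-3-era figures; exact limits vary by VMFS version).
TB = 2 ** 40

extent_limit = 2 * TB - 512   # max size of a single VMFS-3 extent
max_extents = 32              # extents that can be concatenated per volume

volume_limit = extent_limit * max_extents
print(round(volume_limit / TB))  # ~64 TB by spanning extents

# Growing a 100 GB VMFS to 110 GB means adding a whole new extent (LUN):
GB = 2 ** 30
needed = 110 * GB - 100 * GB
print(needed // GB)  # a 10 GB (or larger) LUN must be added as an extent
```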
3) Ability to process overlapping requests to the same disk at the same time, in contrast with the I/O queue in block-based storage. "With a virtual server environment, it makes a much bigger difference," Foskett said. "You can easily have 10 or 50 or 100 I/Os going to different files on the same disk at the same time, and that can potentially cause problems."
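A crude queuing model makes Foskett's point concrete. The Python toy below (assumed round numbers; not a benchmark of any real array) compares how long 100 overlapping I/Os take to drain when they are strictly serialized versus serviced with deep concurrency:

```python
# Toy queuing model (illustrative only, not a benchmark): if a device
# serializes requests behind one shallow queue, n overlapping I/Os take
# roughly n/depth service intervals to drain; storage that can service
# requests to different files concurrently finishes far sooner.
import math

def completion_ms(n_ios, service_ms, effective_depth):
    """Rough total time to drain n_ios when 'effective_depth' run at once."""
    return math.ceil(n_ios / effective_depth) * service_ms

n, svc = 100, 5  # 100 overlapping I/Os at 5 ms apiece (assumed values)
print(completion_ms(n, svc, effective_depth=1))   # 500 ms fully serialized
print(completion_ms(n, svc, effective_depth=32))  # 20 ms with deep concurrency
```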
NAS disadvantages

1) Potential performance hit with high-transaction workloads. "Performance is good enough with the exception of the typical things you wouldn't use NAS for," said Bob Laliberte, a senior analyst at Milford, Mass.-based Enterprise Strategy Group (ESG), noting that transaction-oriented databases might not be the best fit for NAS, especially in a virtual environment.
2) Delayed support for VMware advanced features. VMware often supports new features at least six months earlier in SAN technology than in NAS. For instance, VMware first supported Site Recovery Manager (SRM) with FC and iSCSI SANs "because those always tend to be more popular in the larger data centers," said Venu Aravamudan, a senior director of product marketing at VMware.
"Sometimes we look at where we see the most rapid adoption for some of these products or APIs, and that's where we'll start out," Aravamudan said. "That's more a function of limited resources on our side than any stated intent that one's better than the other. A lot of customers are choosing NAS, and that's one of the fastest-growing segments today."
But VMware still lacks NAS support for features such as its vStorage API for Multipathing. NFS v3, the version VMware currently supports, permits only a single data path per NFS mount, EMC's Henry noted. Dragon Slayer Consulting's Staimer said he expects Parallel NFS (pNFS) support to address the issue in future VMware products.
"SANs do have the most robust multipathing today," acknowledged Vaughn Stewart, NetApp Inc.'s director of virtualization and cloud computing, via email. "NAS must rely on network redundancy technologies to meet the needs of link aggregation and path resiliency."
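Given one data path per NFS v3 mount, a common workaround is to create several datastores and spread their mounts across several VMkernel subnets and uplinks, so that aggregate traffic still exercises every link even though each individual mount cannot. The Python sketch below (interface and subnet names are hypothetical) models that round-robin placement:

```python
# Sketch of the usual workaround for NFS v3's one-path-per-mount limit:
# distribute multiple NFS datastores across multiple uplinks/subnets so
# the aggregate load is balanced. (Names here are illustrative only.)
from collections import Counter
from itertools import cycle

uplinks = ["vmk1-10.0.1.0/24", "vmk2-10.0.2.0/24"]
datastores = ["nfs-ds1", "nfs-ds2", "nfs-ds3", "nfs-ds4"]

# Round-robin each datastore mount onto the next uplink.
assignment = dict(zip(datastores, cycle(uplinks)))

print(assignment["nfs-ds1"])         # vmk1-10.0.1.0/24
print(Counter(assignment.values()))  # each uplink carries 2 datastores
```

Each mount still rides a single path; only the population of mounts is balanced, which is why true multipathing awaits pNFS.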
3) Traditional NAS knocks: CPU overhead, file system scalability. "The biggest issue with most file-based storage is how far you can scale it," said Dragon Slayer Consulting's Staimer, noting that scalability varies by vendor. "The number of objects a NAS storage system can handle tends to be somewhat limiting. When it hits its limit, basically the database is full. It doesn't take any more data. You can't read any data. And it doesn't give you a lot of warning."
Foskett noted that NAS is an intensive, high-level protocol that requires lots of decoding and translation and, thus, processing power. In a virtual server environment, physical servers hosting multiple VMs tend to use a greater percentage of their CPU than in the traditional server farm, where servers running single applications might use only 5% or 10% of their processing power, he said.
Fibre Channel SAN advantages

1) High performance and high bandwidth, especially helpful for I/O workloads that use large block sizes. The cost of 8 Gbps FC technology is now roughly equivalent to that of its 4 Gbps predecessor, so IT organizations can upgrade more cheaply to the newer technology. Enterprise FC-based arrays also tend to be mature, scalable and feature-rich, with ample cache.
"It's the performance standard in the storage space," Gartner's Passmore said. "It's solid, stable and reliable -- very low latency."
2) Better security. Using an entirely separate FC network infrastructure gives large IT organizations an added level of trust. "It's more secure than Ethernet. It's harder to tap. It's harder to get data off of," Dragon Slayer Consulting's Staimer said.
3) Earliest support for performance-enhancing VMware features such as the vStorage APIs for Array Integration (VAAI) and vStorage API for Multipathing. NetApp's Stewart said, via email, that FC storage had trouble scaling without "some very complicated configurations" until VMware addressed some of the issues with its VAAI for hardware-assisted array locking.
Fibre Channel SAN disadvantages

1) Cost. Fibre Channel-based storage requires a separate network infrastructure and special IT expertise. "I would never advise anyone to go out and buy Fibre Channel if they've never touched it before," consultant Foskett said.
2) Provisioning and management not for the faint of heart. "It's a really difficult product to set up. It's a really difficult product to manage. It's a really difficult product to troubleshoot," Staimer warned. "You have to have a huge amount of knowledge."
FC LUNs tend to be married to ESX hosts and require special tools, such as VMware's Storage VMotion, to make changes. They can be manually intensive to operate and manage.
3) FC vs. Ethernet roadmap. IT shops with high bandwidth needs might favor the projected 10/40/100 Gigabit plan for Ethernet over the 8/16/32 Gbps roadmap for FC technology.
iSCSI SAN advantages

1) Low-cost, block-based storage. To keep expenses down, users have the option of software initiators to send SCSI commands over their existing IP network infrastructure to the target storage arrays. Those desiring a performance boost on the host physical server have the option to buy iSCSI host bus adapters (HBAs) that include a network adapter, a TCP/IP offload engine (TOE) and a SCSI adapter.
2) Ease of implementation; well-understood IP infrastructure. Companies can use their built-in, off-the-shelf NICs, native VMkernel iSCSI stack and any old Gigabit Ethernet switch to keep the setup really simple. Or, they can buy iSCSI HBAs and upgrade to faster 10 Gigabit Ethernet if they have I/O-intensive workloads. Either way, the setup tends to be easier than a Fibre Channel SAN.
3) Support for VMware's performance-enhancing VAAI and vStorage API for Multipathing.
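One practical consequence of running over plain TCP/IP is that basic connectivity to a target is trivial to verify from any host on the network: iSCSI targets listen on TCP port 3260. The Python sketch below checks only the TCP handshake, not an actual iSCSI login, and the array address shown in the comment is hypothetical:

```python
# Minimal connectivity check in the spirit of iSCSI's "it's just
# TCP/IP" advantage: confirm the target answers on TCP port 3260.
# This verifies only the TCP handshake, not an iSCSI login.
import socket

def target_reachable(host, port=3260, timeout=2.0):
    """Return True if a TCP connection to the iSCSI port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical array address):
# print(target_reachable("192.168.10.50"))
```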
iSCSI SAN disadvantages

1) Protocol overhead can slow performance, especially if I/O traffic uses large block sizes. "iSCSI has a bit more protocol overhead than a Fibre Channel [SAN] does, but it's much simpler to deploy and manage than a Fibre Channel one," said Randy Kerns, a senior strategist for Evaluator Group in Broomfield, Colo.
2) Less predictable performance than an FC SAN. "You can get routed different ways on the same session, so your performance can vary noticeably. It can be great today, not so great tomorrow," Dragon Slayer Consulting's Staimer said. "That's the variance of TCP protocol. The iSCSI protocol has done things to mitigate that, to make it more predictive. But it does vary far more than Fibre Channel."
Staimer recommends users put in a separate LAN for their iSCSI SAN traffic so they don't wind up sharing the same network with standard IP client/server traffic.
3) Potential CPU overhead with software-based iSCSI initiators; greater storage management requirement with iSCSI HBAs.
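For a rough sense of the framing overhead behind Kerns' point, compare the per-frame payload efficiency of iSCSI on standard 1500-byte-MTU Ethernet with a full-sized Fibre Channel frame. The Python arithmetic below ignores wire-level preamble/idle bytes and the 48-byte iSCSI PDU header, which is amortized across large PDUs:

```python
# Rough per-frame framing efficiency for iSCSI on standard Ethernet
# versus Fibre Channel (wire-level preamble/idle bytes and the 48-byte
# iSCSI PDU header, amortized over large PDUs, are ignored).
eth_payload = 1500 - 20 - 20        # 1500-byte MTU minus IP and TCP headers
eth_frame = 1500 + 14 + 4           # MTU plus Ethernet header and FCS
print(round(eth_payload / eth_frame, 3))  # ~0.962 for iSCSI on 1500 MTU

fc_payload = 2112                   # max Fibre Channel frame payload
fc_frame = 2112 + 24 + 4 + 4 + 4    # plus header, CRC, SOF and EOF delimiters
print(round(fc_payload / fc_frame, 3))    # ~0.983 for FC
```

Jumbo frames (9000-byte MTU) narrow the gap considerably, which is one reason they are commonly recommended on dedicated iSCSI networks.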
This was first published in March 2011