Virtualization has clearly changed the way storage interacts with the rest of the IT environment. First, the rise of virtual machines (VMs) had a big impact on storage. Now virtual desktop infrastructure (VDI) adoption is changing the way storage systems are architected and managed.
A VDI environment poses unique challenges for storage administrators because system performance matters far more than capacity. In a typical VDI environment, each virtual desktop runs as a VM on a central server, and the user accesses it from a thin client, which can be low-cost hardware or a traditional PC. Because desktops and applications are stored centrally instead of residing on the local device, the demand placed on the storage system is far greater than with physical desktops.
Fulfilling the data requests of hundreds or thousands of virtual desktops places extreme stress on the storage system. VDI performance is driven by the number of input/output operations per second (IOPS) the storage system can execute. VDI storage obviously requires enough capacity for application and user data, but the end-user experience is ultimately determined by how rapidly the data is served.
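Because IOPS, not capacity, is the limiting factor, VDI storage is typically sized from per-desktop IOPS assumptions. A minimal back-of-the-envelope sketch follows; the per-desktop figures are illustrative planning assumptions, not measurements or vendor specifications:

```python
# Back-of-the-envelope VDI IOPS sizing. The per-desktop figures are
# illustrative planning assumptions, not measurements or vendor specs.
STEADY_STATE_IOPS = 10   # a typical knowledge-worker desktop, steady state
BOOT_STORM_IOPS = 50     # the same desktop while it is booting

def required_iops(desktops: int, boot_fraction: float = 0.2) -> int:
    """Estimate peak IOPS when a fraction of desktops boot simultaneously
    while the rest run at steady state."""
    booting = int(desktops * boot_fraction)
    steady = desktops - booting
    return booting * BOOT_STORM_IOPS + steady * STEADY_STATE_IOPS

print(required_iops(1000))  # 1,000 desktops, 20% booting at once -> 18000
```

Even at these modest assumptions, 1,000 desktops demand tens of thousands of IOPS at peak, which is why boot storms figure so heavily in VDI storage design.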
Matching the IOPS of physical desktops in a virtual environment is one of the biggest roadblocks to wider VDI deployment. Storage bottlenecks, and the cost of alleviating them, have been the major VDI obstacles.
“It has been the storage side [of VDI deployments] that has been the most troublesome,” said Sam Lee, a senior sales architect with IT services provider Force 3, in Crofton, Md. “You can contain and manage CPU and memory performance, but managing storage performance is very difficult.”
To address the IOPS issue, the larger storage vendors are adapting their existing systems by adding solid-state drives (SSDs) to handle I/O operations while using traditional spindle disks for back-end capacity. Some storage vendors are partnering with computing, networking and virtualization hypervisor providers to offer preconfigured bundled architectures or products tuned for high performance.
However, we’ve also seen a rush of new storage systems -- usually from startups -- that target VDI and other high-IOPS environments. These often consist of appliances combining storage and servers, and they always use flash, either in all-SSD packages or in hybrid SSD-and-hard-drive boxes. Software and operating systems are also key ingredients, making data processing more efficient to wring the most performance out of the hardware.
According to Mark Bowker, an Enterprise Strategy Group senior analyst, VDI was a top-10 IT spending priority for both 2011 and 2012. Most organizations interested in VDI are now in the research and proof-of-concept stage, Bowker said.
“The goal of a VDI deployment should not be focused on the infrastructure. It should be focused on the end-user experience, rapid scale, good performance, reliability, resiliency and disaster recovery,” Bowker said.
Here is a look at storage systems and bundles that go beyond traditional SANs to target VDI storage.
EMC Corp., NetApp Inc. and Hewlett-Packard Co. (HP) have predesigned bundled architectures with storage, compute and networking in one tidy package. Both EMC and NetApp partnered with VMware Inc. for the virtualization hypervisor and Cisco Systems Inc. for the networking gear. EMC, however, sells its Vblock stack as a single product through its Virtual Computing Environment (VCE) joint venture with VMware, Cisco and Intel Corp. NetApp, meanwhile, offers its FlexPod bundle as a reference architecture for customers to assemble themselves, although they can buy all the pieces from NetApp or its VARs.
HP also offers a bundled solution designed for VDI environments, consisting of all HP hardware. HP’s CV2 is based on the HP BladeSystem architecture and backed by HP’s LeftHand P4800 SAN for storage. “The approach we’ve taken to VDI is that we started with reference architectures for VDI, and most recently we’ve turned those reference architectures into the HP VirtualSystem CV2,” said Mike Koponen, HP’s worldwide solutions marketing manager. HP developed two Client Virtualization (CV) products: one for Citrix XenDesktop and another for VMware View.
A common path for storage startups in recent years is to attack the relatively new problem of VDI storage with one of the hot new enterprise storage technologies. As a result, there are a good number of new storage systems that use solid-state storage and target virtualization and VDI.
GreenBytes Inc. approaches VDI with two systems, one mixing SSDs and hard drives and the other an all-flash system. The HA-3000 scales from 26 TB to 78 TB of SAS drives and includes at least 200 GB of SSD cache and either eight Gigabit Ethernet (GbE) or four 10 GbE ports. The SSD is used as a cache to accelerate read and write IOPS for VDI deployments. Customers can add more SSDs in the 3U 16-bay system.
GreenBytes recently launched its Solidarity all-SSD array with hot-swappable dual controllers, scale-out nodes, transparent failover, and inline real-time primary deduplication and compression. Each node is available with 240 GB, 480 GB or 960 GB flash drives, and the array’s total capacity ranges from 3.5 TB to 13.44 TB.
Chris McCall, vice president of marketing for NexGen Storage Inc., said the startup placed its solid-state technology on the PCIe bus instead of using SSDs in the array because it’s faster and leaves more drive bays for high-capacity disks. The performance management functionality in NexGen’s n5 array lets administrators set performance levels per volume and designate each volume’s service level. An administrator can assign the number of IOPS a volume will receive and prioritize volumes when something inevitably fails, and can temporarily increase a volume’s IOPS allocation for tasks requiring high performance, such as VDI boot storms.
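The per-volume IOPS controls McCall describes can be pictured as a rate limiter attached to each volume. The sketch below uses a token bucket, a common rate-limiting technique; it is purely illustrative, with hypothetical names and numbers, and is not NexGen’s actual implementation:

```python
# Illustrative per-volume IOPS limiting via a token bucket.
# Hypothetical sketch -- not NexGen's implementation.
import time

class VolumeQoS:
    def __init__(self, iops_limit: int):
        self.iops_limit = iops_limit      # tokens replenished per second
        self.tokens = float(iops_limit)   # start with a full bucket
        self.last_refill = time.monotonic()

    def allow_io(self) -> bool:
        """Return True if the volume may issue one I/O right now."""
        now = time.monotonic()
        self.tokens = min(float(self.iops_limit),
                          self.tokens + (now - self.last_refill) * self.iops_limit)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

    def boost(self, new_limit: int) -> None:
        """Temporarily raise the limit, e.g. during a boot storm."""
        self.iops_limit = new_limit

volume = VolumeQoS(iops_limit=500)   # this volume gets 500 IOPS
print(volume.allow_io())             # True: the bucket starts full
```

The `boost` method mirrors the idea of temporarily raising a volume’s IOPS allocation for a boot storm; a real array would enforce this in the I/O scheduler rather than per request in application code.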
The Nimble Storage CS series is another hybrid system that uses flash and disk for VDI storage. To better deal with random writes, the CS software aggregates and compresses random writes as they come in to the storage system, then sequentializes them before writing to flash and disk. The flash is essentially used as a cache. According to Radhika Krishnan, Nimble Storage Inc.’s head of solutions and alliances, the CS series employs a log structure file system, “which gets you write optimizations in addition to read optimizations, making it ideally suited for VDI workloads,” Krishnan said.
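The write-path idea Krishnan describes -- buffering incoming random writes and flushing them as one sequential stripe -- can be sketched in miniature. This is an illustrative toy of a log-structured layout, not Nimble’s file system:

```python
# Toy log-structured write path: random block writes are buffered, then
# flushed together as one sequential segment appended to a log.
# Illustrative only -- not Nimble's implementation.

class WriteLog:
    def __init__(self, stripe_size: int = 4):
        self.stripe_size = stripe_size
        self.buffer = []   # (logical_block, data) pairs in arrival order
        self.log = []      # sequential on-"disk" log segments
        self.index = {}    # logical block -> (segment, offset)

    def write(self, block: int, data: bytes) -> None:
        self.buffer.append((block, data))
        if len(self.buffer) >= self.stripe_size:
            self.flush()

    def flush(self) -> None:
        """Turn the buffered random writes into one sequential append."""
        segment = len(self.log)
        for offset, (block, _) in enumerate(self.buffer):
            self.index[block] = (segment, offset)
        self.log.append([data for _, data in self.buffer])
        self.buffer = []

    def read(self, block: int) -> bytes:
        segment, offset = self.index[block]
        return self.log[segment][offset]
```

Four writes to scattered logical blocks (say 7, 2, 9 and 4) land as a single sequential segment, which is why this layout converts a random-write workload into the sequential pattern that both disk and flash handle well.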
The Nimble CS series consists of seven models with capacities ranging from 8 TB (CS210) to 48 TB (CS260G). The amount of flash in each system ranges from 160 GB (CS210) to 2.4 TB (CS260G). The CS210 includes four GbE connections, while all other Nimble models have either six GbE connections or two GbE and two 10 GbE connections.
The Nutanix Complete Cluster also uses a combination of flash and hard drives with servers all in one enclosure. Each server runs a standard hypervisor. The Nutanix Distributed File System (NDFS), which the vendor said was “inspired” by the Google File System, creates a pool of storage from all nodes in the cluster and handles striping, replication, auto-tiering, error detection and failover.
A Nutanix Complete Cluster consists of 2U blocks, each containing four server nodes. Each node has 320 GB of Fusion-io PCIe card flash, 300 GB of SATA SSDs and five 1 TB SATA hard drives, giving each block 1.3 TB of PCIe flash, 1.2 TB of SATA SSDs and 20 TB of hard drives. Blocks can be clustered for scalability, and each block has four 10 GbE and eight GbE ports.
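The per-block totals follow directly from the per-node configuration, a quick arithmetic check using only the figures quoted above:

```python
# Arithmetic check of the quoted per-block totals (four nodes per block).
nodes_per_block = 4
pcie_flash_gb_per_node = 320
sata_ssd_gb_per_node = 300
hdd_tb_per_node = 5  # five 1 TB SATA drives

print(nodes_per_block * pcie_flash_gb_per_node)  # 1280 GB, i.e. ~1.3 TB of PCIe flash
print(nodes_per_block * sata_ssd_gb_per_node)    # 1200 GB, i.e. 1.2 TB of SATA SSDs
print(nodes_per_block * hdd_tb_per_node)         # 20 TB of hard drives
```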
Pivot3 Inc. released its vStac storage-and-compute bundle for VDI last year. The vStac VDI Appliance includes the vStac OS, which combines virtual servers and a scale-out, block-based storage software system.
According to Pivot3, the appliance approach to VDI storage provides storage and compute optimally configured for virtual desktops and predictable scalability. As with a scale-out NAS system, customers can add nodes to add capacity in a clustered fashion.
Each 2U vStac VDI appliance supports up to 100 desktops and includes two 10 GbE ports, 150 GB of SSD, 3 TB of disk storage and 96 GB of RAM. Pricing starts at $38,500 per appliance.
The Tintri VMstore series also uses both SSDs and back-end spinning disk. According to Chris Bennett, Tintri Inc.’s vice president of marketing, the VMstore runs all of the storage I/O through flash and uses the back-end disk for data less reliant on performance, such as snapshot storage. The VMstore series uses inline deduplication and compression to reduce the amount of data that sits on flash.
The Tintri VMstore comes in two models. The single-controller 4U T445 offers 16.4 TB of raw capacity (8.5 TB usable), two GbE ports and two 10 GbE ports. The dual-controller 3U T540 has 26.4 TB of raw capacity (13.5 TB usable), with two GbE ports and two 10 GbE ports per controller.
WhipTail Technologies offers an all-flash storage array. The company said its 2U XLR8r SSD storage array provides 250,000 writes per second and 1.9 Gbps of throughput. WhipTail offers four models with capacities from 1.5 TB (the WT1500) to 12 TB (the WT12000).
Brian Brothers, a network administrator manager with the Ohio Department of Developmental Disabilities (DoDD), said his 6 TB WT6000 gives him upward of 230,000 IOPS for his 1,200-user VDI deployment. He also has the 3 TB WT3000 at his disaster recovery (DR) site about 5 miles from his primary data center.
Brothers first tested a mock VDI deployment that combined SSDs and hard disk drives, but it couldn’t support the scaling he needed. He said he uses his WhipTail SSD SAN for the majority of his VDI demands and his traditional SAN for data storage.
“One of the things that we wanted going into this [VDI deployment] is that the experience with the VDI had to be the same or better than what they did on desktops,” Brothers said. “Otherwise, people won’t like it.”