Hyper-converged infrastructure options simplify virtual environments
VMware Inc.'s Virtual SAN software is now available, putting pressure on hyper-converged storage startups and traditional SAN vendors with its new approach to storage for a virtual infrastructure.
VMware Virtual SAN (vSAN) became generally available (GA) Wednesday as a software download or pre-installed on Ready Nodes from partners such as Cisco, Dell, IBM and Supermicro. VSAN lets customers pool storage capacity and compute from servers running VMware's vSphere, with shared storage set up and managed through policies selected from drop-down menus.
VMware said 12,000 customers were vSAN beta testers, and the virtualization vendor hopes to lure them to the GA product. Beta testers most likely to sign up as paying customers are those who like having their compute, storage and hypervisor in one box with one vendor responsible for support. And while VMware positions vSAN as mostly for virtual desktops, remote offices and secondary storage out of the gate, some beta users see it as primary storage.
Itrica, a Boston-based software developer and service provider for the medical industry, plans to use VMware vSAN to replace its traditional SAN storage from EMC and other vendors, Chief Technology Officer David Sampson said. Sampson said vSAN lets Itrica use storage and compute it already owns and scale them together.
Sampson said he didn't use the vSAN beta for production data, but replicated his production workloads to vSAN clusters for testing. He said Itrica set up clusters of between four and eight nodes, typically achieving 20,000 to 30,000 IOPS per node.
"This solves a tremendous problem for us," Sampson said. "It takes away any third-party requirement around virtual storage."
He said vSAN will be a big piece of the provider's storage beginning next month, both for production and secondary storage. "We can tier the disks inside each cluster to create production-level storage clusters, as well as Tier-2 and object storage clusters that don't require the same performance," Sampson said. "We already use a lot of SSD [solid-state drive] technology, so we were able to put together higher-performance storage clusters, but then also put together other storage clusters that have almost zero fast drives for backup and archiving."
Sampson said Itrica has approximately 1 PB of storage in each of its data centers in Las Vegas, Miami and Boston. He said vSAN will eventually be used for almost all of Itrica's storage, with the possible exception of an all-SSD SAN or PCI Express flash-based storage for customers with extreme performance needs.
Sampson said he considered earlier hyper-converged storage appliances on the market, but didn't want to add hardware. He said buying storage from his hypervisor vendor gives him "one-stop shopping, one-stop management and one-stop support" that he can't get from another vendor.
"We looked at other third-party converged storage options," Sampson said. "We found they were using too much processor or too much RAM and were taking away from our ability to deliver services.
"We felt that a hardware-based system doesn't give us maximum flexibility," he continued. "The fact that vSAN is integrated directly into the ESX kernel makes it so efficient with low utilization of resources."
The Doe Fund, a New York-based group that runs housing and job programs for homeless and unemployed people, ran the vSAN beta in an off-site disaster recovery (DR) location. Ryan Hoenle, the Doe Fund's director of IT, said he plans to upgrade to the GA code within days and use the vSAN for primary storage.
"It [has] been stable and done exactly what I've needed," he said of his beta setup. "We'll move our production environment to that DR site and replace our old environment with vSAN."
He said that until now, the Doe Fund's storage has consisted of Dell iSCSI SANs and direct-attached storage (DAS). He runs vSAN on white boxes from Supermicro. He has four nodes, each with eight 2.5-inch drives, two 1 TB SSDs and 32 GB of RAM. The total storage capacity across the four nodes is 48 TB, with the flash used to cache reads and writes.
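Hoenle's figures check out with some back-of-the-envelope arithmetic. The sketch below assumes all eight 2.5-inch drives per node contribute to capacity and that the two SSDs per node serve purely as cache, as he describes; the implied 1.5 TB per-drive size is an inference, not a figure from the article:

```python
# Rough check of the Doe Fund cluster's capacity figures as stated.
# Assumption: the eight 2.5-inch drives per node form the capacity tier;
# the two 1 TB SSDs per node are read/write cache only and aren't counted.
nodes = 4
drives_per_node = 8
total_capacity_tb = 48  # total stated across the four nodes

capacity_per_node_tb = total_capacity_tb / nodes                 # 12 TB per node
capacity_per_drive_tb = capacity_per_node_tb / drives_per_node   # 1.5 TB per drive (inferred)
cache_per_node_tb = 2 * 1                                        # two 1 TB SSDs per node

print(capacity_per_node_tb, capacity_per_drive_tb, cache_per_node_tb)
```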
"When budget allows, we'll replace what was our old environment with another vSAN cluster," Hoenle said.
He said the applications that will run on vSAN include Blackbaud Raiser's Edge and Financial Edge fundraising and accounting software, a Voice over IP private branch exchange, file shares, print servers, Microsoft Active Directory and a Microsoft SQL-based database.
Like Itrica's Sampson, Hoenle said he likes that vSAN runs on the hypervisor he's already using. "It's nice to have storage and compute in the same boxes; it saves us money," he said. "If I'm going to a hyper-converged or software-defined data center, I like that this runs on the hypervisor we've standardized on. It's nothing new that any of our IT people have to learn."
Dedupe, encryption, failure domains still missing
Even at GA, VMware's Virtual SAN lacks some of the sophisticated data and storage management features of traditional SANs. Itrica's Sampson said he would like to see VMware add deduplication and encryption. Doe Fund's Hoenle said he wants to see VMware support failure domains for high availability.
"It would be nice to replicate over a fiber link across two buildings and know that it will have the appropriate replicas in both buildings," he said. "If I have two boxes in one building and two boxes in the other building, two of those replicas might be in the same building. Currently, you don't get to tell vSAN where the replica occurs."
While VMware provided all of vSAN's technology details during a webcast last week, it waited until the GA announcement to give pricing -- $2,495 per processor for vSAN alone and $2,875 per processor for vSAN bundled with VMware vSphere Data Protection Advanced. VSAN for Desktops is $50 per user.
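At those per-processor prices, the license cost of a small cluster is easy to estimate. A rough sketch, assuming dual-socket servers; the four-node cluster size and 100-desktop count are illustrative, not from the article:

```python
# License-cost estimates from vSAN's announced list prices.
# Assumptions: dual-socket servers; node and desktop counts are
# chosen for illustration only.
VSAN_PER_CPU = 2495          # vSAN alone, per processor
VSAN_BUNDLE_PER_CPU = 2875   # bundled with vSphere Data Protection Advanced
VSAN_DESKTOP_PER_USER = 50   # VSAN for Desktops, per user

nodes, sockets_per_node = 4, 2
cpus = nodes * sockets_per_node  # 8 licensed processors

print(cpus * VSAN_PER_CPU)          # 8 x $2,495 = $19,960
print(cpus * VSAN_BUNDLE_PER_CPU)   # 8 x $2,875 = $23,000
print(100 * VSAN_DESKTOP_PER_USER)  # 100 desktops = $5,000
```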
Hoenle and Sampson said pricing was in line with their expectations.
"This is absolutely cheaper for us than a SAN," Doe Fund's Hoenle said. "We can use a white box and cheaper near-line disk because the SSD caching for reads and writes allows us to get away with that."