Virtual I/O, also known as I/O virtualization (IOV), hasn’t proven as popular yet as server virtualization or even storage virtualization.
Most of the startups that offered the technology a few years ago are gone. Aprius Inc. and 3Leaf Systems went out of business, and Virtensys Ltd. was acquired by Micron Technology Inc. in January. At the time of the acquisition, Micron said it would use Virtensys’ PCIe virtualization technology to virtualize Micron’s solid-state drive (SSD) storage.
That leaves Xsigo Systems Inc. and NextIO Inc. as the remaining dedicated I/O virtualization vendors, although larger vendors build the technology into networks based on their own servers and connectivity. Hewlett-Packard Co.’s Virtual Connect is an example of vendor-specific I/O virtualization.
But while the options for I/O virtualization are dwindling, the need for it is not. Xsigo has been able to capitalize on the rise of virtualization in the data center and cloud computing as well as renewed interest in InfiniBand to gain traction with its Xsigo I/O Director, which connects x86 servers to any storage or network devices.
Enabling the cloud through InfiniBand
Los Angeles-based cloud service provider dinCloud built its infrastructure with the help of Xsigo’s I/O Directors. Former dinCloud Chief Technology Officer Mike Chase said Xsigo’s use of InfiniBand was a major factor in the service provider’s choice.
Chase opened the service provider’s doors two years ago but didn’t immediately use the Xsigo equipment. “Six months in, we figured out that if we didn’t have InfiniBand, the whole project was dead,” Chase said.
Xsigo’s I/O Directors connect to servers through 20 Gbps or 40 Gbps InfiniBand ports, or through Gigabit Ethernet (GbE) or 10 GbE ports, and they connect to storage and networking through Fibre Channel or Ethernet I/O modules.
Chase, who recently left dinCloud, said he installed two I/O Directors in each of his two data centers and connected them to Mellanox Technologies Inc. InfiniBand switches to aggregate the server connections. He used Cisco Systems Inc.’s Ethernet switches for his upstream connections and said he could connect more than 150 physical servers running the VMware hypervisor to each pair of I/O Directors. dinCloud uses NetApp Inc. storage.
“[The I/O Directors] are really the heart of the network,” Chase said. “InfiniBand was absolutely necessary for performance, speed and security. [I/O Director] reduces human error, it’s hypervisor-agnostic, and it virtualizes I/O so I only have two cables hanging out of the back of my servers to deliver 80 Gbps of network bandwidth.”
Chase said the I/O Directors gave him flexibility in managing and provisioning network resources.
“Because they virtualize the I/O, I can tell a server or the hypervisor how many connections and what speed and flavor they are,” he said. “For example, I can tell 1,000 servers, ‘You have four 1 Gbps connections, two 10 Gbps connections and two 40 Gbps connections.’ If I wanted some Fibre Channel [connections], I could [assign a server] a 1, 2, 4 or 8 Gbps Fibre Channel interface.”
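The provisioning model Chase describes can be illustrated with a short sketch (a hypothetical Python model for illustration only, not Xsigo's actual interface): a server profile carries a software-defined mix of virtual NICs and Fibre Channel HBAs, while the physical uplink behind them stays fixed.

```python
# Hypothetical model of virtual I/O provisioning: each server profile
# holds virtual interfaces whose type and speed are assigned in software,
# independent of the physical uplinks (e.g. two 40 Gbps InfiniBand cables).
from dataclasses import dataclass, field

@dataclass
class VirtualInterface:
    kind: str        # "ethernet" or "fibre_channel"
    speed_gbps: int  # nominal speed presented to the OS or hypervisor

@dataclass
class ServerProfile:
    name: str
    interfaces: list = field(default_factory=list)

    def add_nic(self, speed_gbps, count=1):
        for _ in range(count):
            self.interfaces.append(VirtualInterface("ethernet", speed_gbps))

    def add_hba(self, speed_gbps, count=1):
        for _ in range(count):
            self.interfaces.append(VirtualInterface("fibre_channel", speed_gbps))

# The mix Chase describes: four 1 GbE, two 10 GbE and two 40 GbE virtual
# NICs, plus an 8 Gbps Fibre Channel HBA, all over the same physical cables.
profile = ServerProfile("esx-host-001")  # hypothetical server name
profile.add_nic(1, count=4)
profile.add_nic(10, count=2)
profile.add_nic(40, count=2)
profile.add_hba(8)

print(len(profile.interfaces))  # 9 virtual interfaces on one profile
```

Because the interfaces exist only in software, reassigning a server a different mix is an edit to its profile rather than a recabling job.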
Chase has since started a new cloud provider company, and he said he intends to install Xsigo’s I/O Directors and Server Fabric software suite to directly connect virtual machines through virtual local area network (VLAN) or switch configurations for multi-tenancy.
“It creates transparent routing and also makes the connection completely isolated from the rest of the network connections,” Chase said. “This keeps our customers’ virtual private data centers completely isolated in our cloud while giving us ultra-low latency.”
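The isolation Chase describes can be modeled very simply (a simplified sketch with made-up tenant and VLAN names, not dinCloud's actual configuration): two virtual machines exchange traffic only when they share a VLAN tag, so each customer's virtual private data center is segregated on the shared fabric.

```python
# Simplified model of VLAN-based multi-tenant isolation: communication
# is permitted only between VMs that carry the same VLAN tag.
def can_communicate(vm_a, vm_b):
    return vm_a["vlan"] == vm_b["vlan"]

# Hypothetical tenants and VLAN IDs, for illustration only.
tenant_a_web = {"name": "custA-web01", "vlan": 110}
tenant_a_db  = {"name": "custA-db01",  "vlan": 110}
tenant_b_web = {"name": "custB-web01", "vlan": 220}

print(can_communicate(tenant_a_web, tenant_a_db))   # True: same tenant
print(can_communicate(tenant_a_web, tenant_b_web))  # False: isolated tenants
```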
‘We can’t go down’
Kroll Factual Data, a Loveland, Colo., verification services provider, also employs Xsigo’s I/O technology. Kroll Factual uses online transaction processing (OLTP) to verify income and asset information for clients in financial, government and property management industries.
Russ Donnan, Kroll Factual’s CIO, said he can’t afford any downtime because of the nature of Kroll’s business. “[My data center] can’t go down,” he said. “We have a lot of [service-level agreements] and, furthermore, we are in a lot of competitive industries. Nearly every customer we have has more than one place to go at any one given time for the same piece of data that they are looking for. So if we’re not available, they will hit our competitors and we will lose that deal.”
Kroll Factual has about 1,700 virtual servers running at any given time and completes roughly 200,000 transactions each day while maintaining 24/7 operations.
Donnan installed two Xsigo I/O Directors in his primary data center in Loveland and another in his disaster recovery site in Denver. He runs four InfiniBand connections from the I/O Directors to the InfiniBand switches that connect his Dell blade servers to the network. Each physical blade server has two separate InfiniBand connections to the switches over separate paths for high availability.
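The dual-path layout above can be sketched in a few lines (hypothetical failover logic for illustration, not the actual Xsigo or Dell configuration): each blade keeps two InfiniBand paths over separate switches, and traffic moves to the surviving path when one link fails.

```python
# Sketch of dual-path high availability: a blade prefers its primary
# InfiniBand path and fails over to the secondary when a link goes down.
class BladePaths:
    def __init__(self, name):
        self.name = name
        # Two physically separate paths; True means the link is up.
        self.paths = {"path-A": True, "path-B": True}

    def fail(self, path):
        self.paths[path] = False

    def active_path(self):
        # Prefer path-A, fall back to path-B; None means the blade is cut off.
        for path, up in self.paths.items():
            if up:
                return path
        return None

blade = BladePaths("blade-07")  # hypothetical blade name
print(blade.active_path())  # path-A while both links are healthy
blade.fail("path-A")
print(blade.active_path())  # path-B after failover
```

The point of the second path is that it traverses a separate switch, so a single switch or cable failure never takes a blade off the fabric.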
Donnan said InfiniBand is much less expensive than high-performance Fibre Channel and Ethernet links and easier to manage. And because his 40 Gbps InfiniBand pipe already has the bandwidth for thousands of daily transactions, he avoids the stepwise upgrades Fibre Channel requires, from 4 Gbps to 8 Gbps and soon 16 Gbps.
“When you’ve got a big data center to manage, you’re reducing your costs substantially by virtualizing over InfiniBand, encapsulating the same network traffic you had before over InfiniBand versus doing it with physical Fibre Channel or big Ethernet,” Donnan said.
He said Kroll’s network operations center (NOC) uses only one “pane of glass” to manage all of the network bandwidth flowing through the I/O Directors and all 1,700 virtual servers.
“It doesn’t require this big, huge team of Fibre Channel experts and storage experts and another big, huge team of Ethernet and networking experts,” Donnan said. “It’s a significant simplification. It relieves a lot of the personnel bottleneck that we used to have.”