While Xsigo Systems Inc. takes a software virtualization approach to I/O virtualization, the majority of vendors in the I/O virtualization space take a different path: installing only a small PCIe extender card in each server and moving the I/O adapters into an external card cage.
"Instead of having the PCI adapter in the physical server, you move the adapter out into a card cage," said Greg Schulz, founder and senior analyst at StorageIO Group, a Stillwater, Minn.-based research and consultancy company. "What sits in the server is a little stub card that brings the PCIe bus out to that card cage. You're essentially extending and switching the PCIe bus."
The I/O adapters in the card cage are standard off-the-shelf adapters you may already have. Each card-cage slot accepts any standard Ethernet or Fibre Channel (FC) network interface card (NIC), or SAS controller. The VirtenSys and Aprius devices then virtualize the adapters and present them back to the connected servers as if the cards were physically inside the servers. The I/O adapters are shared between the servers, and every server has access to every adapter.
Aprius I/O Gateway
Aprius will take a slightly different tack to I/O virtualization with its Aprius I/O Gateway, due to start shipping in mid-2010. Each Aprius I/O Gateway can connect 32 servers at 10 Gb/sec or 16 servers at 20 Gb/sec. Aprius is also aiming to sell its PCIe bus extender cards at less than $200 a pop. Pricing for the I/O Gateway has not yet been set.
Aprius' ability to share I/O cards among multiple servers will depend on the cards, said Craig Thompson, vice president of product marketing. "All the latest NICs present either multiple functions, multiple functions per port, or if they support SR-IOV [Single-Root IOV], they present multiple virtual functions," he said. "The richer the functions, and the more functions the cards present, the more we can do."
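Thompson's point is that a card exposing many functions can be carved up among more servers: the gateway hands each server its own function, which the server sees as a local PCIe device. The sketch below models that idea in Python. It is purely illustrative; the `SharedAdapter` class, its method names, and the VF counts are invented for this example and do not reflect any vendor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class SharedAdapter:
    """Hypothetical model of a physical I/O card sitting in the card cage."""
    name: str
    num_virtual_functions: int  # e.g., SR-IOV virtual functions the card presents
    assignments: dict = field(default_factory=dict)  # VF index -> server name

    def attach(self, server: str) -> int:
        """Hand the next free virtual function to a server, as a gateway might."""
        for vf in range(self.num_virtual_functions):
            if vf not in self.assignments:
                self.assignments[vf] = server
                return vf
        raise RuntimeError(f"{self.name}: all virtual functions are in use")

# The richer the card (more functions), the more servers can share it.
nic = SharedAdapter("10GbE-NIC", num_virtual_functions=8)
for srv in ("server-01", "server-02", "server-03"):
    vf = nic.attach(srv)
    print(f"{srv} sees {nic.name} VF{vf} as a local PCIe device")
```

A card with only one function per port would max out after a single attachment, which is why Thompson ties sharing depth directly to how many functions the card presents.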
Thompson said Aprius plans to demo its system to select customers in the first quarter of 2010 and start offering products for sale by midyear.
NextIO ExpressConnect
NextIO takes a strict standards approach to I/O virtualization with its ExpressConnect product, a 14-port PCI Express expansion and virtualization I/O module for blade systems. Mike Lance, NextIO's director of marketing, said the vendor contributed a good chunk of its intellectual property to the PCI-SIG, and that work formed the basis of the Multi-Root IOV (MR-IOV) standard. Each module slot can hold either an I/O adapter or a server connection, so you could mix seven servers with seven I/O adapters, or one server with 13 I/O adapters. For now, the NextIO products are strictly for blade systems. NextIO is keeping the cost of its stub card as low as $20 per card, Lance said. A standard 14-slot switch configuration costs $9,995; the server connections and daughter cards are priced separately.
VirtenSys VIO-4000 series
The VirtenSys VIO-4000 series of switches virtualizes each adapter and shares it with every connected server, so all of the servers can use a single card at the same time. The VirtenSys VIO-4001 has four adapter slots and can connect up to 16 servers. The VIO-4008, which can also connect up to 16 servers, adds four more adapter slots for a total of eight supported cards. "We take the same I/O adapters, such as Intel Corp.'s or Broadcom Corp.'s 10 Gigabit Ethernet cards or QLogic Corp.'s Fibre Channel [FC] adapter, and we plug them inside our box," said Bob Napaa, VirtenSys' vice president of business development. Pricing for the VirtenSys products was unavailable.
Benefits of a PCIe bus extender approach
The PCIe bus extender approach has a number of the same benefits as the software virtualization approach, including fewer I/O cards, as well as less cabling, power and cooling. This approach also allows you to remap your I/O resources based on policies or unexpected circumstances. In addition, you can continue to use your existing I/O adapters because standard cards will drop into the card cage, StorageIO Group's Schulz said.
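The remapping benefit Schulz describes can be sketched as a simple policy function: when an adapter fails or a workload moves, the switch reassigns servers to a different card without anyone touching hardware. The code below is a minimal illustration of that idea; the function name, the flat server-to-adapter map, and the spare-pool policy are all assumptions made for this example, not any vendor's actual behavior.

```python
def remap_on_failure(assignments: dict, failed_adapter: str, spares: list) -> dict:
    """Reassign every server that was using a failed adapter to the first
    available spare (hypothetical policy; real products apply their own rules)."""
    if not spares:
        raise RuntimeError("no spare adapters available for remapping")
    spare = spares.pop(0)
    return {
        server: (spare if adapter == failed_adapter else adapter)
        for server, adapter in assignments.items()
    }

# Three servers share two cards in the cage; nic-A fails, nic-C is a spare.
assignments = {"server-01": "nic-A", "server-02": "nic-A", "server-03": "nic-B"}
new_map = remap_on_failure(assignments, failed_adapter="nic-A", spares=["nic-C"])
print(new_map)
```

The point of the example is that the change is a table update in the switch, not a physical recabling job, which is what makes policy-driven remapping practical.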
You can also use the PCIe bus extender approach to begin your Fibre Channel over Ethernet (FCoE) implementation without sinking a lot of money into unproven, first-generation equipment, Schulz said. You could acquire a few first-generation FCoE cards, place them in the PCIe-approach card cage, and share them among multiple servers. That way you don't have to fork over a large initial capital outlay to obtain cards for every server. Once stable next-generation FCoE cards are available, you can roll them out to each server outside your I/O virtualized network.
As with the Xsigo approach, the PCIe-based I/O virtualization devices have drawbacks, mainly scalability and speed. Aprius is limited to 32 server connections at 10 Gb/sec or 16 server connections at 20 Gb/sec. NextIO's card cage has 14 total slots, though NextIO's Lance said the next-generation product will have 28 total slots, twice the performance and built-in MR-IOV support. VirtenSys' boxes are limited to 16 server connections at 10 Gb/sec per link.
This was first published in January 2010