The purpose of I/O virtualization (IOV) is to relieve performance bottlenecks by allowing I/O resources to be divided and shared among servers and storage. Administrators can assign resources to particular virtual machines (VMs) to satisfy their individual I/O demands. IOV can be implemented in a number of ways -- in the host or in the infrastructure -- and specifications have been developed to standardize these approaches.
One approach at the infrastructure level uses PCI Express (PCIe), a server-level connection that allows servers to share network interface cards (NICs). As the number of IOV vendors dwindles, those that remain, such as NextIO, are capitalizing on this approach, while vendors such as Xsigo use InfiniBand technology to establish NIC connections.
In this guide you will find an introduction to I/O virtualization, methods of implementation and specifications that will help to enhance your virtual data center.
Table of contents:
Virtual I/O has become one way to combat the I/O demands that virtual desktops and virtual servers put on storage. Now that more high-demand applications are being virtualized, not all virtual machines can perform fast enough when sharing I/O equally. With IOV implemented at the network adapter, the NIC can be divided into multiple virtual cards, each dedicated to certain VMs to guarantee the performance levels they need. IOV at the switch level allows for individual management of VMs, including, among other settings, their performance characteristics.
Another option is to create I/O gateways using PCIe, InfiniBand or 10 Gbps Ethernet adapters, which allow one NIC to be shared among multiple servers. Learn how different implementations of virtual I/O work, and how to choose which method to use.
According to Dennis Martin, founder of research firm Demartek LLC, the concept of IOV is similar to that of server virtualization -- presenting one piece of hardware as multiple pieces of hardware. One popular specification, SR-IOV, offloads the management of virtual machines' I/O from the hypervisor to the adapter card. MR-IOV, the second specification, uses an external PCIe chassis so that an adapter card can be shared by multiple servers. In this video from TechTarget's 2012 Storage Decisions Chicago conference, Martin explains how these specs work with NICs, RAID controllers and FC host bus adapters.
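To make the SR-IOV idea concrete, the Linux kernel exposes this carving-up of a NIC directly through sysfs. The following is a minimal configuration sketch, assuming an SR-IOV-capable NIC whose interface name is eth0 (the interface name and the number of virtual functions are illustrative and will differ on your hardware):

```shell
# Ask the NIC how many virtual functions (VFs) its hardware supports
cat /sys/class/net/eth0/device/sriov_totalvfs

# Create four VFs; each appears to the host as its own PCIe device
# that can be passed through to a VM, bypassing the hypervisor
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# Confirm the VFs now exist as PCIe devices
lspci | grep -i "virtual function"
```

Each VF carries its own queues and interrupts, which is what lets a VM drive the card directly instead of routing every packet through the hypervisor's software switch.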
Over the past few years, vendors providing IOV technology have gone out of business or been acquired by other companies. While the need for this technology remains, Xsigo (acquired by Oracle in July 2012) has capitalized on it with its I/O Director technology. Xsigo I/O Director relies on InfiniBand, a scalable communications link often used in high-performance computing. In this article, find out how Xsigo I/O Director improved virtual data centers for cloud service provider dinCloud and verification services provider Kroll Factual Data.
While IOV provider Xsigo uses InfiniBand technology, NextIO, another popular vendor, uses PCIe technology in its IOV appliances. PCIe cards replace traditional server I/O cards, while the NextIO appliances use industry-standard I/O cards such as Ethernet or Fibre Channel adapters. This allows the I/O cards to be pooled, then assigned and reassigned to certain virtual machines as needed. Check out this article to find out how NextIO's vNet, vCore and vStor I/O appliances work with PCI Express technology to enhance performance.
Check out these links for more on IOV:
Understand the basics of I/O virtualization and converged I/O
PCIe-based I/O virtualization: Implementation, benefits, drawbacks
Best practices for initiating I/O virtualization