Expert video: VMware RDM and iSCSI initiator for data access
Date: Feb 12, 2013
Raw device mapping (RDM) in a virtual environment creates a direct connection between a virtual machine and a SAN LUN. By taking advantage of the VMware RDM feature, I/O-intensive applications get a performance boost because RDM passes SCSI commands straight through to the existing SAN. If a virtualized environment's data access goes through a virtual machine file system (the more common method), migrating existing data into that file system can mean server downtime and the allocation of additional storage space. You can avoid this by using VMware RDM or by mounting storage with an iSCSI initiator inside the guest. However, creating an RDM also means losing access to some popular virtualization features. In this video from a TechTarget Storage Decisions seminar, keynote speaker Howard Marks, founder and chief scientist at DeepStorage.net, explains the benefits of using both VMware RDM and iSCSI initiators to access data.
One thing we have to think about is how we're going to provide access to data. [When] most people talk about VMware or Hyper-V, [they] talk about creating that clustered file system and using virtual machine disk files [VMDKs] or virtual hard disks [VHDs] to store all their data. However, you can instead install an iSCSI initiator in the guest to connect to an external disk, or create an RDM from a guest to an existing disk.
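The first of those two options, the in-guest iSCSI initiator, can be sketched with the open-iscsi tools on a Linux guest. The portal address, target IQN, device name and mount point below are hypothetical placeholders, not values from the talk; substitute your own array's details.

```shell
# Sketch: mounting an existing SAN LUN from inside a Linux guest
# using open-iscsi. All names below are hypothetical examples.

# Discover the targets the array's iSCSI portal exposes
iscsiadm -m discovery -t sendtargets -p 192.168.10.50

# Log in to the target that backs the existing LUN
iscsiadm -m node -T iqn.2001-05.com.example:exchange-data \
    -p 192.168.10.50 --login

# The LUN now appears as an ordinary block device (e.g., /dev/sdb)
# and can be mounted where the application has always found its data
mount /dev/sdb1 /var/exchange-data
```

On a Windows guest the equivalent is the built-in Microsoft iSCSI Initiator; either way, the hypervisor never sees the LUN, so it stays dedicated to that one application.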
Both of these let you mount an existing LUN. So if you have Exchange Server, and [it's] already running on a SAN, [and] you virtualize that Exchange Server -- you run the VMware converter, Microsoft converter or some third-party product to turn that physical machine into a virtual machine -- [then] if you just convert the C drive, you can mount the existing storage in the place it's always been. [That way] that server [won't] be down for however long it takes to transfer all that data into a VMDK, and [you won't] have to allocate the space to move it into a VMDK.
In addition, because the data stays on its own LUN, you can use storage array snapshots to protect it. Otherwise, one LUN on a storage array is typically going to be a VMware data store or a Hyper-V cluster shared volume, and that shared volume will have multiple virtual machines in it. That substantially reduces the value of things like array replication and array snapshots: you can't tell your NetApp, EqualLogic or Oracle disk array to take a snapshot of a LUN and coordinate with the application to make sure the data is in an application-consistent form if there are 10 applications on that LUN.
But if you use an RDM or the iSCSI initiator within the guest, that LUN still belongs to one application, and the coordination between the snapshot manager and [Volume Shadow Copy Service] VSS or the script you use to quiesce your application will be much simpler. In addition, if you want to use Windows clustering -- high availability through a Windows cluster -- Windows clusters in data stores aren't supported, but Windows clusters that share an external disk via an RDM or the iSCSI initiator in the guest work just fine.
So RDMs create a VMDK, but that VMDK is just metadata pointing at the raw LUN. There are some support issues: you don't get to do Storage vMotion, and you don't get to use some of the high-availability or fault-tolerance features of VMware. But if you're using Microsoft clustering to provide that level of high availability, you might not care about that.
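The metadata-only VMDK Marks describes is what `vmkfstools` produces on the ESXi host when you map a raw LUN. A minimal sketch, assuming a hypothetical device identifier and datastore path:

```shell
# Sketch: creating an RDM mapping file on an ESXi host with vmkfstools.
# The NAA device ID and datastore path are hypothetical placeholders.

# Physical compatibility mode (-z): SCSI commands pass straight through
# to the array; the .vmdk created here holds only mapping metadata
vmkfstools -z /vmfs/devices/disks/naa.600508b4000156d70001 \
    /vmfs/volumes/datastore1/exchange/exchange-data-rdm.vmdk

# Virtual compatibility mode (-r) would be used instead if you wanted
# the hypervisor to intercept I/O (e.g., for VMware snapshots)
vmkfstools -r /vmfs/devices/disks/naa.600508b4000156d70001 \
    /vmfs/volumes/datastore1/exchange/exchange-data-rdm.vmdk
```

The resulting mapping file is then attached to the VM like any other virtual disk, while the data itself never leaves the original LUN.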
Hyper-V pass-through LUNs are logically the equivalent. Because the virtual machine has direct access to the storage system, you can use the array's storage management tools in the guest -- visibility into the array that would be hidden if you were using a more conventional data store.