VMware shared storage environment offers unique challenges

A VMware shared storage architecture offers a wide range of benefits but also places new demands on your existing storage configuration. This article explains the major challenges and chief strategies for building shared storage architectures that are compatible with your virtual server infrastructure.

VMware has brought new levels of flexibility and efficiency into the data center. Not only can many physical servers be consolidated onto a single VMware host, but those virtual servers are transportable to other hosts, other storage systems and even other data centers. Furthermore, this can all be done transparently without applications experiencing any downtime. For the data center to fully leverage these capabilities, a shared storage architecture must be implemented alongside the virtual server infrastructure.

The shared storage infrastructure that accompanies a virtualization deployment needs to accomplish several objectives:

  • It must provide the shared connectivity that a cluster of virtualization hosts requires to execute functions such as virtual machine (VM) migration and off-site replication.
  • The storage system must be able to handle the highly randomized I/O consolidated servers tend to generate.
  • It must be cost-effective to purchase, scale and operate.

The VMware storage challenge

As mentioned above, VMware places unique demands on storage. Application storage traffic is no longer tied to a particular server; it can come from any of dozens of VMs on that host. This creates a highly random, continuous I/O pattern that prior generations of storage systems did not have to contend with. Both the storage system and the network connecting it to the physical hosts come under added pressure, which directly impacts performance and operation.

These pressure points have caused the shared storage infrastructure to negatively impact the ROI of the virtualization project. As a result, IT planners are looking for new ways to drive down the cost of the storage investment while still meeting these new performance demands. Since it can directly impact the investment costs of the solution, one of the first considerations when designing a VMware shared storage infrastructure is to decide which protocol to use.

Selecting a storage protocol

Vendors have expended a lot of effort trying to convince users which protocol they should use for their VMware storage infrastructure. The typical protocol choices are Fibre Channel (FC), iSCSI or network-attached storage (NAS) via NFS. While each of these protocols has its pros and cons, the primary motivation for many buyers is to reduce cost.

Fibre Channel is still, by far, the dominant storage protocol in VMware environments. The latest study by VMware shows FC with a 70% or greater share of the market. However, most industry observers and users consider FC more complex and expensive than either of the Internet Protocol (IP)-based alternatives. FC tends to win the selection process not because it delivers cost savings, but because of its performance-tuning capabilities and near-universal support among storage vendors.

The IP-based protocols, iSCSI and NAS (NFS), are growing in popularity because of their cost advantage and potential ease of use. Since both run over traditional Ethernet, there is no need to adopt a new networking technology, implement a different cable design or buy new switches. A virtual LAN that carries only storage traffic can be created easily, and existing server Ethernet network cards can be used.

One challenge is that both of these protocols carry overhead from processing IP traffic. iSCSI has an added latency disadvantage because SCSI commands must be encapsulated in, and extracted from, TCP/IP packets. In reality, many data centers do not have performance requirements where this overhead would be a problem. But as data centers continue to grow, and VM densities are pushed higher by ever-increasing processing power, the overhead of IP-based protocols may become an issue.
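
To put that overhead in perspective, here is a back-of-the-envelope sketch of how much of each Ethernet frame is consumed by encapsulation headers rather than data. The header sizes are standard, but treating every frame as carrying a full iSCSI header is a simplification, and the MTU values are simply the common defaults.

    # Rough per-frame header overhead for iSCSI over Ethernet.
    # Assuming one full iSCSI header per frame is a simplification
    # that slightly overstates the overhead.
    ETHERNET_OVERHEAD = 18   # bytes: MAC header plus frame check sequence
    IP_HEADER = 20           # bytes: IPv4 without options
    TCP_HEADER = 20          # bytes: without options
    ISCSI_HEADER = 48        # bytes: iSCSI basic header segment

    def header_overhead_pct(mtu):
        """Percentage of the on-wire frame consumed by headers."""
        headers = ETHERNET_OVERHEAD + IP_HEADER + TCP_HEADER + ISCSI_HEADER
        return 100.0 * headers / (mtu + ETHERNET_OVERHEAD)

    for mtu in (1500, 9000):  # standard vs. jumbo frames
        print(f"MTU {mtu}: ~{header_overhead_pct(mtu):.1f}% header overhead")

At a standard 1,500-byte MTU the headers claim roughly 7% of every frame, while jumbo frames cut that to just over 1%, which is one reason jumbo frames are commonly recommended on iSCSI storage networks.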

Many of these overhead problems can be addressed by adding network interface cards (NICs) with iSCSI and/or IP offload capabilities. There are also IP NICs that provide quality-of-service (QoS) controls for each individual VM on a host, essentially guaranteeing bandwidth to mission-critical VMs. Of course, adding these cards increases cost and complexity.
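
As a rough illustration of how per-VM QoS works, the sketch below divides a 10 Gigabit Ethernet link among VMs by weighted shares, the general scheme such NICs and hypervisor I/O controls apply. The VM names and share weights are hypothetical.

    # Illustrative weighted-share bandwidth allocation (not a vendor API).
    LINK_GBPS = 10.0

    vm_shares = {            # hypothetical VMs and their share weights
        "sql-prod": 400,     # mission-critical VM gets the largest weight
        "exchange": 200,
        "web-01": 100,
        "web-02": 100,
    }

    total_shares = sum(vm_shares.values())
    for vm, shares in vm_shares.items():
        guaranteed = LINK_GBPS * shares / total_shares
        print(f"{vm:10s} ~{guaranteed:.2f} Gbps guaranteed under contention")

Under this weighting, the mission-critical VM is assured half the link when all four VMs contend for it, yet can still use idle bandwidth when the others are quiet.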

To sum it up, as the environment scales, so does the complexity of that environment, regardless of the protocol selected. IP-based protocols have a clear advantage when a virtualization project kicks off simply because they are already present in the environment. Quite often there is no need to abandon the protocol as the project scales, but increased cost and complexity will be introduced. In fairness, this should be offset by the savings that a more densely packed VM architecture will deliver.

SSD versus hard disk drives

The next key decision is how to leverage solid-state drives (SSDs) in the environment. SSDs allow the storage infrastructure to better cope with the random I/O problem described earlier. Because they are not rotational devices, they provide direct access to data, and a large number of simultaneous requests does not degrade performance the way it does on a hard disk-based system. SSD performance comes at a cost premium, however, so SSDs are typically used sparingly as part of a caching or tiering strategy.
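
Some quick sizing math illustrates the gap. The IOPS figures below are ballpark assumptions rather than measurements, but they capture the scale of the difference on small random reads.

    # How many 15K RPM disks does it take to match one SSD on random I/O?
    import math

    HDD_RANDOM_IOPS = 180      # ballpark for a 15K RPM drive
    SSD_RANDOM_IOPS = 40_000   # conservative figure for an enterprise SSD

    target_iops = 25_000       # hypothetical consolidated-VM workload
    hdds = math.ceil(target_iops / HDD_RANDOM_IOPS)
    ssds = math.ceil(target_iops / SSD_RANDOM_IOPS)
    print(f"{target_iops:,} random IOPS: {hdds} HDDs vs. {ssds} SSD")

Matching the random-I/O capability of a single SSD can take well over a hundred spindles, which is why even a small amount of solid-state capacity changes the design conversation.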

Local SSD versus shared SSD

Almost every virtual environment can benefit from a number of SSDs. The question to ask is, "Where should solid-state storage be leveraged?" Typically, there are two options: as part of the shared storage system or installed locally on the servers hosting the VMs.

The location of the SSD is also an architectural decision: where it sits affects how the storage network is designed and what type of storage controller is needed. Local SSD is often combined with caching software that serves read traffic from the host, essentially eliminating 50% or more of the storage I/O that would otherwise traverse the storage network.
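
A simple model makes the point. Assuming a hypothetical 70/30 read/write mix and an 80% cache hit rate (both numbers to adjust for your own workload), more than half of the I/O never leaves the host:

    # How a host-local read cache shrinks storage-network traffic.
    total_iops = 20_000
    read_fraction = 0.7      # assumed 70/30 read/write mix
    cache_hit_rate = 0.8     # assumed fraction of reads served locally

    absorbed = total_iops * read_fraction * cache_hit_rate
    remaining = total_iops - absorbed
    print(f"{remaining:,.0f} of {total_iops:,} IOPS cross the network "
          f"({100 * remaining / total_iops:.0f}%)")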

Running the caching application on the local host has its downsides. First, it consumes host CPU resources. Second, local caching can complicate VM mobility, because the caching software must ensure the cache is evicted before a VM is moved to another host. Finally, each host may not make effective use of all its available SSD capacity.
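
The mobility issue is easiest to see in code. Below is a minimal sketch, not any vendor's product, of a host-local write-through read cache with the eviction hook such software must invoke before a VM migrates, so no stale copies linger on the old host if the VM later returns.

    # Minimal sketch of a host-local, write-through read cache.
    class HostReadCache:
        def __init__(self):
            self._cache = {}  # (vm, block) -> data

        def read(self, vm, block, backend):
            key = (vm, block)
            if key not in self._cache:        # miss: fetch from shared array
                self._cache[key] = backend.read(vm, block)
            return self._cache[key]

        def write(self, vm, block, data, backend):
            backend.write(vm, block, data)    # write through to shared array
            self._cache[(vm, block)] = data   # keep the local copy current

        def evict_vm(self, vm):
            """Call before vMotion: drop every block cached for this VM."""
            for key in [k for k in self._cache if k[0] == vm]:
                del self._cache[key]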

Running SSD on the shared storage device may require a faster network and a more powerful storage controller to fully exploit the performance potential of memory-based storage.
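
Some rough arithmetic illustrates why. A modest SSD tier can easily exceed the capacity of a single storage-network link; the per-drive throughput below is an illustrative assumption.

    # Compare an SSD tier's aggregate throughput to network link capacity.
    import math

    SSD_MBPS = 500           # assumed sustained throughput per SSD
    ssd_count = 8
    tier_mbps = SSD_MBPS * ssd_count   # 4,000 MB/s aggregate

    for link_name, link_gbps in (("10GbE", 10), ("40GbE", 40)):
        link_mbps = link_gbps * 1000 / 8        # gigabits/s -> megabytes/s
        links_needed = math.ceil(tier_mbps / link_mbps)
        print(f"{tier_mbps:,} MB/s tier needs {links_needed} x {link_name}")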

Effective VMware storage architecture design plays a large part in a successful virtualized server implementation. Almost every data center, no matter its size, will eventually encounter storage performance bottlenecks in its virtual server deployment. Indeed, the design of the architecture has a significant bearing on VM and hypervisor host density, and on the number of hypervisors the storage infrastructure itself can support.

About the author:
George Crump is a longtime contributor to TechTarget, as well as president and founder of Storage Switzerland LLC, an IT analyst firm focused on the storage and virtualization segments. Before founding Storage Switzerland, George was chief technology officer in charge of technology testing, integration and product selection at one of the largest data storage integrators in the U.S.


This was first published in August 2013
