
Virtualization Strategy

Integrating storage and server virtualization

By Rick Cook

SearchVirtualStorage.com

What you will learn about virtualization: Knowing your storage and server hardware and checking its compatibility is a critical step in integrating virtual servers and virtual storage. This tip offers practical advice for integrating virtualization products.

Storage virtualization remains a fairly new technology, and Windows server virtualization is even newer, so it is taking time to make sure both kinds of virtualization play well together. But that is happening. Increasingly, any compatibility problems you encounter are likely to show up as poor performance rather than outright failure. To ensure adequate performance, you need to thoroughly understand both the server and storage virtualization products you are trying to integrate.

Of course, you've got to know what you're virtualizing. One of the first steps in any virtualization project, whether it involves storage, servers or (especially) both, is to conduct an inventory of the servers, storage devices and other components that will be involved. That includes the host bus adapters (HBAs) and storage area network (SAN) switches, along with their software and firmware revisions.

Check the hardware compatibility lists (HCL) for both virtualization products and make sure your configuration conforms. This is getting easier as virtualization vendors work to make their products interoperable. For example, VMware Inc., now owned by EMC Corp., is aggressively promoting its VMware Infrastructure 3, which ties VMware's ESX Server 3 and related products to storage virtualization and its associated hardware and software. Recently, both Emulex Corp. and QLogic Corp. announced HBAs that are supported by VMware's architecture.
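
A simple way to keep that inventory and HCL check honest is to script it. The sketch below is a minimal Python example, assuming you have already exported your inventory and each vendor's HCL entries to CSV files; the file names, column layouts and the idea of matching on vendor/model/firmware are illustrative assumptions, not anything mandated by the virtualization products themselves.

```python
# Minimal sketch: cross-check a hardware inventory against vendor HCLs.
# File names and column layouts are assumptions for illustration --
# adapt them to however you actually record your inventory.
import csv

def load_rows(path):
    """Read a CSV file into a list of dictionaries, one per row."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def load_hcl(path):
    """Build a set of (vendor, model, firmware) combinations the vendor supports."""
    return {(r["vendor"], r["model"], r["firmware"]) for r in load_rows(path)}

def check_inventory(inventory_path, hcl_paths):
    """Flag inventory entries missing from any of the supplied HCLs."""
    hcls = [load_hcl(p) for p in hcl_paths]
    problems = []
    for item in load_rows(inventory_path):
        key = (item["vendor"], item["model"], item["firmware"])
        for path, hcl in zip(hcl_paths, hcls):
            if key not in hcl:
                problems.append((item["host"], key, path))
    return problems

if __name__ == "__main__":
    # inventory.csv columns: host, vendor, model, firmware
    # Each HCL CSV has columns: vendor, model, firmware
    for host, key, hcl in check_inventory(
        "inventory.csv", ["server_virt_hcl.csv", "storage_virt_hcl.csv"]
    ):
        print(f"{host}: {key} not found in {hcl}")
```

Anything the script flags is a candidate for a firmware or driver update before you go any further with the integration.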

It takes some additional steps to integrate virtual servers and virtual storage. (For one example, see this study, which was conducted to determine if using virtual storage in virtual server environments makes sense.)

It's best to start by virtualizing the lightly loaded servers, both for cost-benefit and performance reasons. If you have, say, three servers and each runs at less than 30% utilization, you'll see a more immediate economic benefit by consolidating them all onto one server than you would by starting with heavily loaded servers running high-intensity applications. Your operation will also take less of a hit while you iron out any performance problems than it would if you started with the heavily loaded servers.
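
As a rough illustration of that rule of thumb, the short Python sketch below picks out the lightly loaded servers (those below a 30% average utilization threshold) and checks whether their combined load would fit on a single host with some headroom. The sample utilization figures and the 70% ceiling are assumptions for illustration, not vendor guidance.

```python
# Rough sketch: pick consolidation candidates by average utilization.
# The sample numbers and the 70% ceiling are illustrative assumptions.

def consolidation_candidates(servers, threshold=0.30):
    """Return servers whose average utilization is below the threshold."""
    return {name: util for name, util in servers.items() if util < threshold}

def fits_on_one_host(candidates, ceiling=0.70):
    """Check whether the combined load leaves headroom on a single host."""
    combined = sum(candidates.values())
    return combined, combined <= ceiling

if __name__ == "__main__":
    servers = {"web01": 0.18, "file01": 0.22, "print01": 0.12, "db01": 0.85}
    candidates = consolidation_candidates(servers)
    combined, ok = fits_on_one_host(candidates)
    print(f"Candidates: {sorted(candidates)}")
    print(f"Combined load {combined:.0%} -- "
          f"{'fits' if ok else 'too heavy'} for one host")
```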

Performance issues
The surprising thing about modern virtualization is how little overhead it adds, but it still carries a cost in time, performance and capacity. That cost is well worth paying as long as the benefits outweigh it, but you do need to keep track of it. That means monitoring the system's underlying performance with tools such as Iometer or your vendors' own utilities.
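
One practical way to keep track is to run the same benchmark workload before and after virtualizing and compare the results. The Python sketch below assumes you have boiled each run down to a simple CSV of per-target IOPS and average latency; the column names and the 15% regression threshold are hypothetical conventions for this example, not Iometer's own output format.

```python
# Sketch: compare baseline vs. virtualized benchmark runs and flag regressions.
# Assumes each run was summarized to a CSV with columns:
#   target, iops, avg_latency_ms
# Column names and the 15% tolerance are illustrative assumptions.
import csv

def load_run(path):
    """Map each test target to its (IOPS, average latency in ms)."""
    with open(path, newline="") as f:
        return {
            row["target"]: (float(row["iops"]), float(row["avg_latency_ms"]))
            for row in csv.DictReader(f)
        }

def find_regressions(baseline_path, current_path, tolerance=0.15):
    """Report targets whose IOPS dropped by more than the tolerance."""
    baseline = load_run(baseline_path)
    current = load_run(current_path)
    regressions = []
    for target, (base_iops, base_lat) in baseline.items():
        if target not in current:
            continue
        cur_iops, cur_lat = current[target]
        if cur_iops < base_iops * (1 - tolerance):
            regressions.append((target, base_iops, cur_iops, base_lat, cur_lat))
    return regressions

if __name__ == "__main__":
    for target, b_iops, c_iops, b_lat, c_lat in find_regressions(
        "baseline_run.csv", "virtualized_run.csv"
    ):
        print(f"{target}: IOPS {b_iops:.0f} -> {c_iops:.0f}, "
              f"latency {b_lat:.1f}ms -> {c_lat:.1f}ms")
```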

There are some kinds of storage you don't want to virtualize in a virtual server installation, notably those directly involved in running the virtual machines themselves. VMware, for instance, recommends keeping a VMFS (VMware's native file system) partition on either locally attached storage or LUN-0 (logical unit number) of your SAN to use for swap space. Because virtual machine systems rely heavily on swapping, it's important that this partition be sized and configured for best performance. For this reason, VMware doesn't recommend putting the swap partition on a network attached storage (NAS) device. For the same reason, the VMware kernel core dump partition (vmkcore) should also be locally attached or on LUN-0.
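
To see why sizing that partition matters, consider the simple arithmetic in the Python sketch below. It estimates how much swap a host needs when memory is overcommitted, assuming (as a rough rule of thumb for illustration, not VMware's official sizing guidance) that swap should at least cover the gap between the memory allocated to all virtual machines and the physical RAM left for them after the hypervisor takes its share.

```python
# Rough sketch of swap-partition sizing under memory overcommit.
# The sample figures and the "allocated minus available" rule are
# illustrative assumptions -- check your hypervisor vendor's own guidance.

def swap_needed_gb(vm_allocations_gb, physical_ram_gb, reserved_gb):
    """Estimate swap space as total VM memory minus RAM available to VMs."""
    total_vm_gb = sum(vm_allocations_gb)
    available_gb = physical_ram_gb - reserved_gb
    return max(0.0, total_vm_gb - available_gb)

if __name__ == "__main__":
    vm_allocations = [4.0, 4.0, 2.0, 2.0, 1.0]   # per-VM memory, in GB
    swap = swap_needed_gb(vm_allocations, physical_ram_gb=8.0, reserved_gb=1.0)
    print(f"Estimated swap needed: {swap:.1f} GB")
```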


Rick Cook has been writing about mass storage since the days when the term meant an 80 K floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last 20 years he has been a freelance writer specializing in issues related to storage and storage management.


19 Jul 2006
