This tip is excerpted from a discussion thread posted on the SearchStorage website. The question was posed by user RickLR, who asked what issues he needs to consider when determining the proper logical unit number (LUN) size for his virtual storage. Two users offered their approaches to the problem. Ziggy S said using smaller LUNs can help, but noted there's a downside; Alan McLachlan countered that it's not quite that simple. We then asked storage expert Brien Posey to weigh in on the conversation. Brien laid out the arguments for creating one large LUN versus many smaller ones.
Original question posed by RickLR
On our IBM Shark, I can create one LUN over an entire RAID set that shows up at the host computer as one disk, or I can create several smaller LUNs and have several disks at the host computer. Which is better? I was told once that it was better to have multiple smaller LUNs than one big one since many smaller LUNs would allow more concurrent I/Os from the operating system (OS).
This would also apply to our EMC Symms with meta-volumes.
Ziggy S: Small LUN pros and cons
The advantage of many smaller LUNs is that you can spread them over many array groups and use Logical Volume Manager (LVM) to create your volume groups (concatenation or RAID 0). This will give you the best potential performance on the back end. It is not the LUN size, though. It is the ability to distribute the I/O across many array groups. On Shark, I would suggest that an array group has no more than four LUNs from the same RAID 0 volume group.
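As a rough illustration of the approach Ziggy S describes, here is a hedged Linux LVM sketch that stripes a logical volume across four small LUNs. The device names /dev/sdb through /dev/sde are placeholders for whatever devices your array presents; the original Shark environment would use its own platform tooling, so treat this only as a sketch of the striping idea:

```shell
# Hypothetical device names -- substitute the LUNs presented by your array.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde

# One volume group spanning all four LUNs.
vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Stripe a logical volume across all four LUNs (RAID 0 style),
# so I/O is distributed across multiple back-end array groups.
lvcreate --stripes 4 --stripesize 64 --extents 100%FREE --name lv_data vg_data

mkfs.ext4 /dev/vg_data/lv_data
```

The --stripes count matches the number of LUNs so every write is spread across all of them; the 64 KB stripe size is an arbitrary starting point you would tune for your workload.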
The downside of many smaller LUNs is that you eat up internal addresses. If you use a LUN size of 2 GB, the maximum capacity of your Shark is 8 TB (4,096 x 2 GB) no matter what size physical disk you use.
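The address-space math behind that ceiling works out as follows (a quick arithmetic check, not Shark-specific tooling):

```shell
# 4,096 internal LUN addresses, each consumed by one 2 GB LUN:
max_luns=4096
lun_size_gb=2
total_gb=$((max_luns * lun_size_gb))
total_tb=$((total_gb / 1024))
echo "Maximum usable capacity: ${total_gb} GB (${total_tb} TB)"
```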
Alan McLachlan: LUNs and application server load issues
If you take the advice of Ziggy S -- "The advantage of many, smaller LUNs is that you can spread them over many array groups and use LVM to create your volume groups (concatenation or RAID 0). This will give you the best potential performance on the back end." -- you create more load on the application server. Whether a single larger LUN or LVM-concatenated or -striped smaller LUNs is the better approach depends on a number of factors: the applications, server capacity, caching and RAID performance on the subsystem, host OS file system management characteristics and so on. You don't want to make your back end potentially better, only to find it never gets driven to that potential because your host is slowing things down. I'm not saying you're wrong, just that it's not necessarily that simple.
If you have to run the LVM for ANY reason, it will add processing to all I/Os regardless -- so you might as well get the best you can from it. At the end of the day, server-side file system issues can subvert block I/O characteristics, so a little extra overhead at the block device layer can become moot (which makes you right in those cases as well).
Brien Posey: LUN size should be guided by requirements
The subject of which LUN size is the most appropriate has been hotly debated. As such, let me say up front that there is no such thing as an ideal LUN size that is appropriate for every situation. Everyone’s needs are different, so it is important to choose a LUN size that meets your own unique requirements rather than adhering to a blind standard.
There are a number of factors that must be considered when choosing LUN sizes. This list is not all-inclusive, but it will give you some things to think about.
First, how do you anticipate the LUNs being used? If, for example, you're going to need a huge amount of space for file storage, it may be appropriate to create a single LUN. Similarly, I have seen situations in which a storage array was configured as a single LUN and then treated as one large cluster shared volume.
While I'm on the subject of how the LUNs will be used, it's important to think about any established best practices that have been put into place by software vendors. To give you a more concrete example, suppose you're going to use the LUNs to store virtual hard disks (VHDs). Some hypervisor vendors recommend you limit the number of active VHDs per LUN (static, unused VHDs don’t count).
One of the best arguments in favor of creating a number of smaller LUNs is backup and recovery. Imagine for a moment that a LUN experienced a problem and needed to be rebuilt and restored. It goes without saying that it is much faster to restore a small LUN than a large one (assuming that both are nearly full).
One last consideration is load balancing. Imagine you have a 32 TB storage array that you want to use for virtual machine (VM) storage. You have a lot of options for carving up the available storage. You could create a single 32 TB LUN, two 16 TB LUNs or four 8 TB LUNs. You could even create 32 single terabyte LUNs.
If you created a single LUN, it's possible that storage performance could drop off as more and more VMs are created. In VMware environments, you can load balance the VM workload by using Storage DRS. However, Storage DRS requires multiple source datastores, and the more source datastores you have available, the more options Storage DRS will have for balancing the workload. Even so, you probably wouldn't want to create 32 separate LUNs in this situation, because it would be very easy for a single VM to exceed 1 TB in size. As such, you'll have to weigh your need to load balance against the size of your virtual machines.
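To make that trade-off concrete, this small sketch enumerates the carving options for the hypothetical 32 TB array: as the LUN count grows, per-LUN capacity shrinks, which is what eventually caps the size of any single VM's virtual disks:

```shell
array_tb=32
# The LUN counts Brien mentions: 1, 2, 4, or 32.
for luns in 1 2 4 32; do
  size_tb=$((array_tb / luns))
  echo "${luns} LUN(s) of ${size_tb} TB each"
done
# With 32 LUNs, each is only 1 TB -- too small once a single VM's
# virtual disks grow past that point.
```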