Virtualization capacity planning can be a daunting task. You're creating virtual machines (VMs) on the fly while moving applications across virtual and physical resources. Ensuring enough storage capacity for all these moving parts now and in the future might seem impossible.
In this podcast, News and Features Writer Todd Erickson talks with Dave Bartoletti, a senior analyst and consultant at Hopkinton, Mass.-based Taneja Group, who explains capacity planning for virtualization. Bartoletti lists the steps in a successful capacity planning process, discusses how disaster recovery (DR) planning is different in virtual environments and tells us which vendors offer helpful capacity planning tools.
You can read the transcript below or download the MP3 to listen to later.
SearchStorage.com: How is storage capacity planning in a virtual environment different from planning in a physical environment?
Bartoletti: I'd say the No. 1 thing is that things move. Tiering, capacity planning and I/O estimates have to take into account that workloads aren't static in a virtual environment. You have to think about what your best- and worst-case consolidation ratios are going to be. How many workloads will share a particular switch or storage port under different conditions? I recommend seriously profiling not only what storage resources your workloads need when they're sitting still, but how those needs change when different workloads are consolidated onto a smaller set of servers. Also, storage capacity in a virtual infrastructure means both storage for the virtual machine images and the data they need. And if you're using server-hosted desktop virtualization, there's another set of considerations: what type of virtual machine images you'll need; whether you're going to use full images or clones; whether you're going to store user data separately from OS data -- all of these complicate the process of buying enough capacity. But what's most important is to understand how much tolerance you have for both waste and contention for resources.
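That worst-case thinking can be sketched with simple arithmetic. Here is a rough illustration of estimating per-port I/O load at different consolidation ratios; the workload profiles and IOPS figures below are invented for illustration, not measurements -- real numbers would come from profiling your own environment:

```python
# Hypothetical per-VM I/O profiles in IOPS (illustrative only).
workloads = {"web": 150, "db": 900, "mail": 300, "batch": 600}

def port_load(iops_per_vm, vms_per_port):
    """Aggregate IOPS hitting one storage port when vms_per_port
    instances of each workload type share that port."""
    return sum(iops * vms_per_port for iops in iops_per_vm.values())

# Compare a conservative and an aggressive consolidation ratio.
for ratio in (2, 6):
    print(f"{ratio} VMs of each type per port -> {port_load(workloads, ratio)} IOPS")
```

Comparing the low and high ratios against a port's rated throughput shows quickly whether an aggressive consolidation plan would push a shared port into contention.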
SearchStorage.com: What steps are required for successful virtualization capacity planning?
Bartoletti: We encourage our customers to make capacity planning a best practice. Think about how you forecast growth in the number of users, in each user's data set and in application data sets. If you run an external-facing application, you should also think about growth in new customers accessing the system.
One of the most common questions we hear as environments grow is 'How many more virtual machines can I safely add to my existing storage infrastructure?' That sounds like an easy question. But depending on how variable your workloads are, or the access patterns of your users, it can be challenging to calculate the answer.
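A back-of-the-envelope version of that question -- reserving headroom precisely because workloads are variable -- could be framed like this. The pool sizes, per-VM footprint and 20% headroom figure are assumptions for illustration, not vendor guidance:

```python
def additional_vms(total_gb, used_gb, gb_per_vm, headroom=0.20):
    """How many more VMs fit in a storage pool while reserving a
    fraction of total capacity as headroom for variable workloads."""
    usable = total_gb * (1 - headroom) - used_gb
    return max(0, int(usable // gb_per_vm))

# 10 TB pool, 6 TB used, 40 GB per VM image plus data, keep 20% free
print(additional_vms(10_000, 6_000, 40))  # -> 50
```

The real answer also depends on I/O contention, not just capacity, which is why profiling consolidated workloads matters as much as counting gigabytes.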
Build a test environment, model your different operational scenarios and scale each of them to match your typical workloads. And make sure to leverage vendor materials. VMware and Microsoft offer capacity planning tools for virtual environments on their websites, and their user groups and communities have excellent resources to help with initial sizing of various workload types. Those looking at Microsoft should also explore System Center's tools and best practices, including what's available in System Center Operations Manager and the performance resource optimization tools. Operations Manager can help you generate useful management reports covering not only what you're using today but also what you used a week ago, so you can build realistic, predictive tiering and capacity tools that are specific to the SLAs and workloads you manage.
SearchStorage.com: What automated tools are available for virtual environment storage capacity planning?
Bartoletti: It's important to start with the tools provided by your hypervisor vendor and the communities that have grown up around those vendors. VMware in particular has a lot of user-generated material that can help you get an idea of your in-place utilization today and how your utilization is growing over time. You need performance metrics that give you a snapshot over several weeks and several months, at least initially. Storage vendors like Dell and NetApp deliver amazing metrics with some of their storage monitoring tools. Dell's SAN HQ for EqualLogic iSCSI SANs is one of the best capacity planning and management tools I've seen for virtual environments. It allows you to treat your storage pools the same way you treat your virtual server pools and grow them incrementally, adding more capacity to storage pools on the fly without taking the environment down.
I also like the tools from Hyper9 and Akorri. Hyper9 lets you profile your current workloads for performance issues and build reliable capacity forecasting models. The key overall is to establish your key performance indicators first. Decide which metrics -- capacity, throughput, headroom -- are most important to you up front, and then explore the tools that make it easiest to generate automated reports for those metrics. Most of these tools offer free downloads so you can play with them and see if they work for your environment before making a significant investment. What you're looking for in reporting is tools that give you both the big picture and a simple drill-down into the specific performance indicators that matter for your environment.