Top 10 requirements for effective server consolidation

Many companies have server consolidation projects underway because, although server acquisition costs are low, total cost of ownership remains high.


According to Gartner, over 60% of enterprise-level IT managers have server consolidation projects underway. The reason is clear: Although server acquisition costs are low, IT managers are often surprised by the high total cost of ownership (TCO). Forrester Research estimates that for each server, Windows licenses and maintenance costs total $5,800 per year. Furthermore, for IT managers, the real costs go beyond dollars. Server maintenance takes a personal toll, as nights and weekends often become the only times available to perform critical tasks and system updates. It's no surprise that these managers are looking for new solutions.

Consolidation promises to address both the cost and time issues. Simplifying the infrastructure results in fewer elements to maintain and better data availability. But to deliver on this promise, the infrastructure must be built on a platform that addresses not only today's issues but anticipates future concerns as well. Thinking about the following 10 elements now can help ensure that a consolidation platform truly delivers a solution, rather than just another set of problems.

Capacity and performance scalability
Your requirements are likely to grow over time, so make sure your solution accommodates your long-term capacity and performance needs. In most environments, capacity is growing 30% to 40% per year. As capacity expands, performance requirements typically grow as well. So check that the solution scales non-disruptively: Both performance and capacity should be expandable without causing downtime or user disruption. That way, management costs remain low as the solution grows.
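
To see how quickly those growth rates compound, consider a rough projection. The sketch below is a minimal illustration in Python; the starting capacity and the 35% growth rate are assumptions chosen for the example, not figures from the article.

```python
# A minimal capacity projection under compound annual growth.
# The starting capacity (10 TB) and 35% growth rate are illustrative
# assumptions, not vendor figures.
def project_capacity(start_tb: float, annual_growth: float, years: int) -> list:
    """Return projected capacity (TB) for each year, including year 0."""
    return [start_tb * (1 + annual_growth) ** year for year in range(years + 1)]

for year, tb in enumerate(project_capacity(10.0, 0.35, 5)):
    print(f"Year {year}: {tb:5.1f} TB")
# At 35% annual growth, requirements more than quadruple in five years,
# so a platform that can't scale non-disruptively forces repeated migrations.
```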

Management scalability
Most IT managers see storage requirements growing, but few have expanding headcount to support that growth. So make sure your consolidation solution accommodates increasing complexity without requiring a larger staff to manage it. If the solution grows within a single environment, it's a viable answer. If it creates multiple environments, it will ultimately require more staff.

Fit into existing processes
Your backup process is mission-critical and probably not something you're eager to change. So a consolidation solution must fit into your existing backup environment. It should also interoperate with your existing device management framework.

Integrated disk-to-disk backup
TheInfoPro reports that 60% of IT managers are considering disk-based data replication to streamline both data backup and data restore. A consolidation solution should integrate disk-to-disk (D2D) backup as part of the solution, rather than as a third-party add-on.

Storage provisioning
The single largest capital expense line item in storage is disk. So it's a bit surprising that most environments see disk utilization rates well below 50%. In other words, more than half of the largest capital expense is wasted. The problem is that conventional architectures, with multiple "islands" of storage, make efficient capacity utilization nearly impossible. A consolidation solution should provide a flexible architecture and the tools needed to help you manage capacity, ultimately saving you money.
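
A back-of-the-envelope comparison shows why islands waste capacity even when the total pool would suffice. The figures in this sketch are invented for illustration:

```python
# Compare utilization across isolated storage "islands" vs. one pool.
# All capacity and usage figures are illustrative assumptions.
islands = [
    {"capacity_tb": 4.0, "used_tb": 3.6},  # nearly full -- forces a purchase
    {"capacity_tb": 4.0, "used_tb": 1.2},  # mostly idle
    {"capacity_tb": 4.0, "used_tb": 0.8},  # mostly idle
]

total_capacity = sum(i["capacity_tb"] for i in islands)
total_used = sum(i["used_tb"] for i in islands)
print(f"Overall utilization: {total_used / total_capacity:.0%}")  # 47%

# The first island needs more disk even though the environment as a whole
# is less than half full; a consolidated pool could absorb that growth
# with free space that has already been paid for.
```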

Performance provisioning
Load balancing is essential. Performance requirements change over time, so one server or NAS appliance may sit nearly idle while another device nearby runs overloaded, and the ultimate result is poor response times. A true consolidation solution should deliver load balancing, so you can fully utilize the resources you have before buying more.
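
In principle, the balancing decision is straightforward: place new work on the least-loaded device. A minimal sketch of that idea follows; the filer names and the load metric are hypothetical, not any vendor's API:

```python
# Choose the least-loaded filer for a new workload.
# "Load" is a hypothetical utilization fraction between 0.0 and 1.0.
filers = {"filer-a": 0.92, "filer-b": 0.15, "filer-c": 0.40}

def least_loaded(load_by_filer: dict) -> str:
    """Return the filer with the lowest current load."""
    return min(load_by_filer, key=load_by_filer.get)

print(least_loaded(filers))  # filer-b: use idle capacity before buying more
```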

True redundancy
When you're consolidating resources, your platform must have exceptional availability. Clustered pairs are good, but you should expect more. A solution that delivers multiple levels of redundancy gives you the ability to take a device out of service and still have fault tolerance on the remaining devices. This allows you to perform routine maintenance tasks during normal work hours. Look for a consolidation solution that delivers true redundancy, not just fewer single points of failure.
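
The difference is easy to quantify. In a simple N+1 model (an assumption for illustration, not any specific product's design), a clustered pair loses all fault tolerance the moment one node is taken down, while a larger group keeps a spare:

```python
# Spare nodes remaining while one node is out for maintenance.
# Assumes a simple model: the cluster needs `required` nodes to carry
# the load, and every node beyond that count is a spare.
def spares_during_maintenance(total: int, required: int) -> int:
    return (total - 1) - required

print(spares_during_maintenance(total=2, required=1))  # 0 -- no fault tolerance
print(spares_during_maintenance(total=4, required=2))  # 1 -- still redundant
```

With the pair, maintenance must wait for off-hours; with the larger group, it can safely happen during the workday.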

Tight Windows integration
If you're using Windows, your user authentication procedures are well established. You will want to leverage that with a solution that supports Active Directory and access control lists (ACLs). If you have a mixed Windows/UNIX/Linux environment, look for a solution that allows users to log in just once to gain access to all files for which they're authorized.

Low TCO
The initial purchase price is important, but low management costs are even more critical. Consider the costs of getting the system up and running and whether or not it fits easily into your existing environment. Does it leverage existing processes? What happens as the environment grows? A solution that integrates easily, leverages resources and grows seamlessly will provide lower TCO.
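
Using the Forrester estimate quoted earlier ($5,800 per server per year), a rough three-year comparison makes the point; the server counts and purchase prices below are illustrative assumptions:

```python
# Rough TCO comparison: recurring costs dominate the purchase price.
# $5,800/server/year is the Forrester estimate cited in this article;
# server counts and acquisition prices are illustrative assumptions.
def tco(servers: int, price_each: float, years: int,
        annual_cost_each: float = 5800.0) -> float:
    return servers * (price_each + annual_cost_each * years)

before = tco(servers=20, price_each=4_000.0, years=3)
after = tco(servers=5, price_each=12_000.0, years=3)  # fewer, larger nodes
print(f"Unconsolidated: ${before:,.0f}")  # $428,000
print(f"Consolidated:   ${after:,.0f}")   # $147,000
```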

Open storage
Buying disk is a large, recurring expense. As your environment grows, you're sure to need more, so it just makes sense to leverage the disk you already have whenever possible. Just as critical is the ability to keep your future options open. Look for a solution that supports multi-vendor storage across a wide range of suppliers, so down the road you have the flexibility to buy the disk that best meets your needs.

Evolution of a consolidation engine
So which solutions meet these requirements? For many IT managers, NAS is becoming a popular approach, but most NAS solutions have limitations.

Conventional NAS
Early NAS solutions were standalone devices that combined hardware, a filer operating system and storage arrays into a single unit. This architecture, which is still the most common today, excels at ease of use but suffers from two significant limitations. First, direct-attached storage severely limits the ability to reuse existing resources and to shop the market for innovative solutions. Second, scalability is limited. When a unit's capacity or performance capabilities are fully loaded, the only option is to add a new unit, which incurs significant management overhead as users and data are migrated among the platforms.

SAN-attached NAS
When vendors attached NAS to the SAN, they took the first step towards addressing the issues. The new devices, called NAS gateways, incorporated processors and an operating system but no storage. All disk was separately housed in Fibre Channel-attached arrays. While this did allow for limited multi-vendor connectivity, it did not address the issue of scalability. Like conventional NAS, these devices still maintained a one-to-one mapping between the processing resources and the storage. That is, each gateway could access only the data managed by that gateway. If that gateway became fully loaded, the IT manager was forced to migrate users and data to another device.

Scalable NAS
The next and most exciting evolutionary step -- scalable NAS -- provides a new architectural approach to consolidation. Like traditional NAS, it's easy to use. But it goes far beyond conventional NAS, because it addresses both the scalability and open storage issues.

On one hand, scalable NAS works like conventional NAS: It integrates all hardware and software in one solution. It offers full support for Windows, Linux and UNIX clients and servers, includes snapshot capabilities and supports popular backup applications.

Scalability of the solution is designed in from the start. The architecture allows any filer to access any storage, so the traditional limitation -- servers being mapped directly to storage -- is eliminated. This decoupling is enabled by the use of "virtual filers." To users, a virtual filer appears to be a complete NAS device, with a unique name, IP address and authentication settings. But to administrators, the virtual filer is a powerful management tool. A single scalable NAS device can accommodate numerous virtual filers, any of which can be transparently migrated among physical devices for load balancing, failover and system maintenance purposes. With this capability, the environment can now grow without migrating users or data and without expanding management resources.
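
One way to picture the virtual filer model is as a mapping from client-visible filers to physical nodes that administrators can rewrite at will. The sketch below is purely illustrative; the names and migration logic are assumptions, not ONStor's implementation:

```python
# Toy model of virtual filers: clients address stable names and IPs,
# while the mapping to physical hardware can change underneath them.
# All names and logic here are illustrative assumptions.
placement = {
    "vfiler-sales": "node-1",
    "vfiler-eng":   "node-1",
    "vfiler-hr":    "node-2",
}

def migrate(vfiler: str, target_node: str) -> None:
    """Reassign a virtual filer to another physical node. Clients keep
    using the same name and IP, so no user or data migration occurs."""
    placement[vfiler] = target_node

migrate("vfiler-eng", "node-2")  # relieve load on node-1
print(placement)
```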

Scalable NAS represents the next major step forward for NAS architectures. In addition to being cost-efficient and simple to implement, the platform is also designed to expand effortlessly to meet growing requirements. By addressing open storage and scalability needs, it delivers a new approach to server consolidation.

About the author: As Vice President of Marketing, Jon brings over 20 years of storage experience to ONStor. Prior to ONStor, he served at Maxtor as Sr. Director of Marketing, leading the marketing department of the company's Network Systems Group, a startup NAS vendor. Prior to that, Jon served for two years as Vice President of Marketing for Micropolis, a developer of hard disk drives. He also worked at Quantum as Director of Marketing, managing the strategic direction of the company's enterprise storage products, and at Seagate, where he served as an engineering manager. Jon holds a B.S. in Mechanical Engineering, a B.A. in Economics, and an MBA, all from Stanford University.
