LeftHand virtualized storage helps Texas school board with synchronous replication

Learn how virtualized storage from LeftHand Networks helped the Texas Association of School Boards (TASB) move toward synchronous replication between its two data centers.

Storage virtualization came part and parcel with the implementation of networked storage for the Texas Association of School Boards (TASB) as it sought to rein in and consolidate 10 TB of storage that was either internal or directly attached to 70 Hewlett-Packard (HP) Co. servers. The Austin-based TASB's short list came down to storage systems from EMC Corp., HP, LeftHand Networks Inc. and Xiotech Corp. before the IT staff settled on LeftHand Networks' Network Storage Module 2120 G2 (NSM 2120 G2), largely because features such as replication, snapshots and virtualized storage came bundled into the product at no extra cost. The LeftHand product was a prominent example of a class of arrays with embedded storage virtualization.

"We knew having these features sit on the hardware at the get-go would be easier to work with than adding them later on," said Tony Fowlie, technical architect at TASB. "That was a pretty good deal for us."

As an HP shop, TASB took a favorable view of the October 2008 announcement that HP had signed an agreement to acquire LeftHand Networks. Ease of use and installation were other key considerations during TASB's evaluation process.


TASB's February 2009 wizard-based setup of four iSCSI-based NSM 2120s -- renamed HP StorageWorks P4000s earlier this year -- took approximately six hours, according to Fowlie. Three of the NSM 2120s/P4000s formed a single cluster in North Austin, and the other NSM 2120/P4000 was set up for disaster recovery 13 miles away in South Austin. Data is asynchronously replicated between the two sites over a 30 Mbps network.
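
For a rough sense of what that 30 Mbps inter-site link implies, here's a back-of-the-envelope Python sketch of how long a given replication delta takes to cross it. Only the 30 Mbps figure comes from TASB's setup; the delta size and link-utilization numbers are hypothetical.

```python
# Back-of-the-envelope replication math for a 30 Mbps inter-site link.
# Only the 30 Mbps figure comes from the article; the 80% sustained
# utilization and the 10 GB delta are assumed values for illustration.

LINK_MBPS = 30                  # TASB's inter-site WAN link
MB_PER_SEC = LINK_MBPS / 8      # 30 megabits/s == 3.75 megabytes/s

def replication_time_seconds(changed_mb: float, utilization: float = 0.8) -> float:
    """Time to ship `changed_mb` of changed data at the assumed utilization."""
    return changed_mb / (MB_PER_SEC * utilization)

# A hypothetical 10 GB delta needs close to an hour at 30 Mbps.
hours = replication_time_seconds(10 * 1024) / 3600
print(f"{hours:.1f} h")   # -> 0.9 h
```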

The board made the cutting-edge choice to deploy 10 Gigabit Ethernet (10 GbE) within the North Austin cluster, a decision that factored into the only minor glitch during the initial phase of the implementation. The units had built-in Gigabit Ethernet connections, and plugging in 10 GbE cards changed the default interface for all connections, Fowlie recalled. The problem was easily fixed, and TASB was off and running.

Objective: Synchronous replication

In July 2009, TASB upgraded to a 100 Mbps network en route to its true objective of shifting to a Gigabit Metro Ethernet network so it could move to synchronous replication between its two data centers. The September 2009 upgrade to the Gigabit Ethernet network dovetailed with the purchase of two HP LeftHand P4500 storage systems, one for North Austin and one for South Austin.

TASB virtualized all of its networked storage into a single pool during the second phase of the implementation, which turned out to be far more challenging than the initial implementation. The IT staff connected two P4000s and a P4500 in North Austin and mirrored the cluster with two P4000s and a P4500 in South Austin.

With mirror sites, there would be no cutover in the event of a disaster, which TASB liked. The P4000 and P4500 would allow two-way replication between geographically dispersed systems, and storage virtualization would let the IT team view and manage the storage as a single pool.

"Instead of having to pick and choose volumes, we could set it up as a single storage cluster and then just ignore it and let it take care of managing the data replication and the size of the volumes," Fowlie said. "Every bit that is written to the three units [in North Austin] is at the same time written to the three units down in South Austin."

But performance started to drag at the end of October, and the IT team spent more than three months trying to identify and fix the problem. TASB wanted its servers to connect to the clustered storage systems in North Austin, which would then replicate to the units in South Austin, but it eventually learned that the servers were talking to all six storage systems.

Working closely with an HP/LeftHand engineer, the IT staff isolated the problem to an optional plug-in called the Device Specific Module (DSM) for Multipath IO (MPIO).

Fowlie said TASB had been told that if it created two logical groupings of nodes, the DSM would intelligently hand off data to the local nodes, which would take care of the replication; in addition, the system would retrieve data only from the local host.

"But it didn't quite work out that way," he said.

Chris McCall, a product marketing manager in HP's StorageWorks Division, said with clustered storage systems such as the P4000/P4500, a block of data is written to one storage node, the next block goes to the next node and so on, until all of the data is equally distributed. When an application requests a block from the volume, the storage system has to know where that block is.

"Because six nodes are operating as a single entity, that request will generally hit one of the nodes, and five out of six times, the data that request needs is not on one of those nodes," he said. "So, that request is relayed to the correct node that contains the data, and then that node responds back to the application. There's a hop that occurs within the cluster.

"DSM takes the map of where all the blocks exist on all of the nodes in the cluster, and it puts that map on the application server, so that when the application server wants something, it's going to request directly to the node that has it," McCall continued. "It eliminates that hop, and it makes performance linearly scalable."

McCall said he wasn't familiar with the TASB problem, but sophisticated multi-site setups such as the one at TASB require adherence to a number of best practices. For instance, latency between sites should be no more than 2 milliseconds; beyond that, there will be performance issues, he cautioned.
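
The 2 ms guideline follows from the arithmetic of synchronous writes: every acknowledged write waits out at least one round trip to the remote site. A quick hypothetical calculation (the local service time, and treating the 2 ms figure as one-way latency, are assumptions):

```python
# Why inter-site latency caps matter for synchronous replication: each
# acknowledged write pays at least one round trip to the remote site.
# The 0.5 ms local service time is a hypothetical figure; the article's
# 2 ms guideline is treated here as one-way latency.

LOCAL_WRITE_MS = 0.5

def sync_write_latency_ms(one_way_ms: float) -> float:
    return LOCAL_WRITE_MS + 2 * one_way_ms   # local write + round trip

for one_way in (0.5, 2.0, 10.0):
    print(f"{one_way:>4} ms one-way -> {sync_write_latency_ms(one_way):.1f} ms per write")
# 0.5 ms one-way -> 1.5 ms; 2.0 ms -> 4.5 ms; 10.0 ms -> 20.5 ms,
# i.e. ~40x the local service time once latency is well past the guideline.
```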

Over the course of a weekend, TASB solved its problem by disabling the DSM agents on its Windows servers in favor of the native Windows multipathing software, MPIO. The servers now talk only to the three North Austin-based storage systems, which replicate synchronously with the three-node cluster in South Austin. TASB's Fowlie said he recently learned of a management software fix for the issue, but the team hasn't yet explored that option.

With the glitch solved, Fowlie said he's been happy with the system. TASB now has 18 physical servers, all running multiple virtual machines (VMs) through Microsoft's Hyper-V technology, that make use of about 20 TB of the approximately 24 TB available with its six P4000/P4500 systems. The IT department plans to add two more units at the start of the next fiscal year, according to Fowlie.

"Personally, I couldn't imagine implementing any SAN without a layer of [virtualized storage] on it," Fowlie said. "Maybe we were spoiled having that right out of the box, but I can't see any benefit to not having it. It makes our lives so much easier."
This was first published in June 2010
