With the advent of hyper-converged, virtual SAN and server-side storage architectures and topologies, especially in connection with server virtualization, many companies now have to re-learn old lessons about caching. In theory, virtual SANs and the rest reflect a scale-out requirement introduced by the proliferation of virtualized workloads moving at will between physical hosting platforms for purposes of load balancing or high availability. Advocates claim SANs and other centralized storage repositories cannot respond to changing workload requirements with sufficient agility, so a more distributed alternative (that is, direct-attached storage with replication) is required.
Truth be told, virtualizing centralized storage can actually meet the specialized requirements of virtualized workloads quite adequately. Early storage virtualization pioneer DataCore Software developed, nearly a decade ago, a strategy for "adaptive caching" that continuously adjusts I/O speeds and latencies to meet the needs of applications (whether virtual or not).
According to DataCore Software's director of product marketing, Augie Gonzalez, "Adaptive caching technology senses the behavior of mixed I/O patterns arriving from virtualized servers and selects the optimal algorithm for each virtual disk, cognizant of the characteristics of the back-end storage pool. In environments using virtual SANs [vSANs], the caching takes place right next to the app, tapping into surplus CPU and DRAM resources on the same server, without paying the tax to go out over the network. DRAM caching is an order of magnitude faster than flash."
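DataCore's actual caching algorithms are proprietary, but the idea Gonzalez describes (sense the I/O pattern arriving at each virtual disk, then select a suitable caching policy) can be illustrated with a minimal sketch. The policy names, thresholds and classification heuristics below are invented for illustration only:

```python
from collections import deque

class AdaptiveCache:
    """Illustrative per-virtual-disk cache that picks a policy from
    recent I/O behavior. Policies and thresholds are assumptions for
    this sketch, not DataCore's implementation."""

    def __init__(self, window=100):
        self.recent = deque(maxlen=window)  # rolling (op, lba) history

    def record_io(self, op, lba):
        self.recent.append((op, lba))

    def choose_policy(self):
        if not self.recent:
            return "lru"
        writes = sum(1 for op, _ in self.recent if op == "write")
        write_ratio = writes / len(self.recent)
        # Treat the stream as sequential if most consecutive LBAs are adjacent
        lbas = [lba for _, lba in self.recent]
        seq = sum(1 for a, b in zip(lbas, lbas[1:]) if b == a + 1)
        seq_ratio = seq / max(1, len(lbas) - 1)
        if write_ratio > 0.6:
            return "write-coalescing"   # batch writes in DRAM before destaging
        if seq_ratio > 0.7:
            return "read-ahead"         # prefetch the next blocks
        return "lru"                    # default for random read workloads
```

A sequential read stream would steer this toward read-ahead, while a write-heavy virtual disk would get write coalescing in DRAM.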
DataCore offers various implementation alternatives that can leverage both distributed storage assets and centralized storage pools, with auto-tiering orchestrating the allocation of capacity and performance. According to Gonzalez, "Through DataCore auto-tiering, additional caching can occur when combining vSANs with existing SANs, since the external arrays that supplement the internal server storage also benefit from inexpensive acceleration right next to the apps."
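Auto-tiering of the kind Gonzalez describes boils down to tracking which blocks are hot and keeping them on the fastest tier. The following sketch is a generic illustration of that mechanism; the tier names, capacity and promotion rule are assumptions, not DataCore's design:

```python
from collections import Counter

class AutoTier:
    """Minimal auto-tiering sketch: the N most frequently accessed
    blocks live on the fast tier; everything else stays on the
    capacity tier. Illustrative only."""

    def __init__(self, fast_capacity=2):
        self.fast_capacity = fast_capacity
        self.hits = Counter()     # per-block access counts
        self.fast_tier = set()

    def access(self, block):
        self.hits[block] += 1
        self._rebalance()

    def _rebalance(self):
        # Promote the hottest blocks; cold blocks implicitly demote.
        hottest = {b for b, _ in self.hits.most_common(self.fast_capacity)}
        self.fast_tier = hottest

    def tier_of(self, block):
        return "fast" if block in self.fast_tier else "capacity"
```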
So, it is not at all certain that existing SANs must be ripped and replaced to facilitate the current fascination with virtual servers and hypervisors.
That said, many firms are pursuing new IT implementations where SANs are not a preexisting fact. They are intrigued by the possibilities of leveraging inexpensive server-side commodity storage united and coordinated by a software layer such as the storage microkernels of their preferred server hypervisor, or third-party vSAN software such as StarWind Software's Virtual SAN. (StarWind lays claim to delivering the first vSAN technology to market over a decade before it became popularized by VMware or Microsoft.)
Orchestrating disk resources, however, is not the same as orchestrating memory-based cache. And delivering on the promise of high availability that is part of the core value case for virtualization requires careful coordination of storage and cache resources. According to StarWind's director of technical marketing, Anton Kolomyeytsev, distributed cache is key to making vSANs workable and highly available. "For the workload to be able to transition from one physical host to another and to start 'hot,' data must be cached at each node that may support the workload," he said.
StarWind's Virtual SAN mirrors RAM between nodes, absorbing written blocks on one server and replicating them to the RAM of a peer server. "In fact, this is a multilevel cache, since we use dynamic RAM as an L1 cache and flash [PCIe preferred] as an L2 cache," said Kolomyeytsev. This strategy, he noted, is different from that of Microsoft and VMware, which use flash to absorb writes, accelerating wear on the flash device.
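The write path Kolomyeytsev describes can be modeled in a few lines: writes land in local RAM (L1), are synchronously mirrored to a peer node's RAM, and only reach flash (L2) when destaged. This is a simplified model of the concept, not StarWind's implementation; capacities and the destage policy are invented:

```python
class Node:
    """Sketch of one vSAN node with a two-level cache: a small RAM
    (L1) cache mirrored to a peer, and a flash (L2) cache that only
    receives destaged blocks. Illustrative assumptions throughout."""

    def __init__(self, l1_capacity=4):
        self.l1 = {}                  # RAM cache: block -> data
        self.l2 = {}                  # flash cache
        self.l1_capacity = l1_capacity
        self.peer = None              # partner node for HA mirroring

    def write(self, block, data):
        self.l1[block] = data
        if self.peer is not None:
            self.peer.l1[block] = data  # synchronous RAM-to-RAM mirror
        if len(self.l1) > self.l1_capacity:
            self._destage()

    def _destage(self):
        # Move the oldest RAM-resident block to flash; flash sees far
        # fewer, larger writes than a write-through design would.
        victim = next(iter(self.l1))
        self.l2[victim] = self.l1.pop(victim)

    def read(self, block):
        if block in self.l1:          # L1 hit: DRAM speed
            return self.l1[block]
        return self.l2.get(block)     # L2 hit: flash speed
```

Because the peer holds a current copy in its own RAM, a workload migrating to the peer can start "hot," which is exactly the high-availability property described above.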
StarWind suggests that the best way to avoid having to scale memory resources is to use them smartly in the first place. The company uses inline deduplication with small 4 KB blocks to increase effective RAM capacity, reduce load and increase IOPS for the flash cache. Microsoft, by contrast, uses offline deduplication and wears out flash more quickly, while VMware uses no deduplication at all.
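Inline deduplication at 4 KB granularity means each block is hashed on the write path and stored only once, with duplicates becoming cheap references. A minimal sketch of that mechanism follows; the hash choice and data structures are illustrative assumptions:

```python
import hashlib

BLOCK_SIZE = 4096  # 4 KB blocks, as in the approach described above

class DedupStore:
    """Sketch of inline deduplication: hash each block on write and
    keep only one physical copy per unique digest. Illustrative,
    not StarWind's implementation."""

    def __init__(self):
        self.blocks = {}   # digest -> block data (stored once)
        self.refs = {}     # digest -> logical reference count

    def write_block(self, data):
        assert len(data) <= BLOCK_SIZE
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:
            self.blocks[digest] = data          # only first copy uses RAM
        self.refs[digest] = self.refs.get(digest, 0) + 1
        return digest

    def read_block(self, digest):
        return self.blocks[digest]

    def dedup_ratio(self):
        logical = sum(self.refs.values())       # blocks written
        return logical / max(1, len(self.blocks))  # vs. blocks stored
```

Writing the same 4 KB block repeatedly consumes RAM only once, which is how dedup stretches cache capacity and spares the flash tier redundant writes.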