Hyper-V Storage Enhancements & What They Mean for Users – Part 3: Cost Comparison of SMBv3 vs SAN Expansion

By Lawrence Garvin, Head Geek, SolarWinds®, Virtualization & Storage Management

This is the final part of a three-part blog series. Also refer to the earlier blogs:
Part 1: Overview
Part 2: File Services Enhancements

Historically, the optimal storage methodology for large-scale virtualization environments has been the Storage Area Network (SAN). Extremely small-scale environments could function with internal direct-attached storage (DAS). Mid-size implementations have typically been forced into one of two less-than-optimal solutions: (1) an expensive SAN implementation, or (2) an inefficient host-based internal storage solution, sometimes even overprovisioning host servers just to get sufficient disk resources to support the number of virtual machines that the CPU and RAM could handle.

A Minimal SAN Implementation
Let’s take a look at a minimal, but functional, SAN implementation:
- 12-drive chassis with twelve 2TB SATA-III drives
- Dual storage controllers
- Two 24-port 16Gb Fibre Channel switches
- Two 16Gb Fibre Channel host bus adapters (HBAs) for each host
The above implementation is sufficient to support physical connections for about 20 host systems. Unfortunately, because those 20 host systems are likely virtualization hosts, the existing disk subsystem cannot possibly support the expected disk load from the 200+ virtual machines that could be hosted on those 20 servers.

A dozen drives will give us about 1,600 IOPS of capacity, which is enough for 50-60 VDI VMs (which could actually reside on a single well-provisioned host). The number of server OS VMs would be lower, depending on the particular purpose of those servers. The minimal SAN configuration above would cost about $25,000 (USD) to start, plus about $3,000 per host for the HBAs. You’ll want at least two hosts for redundancy, so assume a minimum of about $31,000 to start.
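
If you want to check that math, here’s a quick sizing sketch (the per-drive IOPS figure is my assumption for a typical 7,200rpm SATA drive, not part of the spec above):

```python
# Rough sizing sketch for the 12-drive SATA array described above.
# The per-drive and per-VM IOPS figures are planning assumptions,
# not measured values.

DRIVE_IOPS = 130      # a 7,200rpm SATA drive delivers roughly 130 IOPS
DRIVE_COUNT = 12
VDI_VM_IOPS = 30      # planning figure for a single VDI VM

array_iops = DRIVE_COUNT * DRIVE_IOPS         # ~1,560 IOPS, i.e., "about 1,600"
vdi_vm_capacity = array_iops // VDI_VM_IOPS   # ~52 VMs, in the 50-60 range

print(f"~{array_iops} IOPS total, enough for ~{vdi_vm_capacity} VDI VMs")
```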

When enough virtual machines have been added to the SAN to max out the capacity of the 12-drive array, plan on another $10,000 to add a second dozen drives, plus $3,000 in HBAs for each additional host.
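
Putting those numbers together, the SAN cost curve looks roughly like this sketch, using the round figures cited above:

```python
# SAN cost model built from the round figures cited above.
SAN_BASE = 25_000         # chassis, drives, controllers, and FC switches
HBA_COST_PER_HOST = 3_000 # pair of 16Gb FC HBAs
DRIVE_SHELF = 10_000      # each additional dozen drives

def san_cost(hosts, drive_shelves=1):
    """Total outlay: base SAN, per-host HBAs, extra drive shelves."""
    return SAN_BASE + hosts * HBA_COST_PER_HOST + (drive_shelves - 1) * DRIVE_SHELF

print(san_cost(hosts=2))                   # 31000 -- the two-host minimum
print(san_cost(hosts=4, drive_shelves=2))  # 47000 -- after one growth step
```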

Option Two: File Server Clusters
Let’s look at an alternative option that doesn’t involve a SAN, starting with a two-node file server cluster running Windows Server 2012. With the new features described in the previous article, we can actually provide disk performance that matches that of the 12-drive 16Gb SAN described earlier, but for much less initial cost. Furthermore, the incremental cost of adding VM capacity also comes in much smaller chunks.
As a comparative example, an HP DL580 G4 with eight internal drives is a bit under $9,000, and a pair of them is less than $18,000. Leaving two drives on each node for the host OS, we have six drives per node available for storage, giving us the same dozen data drives, with the same I/O capacity, as the SAN.

Gigabit Ethernet switches are now about $1,000, rather than $7,000 per Fibre Channel switch. A pair of four-port Gigabit adapters is about $500 per host. While this provides only half the bandwidth of the 16Gb Fibre Channel SAN, bandwidth will rarely be the bottleneck when you consider how many $10,000 drive arrays it would take to saturate a 16Gb channel. As a comparison, the 50 VDI VMs at 30 IOPS per VM that will max out that 12-drive array require less than 200MB/sec of total bandwidth, assuming 128KB per I/O.
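
The arithmetic behind that bandwidth claim, as a quick sketch:

```python
# Bandwidth consumed by the VDI load that maxes out the 12-drive array.
VM_COUNT = 50
IOPS_PER_VM = 30
IO_SIZE_KB = 128

total_iops = VM_COUNT * IOPS_PER_VM                 # 1,500 IOPS
mbytes_per_sec = total_iops * IO_SIZE_KB / 1024     # ~187.5 MB/sec
gbits_per_sec = mbytes_per_sec * 8 / 1000           # ~1.5 Gb/sec

print(f"~{mbytes_per_sec:.0f}MB/sec (~{gbits_per_sec:.1f}Gb/sec), "
      "well under the 8Gb/sec a pair of four-port GbE adapters provides")
```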

So the initial investment for a two-node cluster comes in at less than two-thirds of the cost of the SAN scenario.
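
Here’s the back-of-the-envelope addition behind that comparison, treating the $1,000 switch figure as the total Gigabit switching cost:

```python
# Back-of-the-envelope totals using the round figures quoted above.
cluster = 2 * 9_000 + 1_000 + 2 * 500   # two servers, GbE switching, NICs
san = 25_000 + 2 * 3_000                # minimal SAN plus two hosts of HBAs

print(f"Cluster: ${cluster:,}  SAN: ${san:,}  ratio: {cluster / san:.0%}")
# Cluster: $20,000  SAN: $31,000  ratio: 65%
```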

Option Three: SAN Investment Cost with Triple the Performance
That’s a nice methodology if you plan to build incrementally, but there’s one last screaming-tech option to consider: a fully SSD-implemented file server. The EchoStreams FlacheSAN2 supports up to 48 SSDs across six 6Gb/sec SAS channels, and with three 54Gb/sec InfiniBand interfaces it has been demonstrated pushing 16GBytes/sec of sustained I/O across the wire. That’s eight to ten times the theoretical maximum throughput of a 16Gb/sec Fibre Channel SAN array, assuming you have enough drives to generate that much utilization. EchoStreams says this machine can do 20GB/sec and 2+ million IOPS!
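
To put that throughput in perspective, a quick conversion sketch (the ~1.6GBytes/sec figure is my approximation of a single 16Gb Fibre Channel link’s encoded data rate):

```python
# Comparing the demonstrated SSD file server throughput with 16Gb FC.
FC16_GBYTES_PER_SEC = 1.6        # a 16Gb FC link carries ~1,600 MBytes/sec
DEMONSTRATED_GBYTES_PER_SEC = 16 # sustained I/O figure cited above

ratio = DEMONSTRATED_GBYTES_PER_SEC / FC16_GBYTES_PER_SEC
print(f"~{ratio:.0f}x the throughput of a single 16Gb FC link")  # ~10x
```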

This third option is also well suited to incremental growth. A pair of these chassis provides capacity for up to 96 SSDs, and you can easily add them a dozen at a time. The implementation cost is comparable to the $25,000 of the entry-level SAN option, but the performance capacity is dramatically higher, and the incremental growth cost is just the cost of additional SSDs, which will almost surely decrease as time goes on.

So, yes… In the old days, a Storage Area Network was what you built for high-performance, ultra-reliable storage. Today, you have less expensive options with equivalent performance for your virtual environment, or much higher-performing options at equivalent expense, based on a technology you already know!
