Storage predictions for 2015: The future looks bright!
In 2015, infrastructure convergence will become the new norm in data centers. IT vendors have been hyping "hyper-converged" solutions as the core of their value proposition for reducing costs and simplifying IT roll-outs, since these solutions provide modular building blocks for the new data center. The enabling technology behind them is virtualization, which provides intelligent abstraction. Server virtualization is now the norm, and it has become a driving force behind converged infrastructure for both networks and storage.
Virtual servers are killing off traditional SAN storage. Blade servers and virtual servers must share CPU hardware and storage interconnects to be cost effective, and that sharing is forcing storage to move closer to, or even back into, the servers. Because virtual servers share compute and memory resources, including the input/output connectivity for both network and storage, they increase the need for lower latency and better connectivity, along with higher bandwidth, on those shared resources.
As the adoption of modular building blocks for the converged data center continues, the traditional host bus adapter (HBA), currently used for Fibre Channel block I/O in a SAN, will be replaced with either host channel adapters (HCAs) or converged network adapters (CNAs). These adapters carry multiple protocols over a single link, and because multiple links can be combined, they will deliver some astounding performance benefits for storage in 2015.
Just like Fibre Channel (FC), InfiniBand (IB) has been around for years, but its adoption has stayed mainly within high-performance computing (HPC), scientific, grid-computing, and multi-node high-performance cluster environments, where databases and big data are king. IBM has long used InfiniBand as the internal interconnect in its large P-series servers, and Oracle and others have now adopted it as the interconnect within their high-end converged solutions.
Most blade server vendors also use either IB or backplane crossbar switches as the internal interconnect in their blade systems. According to the InfiniBand Trade Association (IBTA), 12x enhanced data rate (EDR) links will deliver speeds of 300Gb per second or more in the 2015 timeframe. The InfiniBand generations are:
- SDR - Single Data Rate
- DDR - Double Data Rate
- QDR - Quad Data Rate
- FDR - Fourteen Data Rate
- EDR - Enhanced Data Rate
- HDR - High Data Rate
- NDR - Next Data Rate
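As a rough illustration of how these generations scale, the sketch below multiplies per-lane signaling rates by link width. The per-lane figures are approximate published IBTA numbers (an assumption here, not from the article), but they show how a 12x EDR link reaches the ~300Gb/s the IBTA cites:

```python
# Approximate per-lane signaling rates in Gb/s for each InfiniBand
# generation (published IBTA figures; illustrative, not authoritative).
LANE_RATE_GBPS = {
    "SDR": 2.5,
    "DDR": 5,
    "QDR": 10,
    "FDR": 14,   # "Fourteen Data Rate" is ~14.06 Gb/s per lane
    "EDR": 25,
    "HDR": 50,
    "NDR": 100,
}

def link_rate_gbps(generation: str, lanes: int = 4) -> float:
    """Aggregate signaling rate for a 1x, 4x, or 12x link."""
    return LANE_RATE_GBPS[generation] * lanes

# A 12x EDR link lands on the ~300 Gb/s figure cited by the IBTA.
print(link_rate_gbps("EDR", lanes=12))  # 300.0
```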
In 2015, I foresee next-generation Ethernet and InfiniBand network adapters making further inroads as the topology of choice for storage I/O. Having fast SSD storage is silly if the interconnect between the devices is slower than the storage itself. As an example, current 10Gb Ethernet can only move about 1.25GB/s (large B for bytes), which translates to only about 4.5TB/h (terabytes per hour). As a rule of thumb, you can average about 100MB/s of real throughput for every 1Gb/s of bandwidth (assuming 8 bits per byte plus typical network hops and overhead). Heck, my laptop's solid state disk (SSD) can do sustained reads at over 500MB/s, which would saturate about half of a 10Gb network just to back up that one drive.
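The back-of-the-envelope math above can be sketched in a few lines (the raw-rate conversions, ignoring protocol overhead):

```python
def gbps_to_gigabytes_per_sec(gbps: float) -> float:
    """Convert a line rate in gigabits/s to gigabytes/s (8 bits per byte)."""
    return gbps / 8

def gbps_to_terabytes_per_hour(gbps: float) -> float:
    """Raw throughput in terabytes per hour, ignoring protocol overhead."""
    return gbps_to_gigabytes_per_sec(gbps) * 3600 / 1000

print(gbps_to_gigabytes_per_sec(10))   # 1.25 GB/s on a 10Gb link
print(gbps_to_terabytes_per_hour(10))  # 4.5 TB/h at that raw rate
```

With the ~100MB/s-per-1Gb/s rule of thumb, a 10Gb link yields roughly 1GB/s of usable throughput, so a single SSD reading at 500MB/s does indeed consume about half of it.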