The challenge for data center operators selecting a high-performance transport technology for their network is striking the right balance between acquisition, deployment, and management costs on one side, and support for high-performance capabilities such as the remote direct memory access (RDMA) protocol on the other.
The iWARP protocol is the open Internet Engineering Task Force (IETF) standard for RDMA over Ethernet, and offers an interchangeable, RDMA verbs-compliant alternative to specialized fabrics such as InfiniBand. iWARP adapters are fully supported within the OpenFabrics Alliance's OpenFabrics Enterprise Distribution (OFED), and applications can typically migrate from InfiniBand to Ethernet with no changes.
With the recent approval of RFC 7306 by the IETF, the iWARP protocol gains a number of features that bring it to parity with the latest InfiniBand capabilities, further enhancing the portability of RDMA applications.
The iWARP Open Standard
The iWARP protocol stack layers RDMA transport functionality on top of TCP/IP, leveraging this ubiquitous stack's reach, robustness and reliability. iWARP traffic is indistinguishable from that of other TCP/IP applications and requires no special support from switches and routers, or changes to network devices. Thanks to hardware-offloaded TCP/IP, iWARP RDMA NICs offer high-performance, low-latency RDMA functionality, and native integration within today's large Ethernet-based networks and clouds.
Initially aimed at high performance computing applications, iWARP is now finding a home in data centers thanks to its availability on high-performance 40G Ethernet NICs and increased data center demand for low latency, high bandwidth, and low server CPU utilization. It has also been integrated into server operating systems such as Microsoft Windows Server 2012 with SMB Direct, which can seamlessly take advantage of iWARP RDMA without user intervention.
RDMA over Ethernet
RDMA has traditionally required deploying a specialized fabric, with the associated acquisition and maintenance expenses, since that fabric typically must run alongside the data center's existing Ethernet network. While InfiniBand is the best-known RDMA interconnect, its performance advantages have traditionally stemmed from advanced physical layers that kept it ahead of Ethernet. With the convergence of high-speed serial link designs across technologies, Ethernet has closed this gap and now follows the same speed curve.
Today, 40Gbps Ethernet and FDR InfiniBand offer practically the same application-level performance, while 100Gbps Ethernet and EDR InfiniBand are arriving on the market at the same time. These advances have made Ethernet a serious contender as an RDMA fabric.
An InfiniBand-based RDMA over Ethernet contender (RDMA over Converged Ethernet or RoCE) has recently been released by the InfiniBand Trade Association (IBTA).
RoCE replaces the physical and MAC layers of InfiniBand with their Ethernet equivalents. However, because it lacks a reliability layer comparable to TCP, it requires "lossless" network operation, i.e., data center bridging equipment with PAUSE enabled throughout the network. Today's RoCE implementations are also not routable, confining a deployment to a single-hop Ethernet subnet, e.g., a rack.