Simply put, RDMA allows a device to use the network pipe fully, and it does this in a few ways (iWARP, RoCE, and InfiniBand are the implementations you will run into).
Neither the concept nor the methods are new; consider the following quote:
"RDMA implements a Transport Protocol in the NIC hardware and supports Zero-Copy Networking, which makes it possible to read data directly
from the main memory of one computer and write that data directly to the main memory of another computer. RDMA has proven useful in apps..."
-This quote was written in 2005 (http://searchstorage.techtarget.com/definition/Remote-Direct-Memory-Access), and it has been repeated at every tech session since as if it's a revelation.
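To put that quote in concrete terms, here is a minimal sketch (my own illustration, not production code) of what the zero-copy path looks like at the verbs layer that iWARP, RoCE, and InfiniBand NICs all expose. The helper names `register_buffer` and `post_rdma_write` are mine, and the connection setup (creating the queue pair and exchanging the remote address and rkey) is assumed to have happened elsewhere; the point is only that the data transfer itself never touches a socket buffer or the remote CPU.

```c
/* Sketch of the RDMA zero-copy data path using libibverbs.
 * Build (assuming rdma-core is installed): gcc -c rdma_write.c -libverbs */
#include <stdint.h>
#include <stddef.h>
#include <infiniband/verbs.h>

/* Register a buffer so the NIC can DMA directly to/from user memory. */
static struct ibv_mr *register_buffer(struct ibv_pd *pd, void *buf, size_t len)
{
    return ibv_reg_mr(pd, buf, len,
                      IBV_ACCESS_LOCAL_WRITE |
                      IBV_ACCESS_REMOTE_WRITE |
                      IBV_ACCESS_REMOTE_READ);
}

/* Post one RDMA WRITE: the local NIC reads the registered buffer from our
 * memory and the remote NIC places it at remote_addr, with no copy into
 * kernel buffers and no involvement from the remote CPU.
 * remote_addr and rkey are exchanged out of band during connection setup. */
static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                           uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)mr->addr,
        .length = (uint32_t)mr->length,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,
        .send_flags = IBV_SEND_SIGNALED,
    };
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr); /* 0 on success */
}
```

The same verbs code runs over any of the three transports; what differs is the wire protocol underneath and what the switches have to do to keep it fast, which is exactly where the vendor arguments start.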
And the problem is that the misinformation prevails because the RDMA vendors selling the hardware have a vested interest in their own bet and take shots at each other publicly.
One of the few voices out there you would expect to be independent is Microsoft, since they support all three, but
consider that Microsoft has a few different positions here as well. The developer side of Microsoft wants to support all three with the least common
denominator of features, in which case they write to iWARP. The Production and Operations side of the house, however, wants some of the advanced features like lossless fabrics and PFC, while the group that builds impressive demos and marketing material loves to tout maximum performance without regard to actual datacenter needs.
Let's look at some realistic numbers. These are averages from CDW and NewEgg:
| Type | Protocol | Speeds | Switch Requirements | Routable | Cost (NIC + Switch) per port |
|------|----------|--------|---------------------|----------|------------------------------|
| RDMA | iWARP | 10g | No requirements, but will work on DCB | Yes | $900 (+$275) |
| Ethernet | 10g E | 10g | No requirements, but will work on DCB | Yes | $400 (+$275) |
| FC | FC | 8g-->16g | Fibre Channel | Yes | $800-->$1600 (+$500) |
You probably also have a few requirements when it comes to your servers, the first of which is reliability. You likely require multiple connections
from the server to your production network, multiple connections to your storage, and multiple connections to your peer servers.
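As a rough worked example using the per-port figures above (my own back-of-the-envelope math, not a quote): a redundant pair of ports runs roughly 2 x ($900 + $275) = $2,350 per server for iWARP, 2 x ($400 + $275) = $1,350 for plain 10g Ethernet, and 2 x ($800 + $500) = $2,600 at the low end for Fibre Channel, and those deltas multiply quickly across a rack of servers.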
You may find that the performance gap between the two sample designs above can be closed in favor of the simpler design by simply stepping up to the next faster processor, or adding slightly more system memory. People generally underestimate what well-designed 10g Ethernet on DCB switching can do.
Do you want to deploy Converged Ethernet switching knowing that it will support both iWARP and RoCE, or purchase non-CE Ethernet switches as well as InfiniBand switches to support it? If you are currently deploying iSCSI using software-based initiators, you may find a significant CPU reduction by moving to an FCoE-type connection, since all of that work is then offloaded to the FCoE card.
I really want to hear your experiences with RDMA. Have you had good, neutral, or bad experiences with it in a production environment?