At this point, probably everyone has heard that Fibre Channel over Ethernet (FCoE) is a new protocol about to hit the market, but maybe you haven't had time to find out what it's all about yet.
The reason I talked about iSCSI and not IB is that most people tend to compare FCoE to iSCSI, although FCoE's target market is not iSCSI but Fibre Channel. IB has always been faster than FC and Ethernet, providing higher bandwidth and very low latency, and that will most likely continue for the foreseeable future. One of IB's value propositions has been, and still is, I/O consolidation; however, that value proposition came with prohibitive costs, and not just hardware costs but services costs as well. IB has primarily found a home in the HPC market, although even there it comprises only about 30% of the installed base, with the vast majority being Ethernet, followed by FC. I don't know whether IB will make a dent in the commercial market in proportions similar to FC or Ethernet, but if history repeats itself (e.g., HiPPI, FDDI) the answer is most likely no. That said, from a maturity perspective IB has a customer base now, whereas FCoE has none. This certainly plays in IB's favor, and I fully expect IB supporters to start making more and more noise, hoping to win over some of the FC defectors.
I agree. Outside of HPC shops, the midsize enterprise market isn't even considering InfiniBand deployments.
Despite what the IBTA adoption forecast states, IB (as well as Quadrics, Myrinet, etc.) is a mere blip on the radar. 99% of mainstream applications don't require ultra-low latency (less than 1 microsecond), not to mention the ultra-high price tag.
I suspect that very few (if any) FCP defectors will join the ranks of IB.
It's interesting that IB frequently comes up in discussions about the future of data center fabrics. It definitely has a firm foothold in environments where grids of compute nodes need (or want) the lowest possible message-passing latency. But it is almost entirely a CPU-to-CPU network. Very few storage devices support a native IB connection (NetApp does have one available on special request). The main IB network providers, Voltaire, Cisco (Topspin), and QLogic (SilverStorm?), all have gateways that allow an IB fabric to connect to an FC or Ethernet fabric, and the IB protocols support tunneling of storage protocols (SRP?) or Ethernet over IB. A full-blown management plane for storage over IB never really emerged either, in part because of the complexities of end-to-end visibility through the gateways. So native IB storage, and therefore a fully converged data center fabric based on IB, are very unlikely to gain broad adoption.
IB is on track to always be faster than Ethernet and will continue to be the fabric of choice for the most tightly coupled grids. It's a fair bet that the IB fabric companies will offer an FCoE port on their gateways and do the software work to allow either FC or Ethernet traffic to tunnel over IB. But in the long run it won't be a viable replacement for FC or Ethernet storage fabrics the way FCoE will.
Thanks for taking the time out of your busy day to post to the communities. I've enjoyed your recent blog on FCoE, and I think you're right: FCoE is indeed set to grab the lion's share of the FC-attached application market. In fact, we've already fielded some customer requests for FCoE today.
For me, it's the traffic control aspects of FCoE that will be useful. For now, iSCSI and NFS will be more than ample for our infrastructure, but if we start pushing our 10 Gb connectivity (probably very unlikely unless the company grows hugely), then I would need to understand this technology more deeply.
It will be interesting to see how the new Cisco switches work in this technology arena; the new virtual versions of this switch should make for some interesting future decisions...
Today NetApp, along with VMware and Cisco, announced a Joint Secure Multitenancy Architecture that combines leading technologies such as Data ONTAP, Cisco UCS, and VMware, along with best practices, to enable a fully dynamic and virtualized data center.