FC vs. FCoE Lab Performance Comparison

The analyst firm Evaluator Group recently completed testing of storage networking connectivity between blade servers and solid-state storage, evaluating Fibre Channel (FC) versus Fibre Channel over Ethernet (FCoE), and published a report. The report was funded by Brocade, a longtime FC vendor. All testing occurred at Evaluator Group labs using a combination of Evaluator Group and Brocade equipment.


The testing focused primarily on network performance and its effect in solid-state storage environments. The goal was to understand the impact of storage connectivity on high-performance enterprise applications as customers adopt solid-state storage, particularly in virtual server environments.

The tested configuration showed the following interesting results, which I’m sure surprised a lot of people:

  • FC provided lower response times as workloads surpassed 80% SAN utilization
    • FC response times were one-half to one-tenth of FCoE response times
  • FC provided higher performance with fewer connections than FCoE
    • Measured FC response times were lower while using 50% fewer connections than FCoE
    • Lower variation in FC results provided more predictable response times
  • FC used 20% to 30% less CPU than FCoE
    • CPU utilization was lower with FC than with FCoE
  • A single 16Gbps FC connection outperformed two 10Gbps FCoE connections, as measured by application latency
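
The response-time numbers above are the kind of thing that is easy to sanity-check on your own gear before taking any vendor-funded report at face value. Below is a minimal, hypothetical Python sketch (my own illustration, not part of the Evaluator Group methodology; the file path is a placeholder) that reports average and tail read latency against a test file. A real comparison would use a tool such as fio or vdbench with direct I/O and controlled queue depths, but the metric being argued about is the same.

    # Rough sketch: measure per-I/O read latency percentiles against a test file.
    # Buffered reads like these will hit the page cache, so treat the numbers as
    # illustrative only; proper benchmarking needs direct I/O and queue-depth control.
    import os, random, statistics, time

    TEST_FILE = "/tmp/testfile.bin"   # hypothetical path; point at a file on the LUN under test
    IO_SIZE = 4096                    # 4 KiB reads
    IO_COUNT = 10000

    size = os.path.getsize(TEST_FILE)
    latencies = []

    fd = os.open(TEST_FILE, os.O_RDONLY)
    try:
        for _ in range(IO_COUNT):
            # pick a 4 KiB-aligned offset at random
            offset = random.randrange(0, size - IO_SIZE) // IO_SIZE * IO_SIZE
            start = time.perf_counter()
            os.pread(fd, IO_SIZE, offset)
            latencies.append((time.perf_counter() - start) * 1000.0)  # milliseconds
    finally:
        os.close(fd)

    latencies.sort()
    print(f"avg {statistics.mean(latencies):.3f} ms")
    print(f"p50 {latencies[len(latencies) // 2]:.3f} ms")
    print(f"p99 {latencies[int(len(latencies) * 0.99)]:.3f} ms")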


The full Evaluator Group report is located here.


Anyone surprised by the results?


Mike McNamara

on 2014-02-12 05:04 PM

I think my favorite part about this was claiming FCoE required more cables and power, when the FC solution had zero Ethernet connectivity, and the UCS solution had full Ethernet and FC (they didn't even need the Brocade switch; they could have plugged directly into the storage array, either native FC or FCoE).

Or where they blamed the additional latency and CPU utilization on software initiators, when they didn't use any (the VIC is a hardware adapter, not a software initiator). They just didn't... know how FCoE worked, apparently.

on 2014-03-03 05:12 AM

I keep seeing NetApp retweet this periodically, so it's worth writing this comment even though several weeks have gone by.

Read the report. If at first things seem odd or strange, it is because they are.

Tony (the first commenter) is too modest to bring it up here, but he wrote a fantastic blog article tearing this "study" apart, as did Dave Alexander. I followed up with my own attempt to explain why I thought this was an embarrassment for anyone who promotes an analyst report after reading only its title.

The reality is that, regardless of the companies or technology involved (disclosure: I work for Cisco), this is a classic example of why customers have every right to distrust vendors' pay-for-play reports.

on 2014-03-17 01:10 PM

I have no clue whether or not the stuff about power or CPU is true, but in our environment we have experimented with moving VMs from our FCoE edge to FC, and the results were a 50% decrease in latency, an end to LUNs that suddenly have dead paths, and lastly better throughput (primarily because it's now on dedicated instead of shared infrastructure).

on 2014-03-17 01:17 PM

Grrrr, don't you just hate it when you copy-paste-submit the wrong stuff :-)  I have no clue whether or not the stuff about power or CPU is true. But in our environment we have experimented with moving VMs from FCoE to FC, and the results were significant: a 30-50% decrease in latency, better throughput (probably because it's now on dedicated rather than shared infrastructure), and lastly an end to LUNs that suddenly have dead paths. I don't care what all kinds of experts say; my conclusion is clear: FC is faster and more reliable than FCoE. The only FCoE implementation I would feel comfortable with would be a dedicated SAN, and then what would be the point of FCoE?