Networking at NetApp: Choosing a Cluster Interconnect


By Frank Pleshe, Technical Marketing Engineer, and Philip Trautman, FAS Product Marketing, NetApp

Part 2 of a multi-part series on storage networking at NetApp

The first article in this series looked at the many contributions that NetApp has made to storage networking over the years. This time, we'll focus on the network technology at the heart of NetApp scale-out storage: the cluster interconnect.

Many interconnect options exist, but NetApp selected – and has had great success with – a low-latency, non-blocking Ethernet infrastructure that delivers: 

  • Performance with a clear upgrade path
  • Flexibility and compatibility
  • Simplicity and low cost

Performance with a Clear Upgrade Path

In a FAS scale-out cluster, all controllers or “nodes” are joined in a redundant, switched 10GbE fabric. (A two-node configuration can operate in a simpler, switchless configuration.) When a node receives a request for data that resides on another node, that data must traverse the cluster interconnect to satisfy the request, so interconnect bandwidth and latency are critical to overall cluster performance.
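
To see why that matters, here's a back-of-the-envelope model of blended request latency, a rough Python sketch with illustrative numbers rather than measured FAS figures:

    # Rough model of average request latency in an N-node cluster.
    # All latency values are illustrative assumptions, not FAS measurements.
    def blended_latency_ms(nodes, local_ms, interconnect_hop_ms):
        """Average latency when requests land on a random node."""
        remote_fraction = (nodes - 1) / nodes       # data usually lives on another node
        remote_ms = local_ms + interconnect_hop_ms  # remote access adds one interconnect hop
        return (1 - remote_fraction) * local_ms + remote_fraction * remote_ms

    # With 24 nodes, ~95.8% of uniformly distributed requests are remote,
    # so interconnect latency dominates the blended average.
    print(blended_latency_ms(nodes=24, local_ms=1.0, interconnect_hop_ms=0.2))

Because nearly every request pays the interconnect hop, shaving even fractions of a millisecond off that hop moves the cluster-wide average almost one-for-one.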

Proven results

Frankly, some people doubted that Ethernet could deliver the low latency required, but NetApp's published performance results bear out how well it works in practice. Consider the 24-node FAS6240 SPECsfs result published in 2011, in which the test environment was left completely unoptimized.

Test clients requested data from any cluster node, which meant that on average 23 out of 24 requests were for off-node data. Despite that, the test achieved an overall response time of 1.53 milliseconds at a throughput of over 1.5 million SPECsfs2008_nfs.v3 ops/sec. Those remain highly competitive numbers even after three years. In fact, the cluster behaved very much like a standalone storage controller, with much lower latency than competing scale-out systems.
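
The arithmetic behind those figures is easy to check; this quick sketch assumes the load was spread evenly across the cluster:

    # Sanity-check the published SPECsfs figures (assumes an even load split).
    nodes = 24
    total_ops = 1_500_000                # SPECsfs2008_nfs.v3 ops/sec from the result

    print((nodes - 1) / nodes)           # 0.958... -> on average 23 of 24 requests are off-node
    print(total_ops / nodes)             # 62500.0 ops/sec handled per node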

Compelling roadmap

When greater bandwidth is needed, we have the option to simply add more 10GbE links. Today, FAS8020 nodes use two cluster connections per node, while FAS8040 and FAS8060 nodes use two or four. And, of course, Ethernet has a very compelling roadmap: 40 Gigabit Ethernet (40GbE) is starting to ship from network vendors now and will be widely available next year, with 100 Gigabit Ethernet (100GbE) not far behind.
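
In concrete terms, here's the nominal per-node interconnect bandwidth each option provides. This is a simple sketch: speeds are raw line rates that ignore protocol overhead, and the 40GbE entry is hypothetical, included only to illustrate the upgrade path:

    # Nominal cluster interconnect bandwidth per node (raw line rate).
    # Link counts follow the shipping configurations described above;
    # the 40GbE configuration is an assumption for illustration only.
    configs = {
        "FAS8020, 2 x 10GbE": (2, 10),
        "FAS8040/8060, 2 x 10GbE": (2, 10),
        "FAS8040/8060, 4 x 10GbE": (4, 10),
        "Hypothetical future, 2 x 40GbE": (2, 40),
    }

    for name, (links, gbps) in configs.items():
        print(f"{name}: {links * gbps} Gb/s per node")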

Flexibility and Compatibility

Another important advantage of Ethernet as a cluster interconnect is flexibility. The vast majority of existing NetApp FAS systems in the field are cluster-capable. When the time is right, existing 7-Mode systems can be migrated into a scale-out cluster and gain the full benefits of clustered Data ONTAP, including nondisruptive operations. The May issue of Tech OnTap will feature an article on 7-Mode to clustered Data ONTAP migration, so keep an eye out for it.

Eliminate forklift upgrades

Ethernet also gives us the flexibility to join together nodes of different capabilities and different generations. Most scale-out storage requires that nodes be very closely matched, which limits upgrade options. By mixing generations of FAS nodes, you can build a storage environment that never needs a forklift upgrade: new-generation nodes are added and older nodes (and media) are retired as needed, all without disrupting data access.

Available everywhere

Ethernet is well understood around the globe, and the Ethernet switches we use for the cluster interconnect are available everywhere, giving FAS scale-out clusters excellent supportability.

Simplicity and Low Cost

Because Ethernet is ubiquitous, using it as an interconnect requires no special drivers and no custom ASICs that would add to system complexity and cost. Unlike other possible interconnects, Ethernet components are available from a variety of sources, so we're not locked into a single supplier. (As you might expect, single-source components tend to be more expensive.) Our cluster interconnect uses the same network stack that we use for client and host connections, a stack we've been optimizing for many years. This means it is fully proven, reliable, and supportable.


As a practical matter, it's important to remember that in this instance Ethernet serves as a cluster interconnect, not a general-purpose cluster network. Non-cluster traffic is not allowed, and NetApp carefully specifies the supported switches to make sure they fulfill our requirements: standard, off-the-shelf components that are low latency, non-blocking, well supported, and widely available.

Next time, we’ll dig a bit deeper into clustered Data ONTAP networking concepts. Leave us a comment if there’s a storage networking topic you’d like to hear more about.

Reader comment, posted 2015-05-27 05:39 AM:

Thanks for your article.

Is there any recommendation about the maximum distance between nodes on the cluster network?

Is it possible to separate the nodes across buildings (Ethernet) or datacenters (Fibre Channel)? For example, one HA pair per building, or per datacenter in different cities.