
Unified Connect

Enterprise data centers typically use Ethernet networks for LAN and IP data traffic and separate Fibre Channel (FC) networks for storage area network (SAN) traffic, often alongside specialized cluster interconnects such as InfiniBand. The increased adoption of 10 Gigabit Ethernet (10GbE) in the data center, combined with the availability of Fibre Channel over Ethernet (FCoE) and lossless 10GbE technologies, makes it possible to consolidate Fibre Channel traffic with LAN and IP data traffic on the same Ethernet infrastructure.

Network convergence promises to preserve your existing investments in FC storage, reduce data center costs and complexity, and simplify network management. Because of the clear potential of network convergence to simplify the data center and the high interest in FCoE technology, Tech OnTap has covered the subject in many articles over recent years.

With the introduction of Data ONTAP® 8.0.1, NetApp has completed the final step to enable full network convergence. Our new Unified Connect technology makes it possible to run all your storage protocols across a single wire from servers to switches to storage. This article describes Unified Connect, including what it is, how it works, performance considerations, and best practices.

Figure 1)
The traditional approach versus Unified Connect. Unified Connect simplifies network infrastructure and frees up ports and slots on servers, switches, and storage.

What Is Unified Connect?

Unified Connect is a new software feature that was introduced in Data ONTAP 8.0.1 along with a variety of other important enhancements such as:

  • Data compression
  • 64-bit aggregates
  • DataMotion for Volumes
  • Support for new hardware

You can read more about all the capabilities of the latest release of Data ONTAP 8 in a recent Tech OnTap® article.

NetApp started shipping end-to-end FCoE over 1.5 years ago. This allowed older FC infrastructure to be replaced by 10GbE infrastructure, but still required separate connections for block data and file data. The release of Unified Connect eliminates this last barrier to full convergence, allowing all IP and FCoE network traffic to and from a storage system to share a single wire.

Network convergence with FCoE and Unified Connect offers a number of advantages, including:

  • Up to a 70% reduction in cabling
  • Reduction of storage system port requirements from 12 to 4 for a typical configuration, freeing up ports and/or PCIe slots for other purposes
  • Ability to consolidate multiple GbE and 2/4G FC connections onto a single 10GbE wire
  • Improved bandwidth utilization since multiple types of data traffic share the same wire

Table 1)
Impact of Unified Connect on a typical storage system configuration.

The elimination of the redundant equipment needed to maintain separate IP and FC networks:

  • Reduces cooling and power costs while freeing up valuable real estate in the data center
  • Eliminates cabling complexity
  • Simplifies management
  • Reduces data center costs

NetApp is the only storage provider with FCoE and IP protocol support over the same wire, and Unified Connect is supported on existing hardware platforms. Unified Connect is a software update made possible by Data ONTAP 8.0.1. If you’ve already purchased unified target adapters (UTAs), no hardware changes or upgrades are required; you only need to upgrade your software to get the Unified Connect capability.

Unified Connect Performance

How Unified Connect actually performs with multiple protocols running simultaneously on a single wire is naturally a concern for anyone considering the technology. NetApp recently ran a series of tests with Intel to verify the performance of Unified Connect and other features, using a NetApp® FAS6280 storage system with UTAs installed, a Cisco Nexus® 5020 switch, and servers based on Intel® Xeon® processors with Intel X520 series 10GbE adapters.

IOmeter was used to run a variety of I/O tests with FCoE traffic accessing a LUN and CIFS traffic accessing a mapped drive simultaneously. Both used the same network interface on the server and target adapter on the FAS6280. Per best practices, the mapped drive and LUN were on separate flexible volumes on the storage system.

Both protocols were constrained by a class of service (CoS) dedicating 80% of the 10GbE line capacity to FC traffic and 20% to Ethernet traffic. Even with a block and file storage protocol accessing the same wire, the FAS6280 and X520 Ethernet adapters had no difficulty maintaining the same line rate as for single-protocol tests. Networking performance was unchanged, with the CoS maintaining the 80/20 ratio of block-to-file traffic.

A typical result is shown in Figure 2. For details and more performance results, see the complete study.

Figure 2)
Unified Connect performance. IOmeter was used to generate simultaneous FCoE and CIFS traffic across a single wire. CoS was used to dedicate 80% of available bandwidth to FCoE and 20% to CIFS.

How Unified Connect Works

FCoE, converged Ethernet, and Unified Connect are all enabled by Data Center Bridging (DCB) enhancements to the Ethernet protocol. DCB adds bandwidth allocation and flow control based on traffic classification, along with end-to-end congestion notification. Discovery and configuration of DCB capabilities are performed with the Data Center Bridging Exchange protocol (DCBX), which is carried over the Link Layer Discovery Protocol (LLDP).

Bandwidth allocation on Ethernet with DCB is performed with enhanced transmission selection (ETS), which is defined in the IEEE 802.1Qaz standard. Traffic is classified into one of eight groups (0-7) using the priority field in the Ethernet frame header. Each class is assigned a minimum share of the available bandwidth. If there is competition or oversubscription on a link, each traffic class gets at least its configured amount of bandwidth. If there is no contention on the link, any class can use more or less than it is assigned.
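
To make this guarantee-plus-borrowing behavior concrete, here is a minimal Python sketch of ETS-style allocation. It is an illustration only, not the scheduler a DCB switch actually implements; the function name, the 80/20 split, and the demand figures are all hypothetical.

    # Illustrative only: a simplified model of ETS minimum-bandwidth allocation,
    # not an actual DCB switch scheduler. Names and numbers are hypothetical.

    def allocate_ets(link_gbps, min_share_pct, demand_gbps):
        """Give each traffic class at least its configured minimum share of the
        link, then let classes with unmet demand borrow unused capacity."""
        # Guaranteed floor per class: its ETS percentage of the link,
        # capped at what the class actually wants to send.
        grant = {c: min(demand_gbps[c], link_gbps * min_share_pct[c] / 100.0)
                 for c in demand_gbps}

        # Redistribute leftover capacity to classes that still want more,
        # in proportion to their unmet demand (one simple policy among many).
        leftover = link_gbps - sum(grant.values())
        unmet = {c: demand_gbps[c] - grant[c] for c in demand_gbps}
        total_unmet = sum(unmet.values())
        if total_unmet > 0:
            give = min(leftover, total_unmet)
            for c in grant:
                grant[c] += give * unmet[c] / total_unmet
        return grant

    # 10GbE link; FCoE class guaranteed 80%, all other IP traffic guaranteed 20%.
    print(allocate_ets(10.0, {"fcoe": 80, "ip": 20}, {"fcoe": 9.0, "ip": 1.5}))
    # -> {'fcoe': 8.5, 'ip': 1.5}: FCoE keeps its 8 Gb/s floor and also borrows
    #    the 0.5 Gb/s the IP class is not using.

In this example, FCoE is guaranteed 8 Gb/s of the 10GbE link but also borrows the 0.5 Gb/s the IP class is not using, which mirrors the behavior described above.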

The first implementation of ETS within Unified Connect uses only two classifications. One supports FCoE on one priority queue, while all IP traffic is on another priority queue. The next generation will provide greater granularity with up to 8 priority queues.

Priority-based flow control (PFC) provides link-level flow control that operates on a per-priority basis. It is similar to 802.3x PAUSE, except that it can pause an individual traffic class. This provides a network with no loss due to congestion for those traffic classes that use PFC. Not all traffic needs PFC. Normal TCP traffic provides its own flow control mechanisms based on window sizes. Because the Fibre Channel protocol expects a lossless medium, FCoE has no built-in flow control and requires PFC to give it a lossless link layer. PFC is defined in the 802.1Qbb standard.
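
The per-priority pause mechanism can be sketched in a few lines of Python. This is a toy model for illustration, not a real NIC or switch implementation; the class name, the method names, and the choice of priority 3 for FCoE (a common default, not a requirement) are assumptions.

    # Illustrative only: a toy model of priority-based flow control (PFC).
    # Class and method names are invented; real PFC lives in NIC/switch hardware.

    class SendPort:
        """Per-priority transmit queues; a PFC pause halts one priority only."""
        def __init__(self):
            self.queues = {p: [] for p in range(8)}   # eight 802.1p priorities
            self.paused = set()                       # priorities paused by the peer

        def receive_pfc_pause(self, priority, quanta):
            # A pause frame from the receiving port names a single priority.
            # quanta > 0 means "stop sending this class"; 0 means "resume".
            if quanta > 0:
                self.paused.add(priority)
            else:
                self.paused.discard(priority)

        def transmit(self):
            # Drain every queue except the paused ones, so pausing the FCoE
            # priority leaves ordinary TCP/IP traffic flowing untouched.
            sent = []
            for prio, queue in self.queues.items():
                if prio not in self.paused:
                    sent.extend(queue)
                    queue.clear()
            return sent

    port = SendPort()
    port.queues[3].append("FCoE frame")   # FCoE is often mapped to priority 3
    port.queues[0].append("NFS frame")
    port.receive_pfc_pause(priority=3, quanta=65535)   # receiver's buffer is filling
    print(port.transmit())   # -> ['NFS frame']; the FCoE frame waits, nothing is dropped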

ETS and PFC values are generally configured on the DCB-capable switch and pushed out to the end nodes. For ETS, the sending port controls the bandwidth allocation for that segment of the link (initiator to switch, switch to switch, or switch to target). With PFC, the receiving port sends the per-priority pause, and the sending port reacts by not sending traffic for that traffic class out of the port that received the pause.

Best Practices

You can learn more about the best way to go about introducing FCoE in your data center in a previous Tech OnTap article.

For any FCoE deployment, you should follow the guidelines outlined in TR-3800: Fibre Channel over Ethernet (FCoE) End-to-End Deployment Guide and TR-3802: Ethernet Storage Best Practices.

Follow a few additional best practices to make sure of a successful Unified Connect deployment:

  • Evaluate the bandwidth needs of all traffic sharing the converged network to determine how much is needed for FCoE traffic and how much will be needed for other types of Ethernet traffic.
  • Configure ETS and PFC settings on the switches so that all nodes share the same configuration.
  • When connecting multiple DCB-capable switches, configure all switches with the same DCB settings.
  • Set ETS bandwidth allocation for FCoE to accommodate the minimum acceptable throughput for all SAN traffic that will utilize a link. For example, if 10 hosts connect to a single FCoE switch and storage system from that switch, determine the minimum acceptable combined throughput of all 10 hosts. This will be your ETS setting for the FCoE traffic class. Because the ETS allocation value only sets the minimum available bandwidth, if more throughput is needed during spikes of traffic, it can be used as long as it is available. Likewise, if FCoE traffic is not utilizing the amount allocated, other Ethernet traffic can take advantage of the remainder. (A worked sizing example follows this list.)
  • You must configure a dedicated VLAN for each VSAN within the FCoE-capable switch.
  • A separate multiple spanning tree (MST) instance should be configured for each VSAN.
  • Unified ports must be configured as IEEE 802.1Q interfaces on the DCB-capable switch.
  • Data ONTAP 8.0.1 does not currently support interface bundling, also known as IFGRP, on ports used for FCoE.
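
As a hypothetical illustration of the ETS sizing guideline above, the following sketch turns per-host minimums into an ETS percentage. The host count, per-host throughput, and link speed are invented numbers for illustration, not recommendations.

    # Hypothetical ETS sizing example; all figures are invented for illustration.

    host_min_gbps = [0.6] * 10        # 10 hosts, each needing at least 0.6 Gb/s of FCoE
    link_gbps = 10.0                  # converged 10GbE link to the storage system

    fcoe_min_gbps = sum(host_min_gbps)                     # 6.0 Gb/s combined minimum
    fcoe_ets_pct = round(100 * fcoe_min_gbps / link_gbps)  # ETS floor for the FCoE class
    other_pct = 100 - fcoe_ets_pct                         # floor left for other Ethernet traffic

    print(fcoe_ets_pct, other_pct)    # -> 60 40

Remember that these values are floors, not caps: if FCoE is idle, other Ethernet traffic can use more than its allocation, and vice versa.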

The Expanding Converged Network Ecosystem

With the availability of Unified Connect, NetApp has delivered on the full promise of FCoE for data center deployments. Deploying end-to-end FCoE on converged networks promises to eliminate cabling complexity, simplify management, and significantly reduce overall data center expenses without sacrificing performance.

An expanding ecosystem of solution and service providers can help make sure of your success. Most of the major host operating systems already provide FCoE support or will in the near future. Intel, Broadcom, Emulex, QLogic, Brocade, and Cisco have adapters that support FCoE, and Cisco and Brocade have switches that support FCoE and DCB.

In addition, new standards and open source efforts are emerging to strengthen the ecosystem further. The Open-FCoE project provides an open source FCoE software stack for Linux®, including a software initiator that allows FCoE to be used with a wider variety of network interfaces; Intel and Broadcom have already announced support for it. The FC-BB-6 standard is being developed to expand the number of configurations supported under FCoE, including configurations that allow standard Ethernet switches (without DCB) to be part of the topology.

Figure 3)
The expanding NetApp FCoE ecosystem

Got opinions about Unified Connect?
 
Ask questions, exchange ideas, and share your thoughts online in NetApp Communities.

Jason Blosil
Product Marketing Manager
NetApp

Jason has over 15 years of IT industry experience, including 10+ years in the data storage industry managing and marketing server-based RAID storage products and external storage systems. He currently specializes in Ethernet SAN (iSCSI and FCoE) storage solutions at NetApp and is an active participant in industry associations, including the Ethernet Alliance and the SNIA ESF, where he acts as the cochair of the iSCSI SIG.


Mike McNamara
Senior Manager, Product Marketing
NetApp

Mike has over 22 years of computer industry marketing experience, 16 years of which have been specifically focused on storage. He worked at Adaptec, EMC, and Hewlett Packard before joining NetApp more than five years ago. Mike is also the marketing chairperson for the Fibre Channel Industry Association (FCIA) and a member of the Ethernet Alliance.
