Network Convergence: Deploying FCoE in Your Data Center
2010-03-29 10:18 AM
This article continues a recent series of Tech OnTap features on FCoE. Previous articles include “FCoE: The Future of Fibre Channel?” by Nick Triantos, which appeared in November 2008, and “How FCoE and iSCSI Fit into Your Storage Strategy” by Mike McNamara and Silvano Gai, which appeared in June 2009. Refer to those articles for additional background information. [Tech OnTap eds.]
Many enterprise data centers use Ethernet networks for LAN and IP data traffic plus separate Fibre Channel (FC) networks for storage area network (SAN) traffic. The increased adoption of 10-Gigabit Ethernet (10GbE) in the data center, combined with the availability of Fibre Channel over Ethernet (FCoE) and new lossless 10GbE technologies, makes it possible to consolidate FC data flows with LAN and IP data traffic on the same Ethernet infrastructure. Network convergence enables you to preserve your existing investments in FC storage, reduce data center costs and complexity, and simplify network management.
Although the benefits of using FCoE are compelling, many are still waiting to deploy the technology. This article addresses frequently asked questions about the technology and concludes with information on how you can make the move to FCoE using a gradual, phased approach.
IT Challenges: Maintaining Multiple Networks
Most data centers maintain multiple networks for different purposes: an Ethernet network for LAN and IP data traffic, and a separate Fibre Channel SAN for storage traffic.
IP networks and FC SANs each play an essential role in the data center, but they differ in design and functionality. The two networks have their own security needs and traffic patterns, and use separate management toolsets. Each network is built and maintained on dedicated infrastructure, with separate cabling and separate network interfaces on each server and storage system.
Managing two discrete networks increases the complexity and cost of your data center. Converging your Ethernet and FC networks can make your data center more efficient without sacrificing your investment in FC infrastructure.
Fibre Channel over Ethernet
FCoE enables you to transmit IP and FC traffic on a single, unified Ethernet cable. In this way, the merged network can support LAN and SAN data types, reducing equipment and cabling in the data center while simultaneously lowering the power and cooling load associated with that equipment. There are also fewer support points when consolidating to a unified network, which helps reduce the management burden.
FCoE is enabled by an enhanced 10GbE technology commonly referred to as data center bridging (DCB) or Converged Enhanced Ethernet (CEE). Tunneling protocols, such as FCIP and iFCP, use IP to transmit FC traffic over long distances, but FCoE is a layer-2 encapsulation protocol that uses the Ethernet physical transport to carry FC data. Recent advances and upcoming additions to the Ethernet standard, such as TRILL (see sidebar) and the ability to provide lossless fabric characteristics over a 10-Gigabit link, are what make FCoE possible.
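Because FCoE is a layer-2 encapsulation, an FCoE frame is simply an Ethernet frame whose EtherType (0x8906) identifies an embedded Fibre Channel frame. The sketch below illustrates the wrapping in Python; the function name and simplified layout are mine, and the 802.1Q tag and Ethernet FCS are omitted for brevity.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType identifying FCoE frames

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in a minimal FCoE Ethernet frame.

    Simplified layout (802.1Q tag and Ethernet FCS omitted):
      Ethernet header | FCoE header (version + reserved + SOF) | FC frame | EOF + reserved
    The FC frame's own SOF/EOF ordered sets are re-encoded as single bytes
    in the FCoE header and trailer.
    """
    # Embedded FC frame = 24-byte FC header + 0-2112 byte payload + 4-byte CRC
    if not (28 <= len(fc_frame) <= 2140):
        raise ValueError("embedded FC frame must be 28-2140 bytes")
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    sof_code = 0x2E  # example SOF delimiter code (SOFi3)
    eof_code = 0x41  # example EOF delimiter code (EOFn)
    # FCoE header: 4-bit version (0) in first byte, reserved bits, then SOF
    fcoe_header = struct.pack("!B12xB", 0, sof_code)
    trailer = struct.pack("!B3x", eof_code)  # EOF byte plus 3 reserved bytes
    return eth_header + fcoe_header + fc_frame + trailer
```

Note that the maximum embedded FC frame (2140 bytes) exceeds the standard 1500-byte Ethernet payload, which is why FCoE links must support "baby jumbo" frames of roughly 2.2 KB.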
FCoE delivers significant value to organizations that want to consolidate server I/O, network, and storage interconnects by converging onto a single network storage technology. For data centers with large investments, even the simplest reduction in the amount of equipment that has to be managed can reap significant benefits. And sharing the same network fabric—from server to switch to storage—removes the requirement of dedicated networks, significantly reducing TCO by preserving existing infrastructure investments and maintaining backward compatibility with familiar IT procedures and processes.
Components needed to implement FCoE include converged network adapters (CNAs) in the servers, DCB/FCoE-capable switches, and storage systems with native FCoE support.
Impact on Existing Servers, Networking, and Storage
FCoE requires minimal changes to your existing IT infrastructure. It is a natural evolution of Fibre Channel technology, designed to carry data over Ethernet physical and data-link layers. Using Fibre Channel’s upper layers simplifies FCoE deployment by allowing coexistence with deployed FC SANs and enables you to leverage enterprise-proven Fibre Channel software stacks, management tools, and existing training. Most importantly, you don’t need to change your applications in order to benefit from the performance and potential cost benefits of FCoE.
In traditional data center environments, the storage group owns and operates the FC SAN while the networking group owns and operates the Ethernet LAN. Since the two groups have been historically separate, introducing FCoE into the data center may introduce beneficial changes to some IT practices.
Cultural, political, and behavioral factors in data center management and provisioning practices can present organizational obstacles to FCoE adoption. New business processes and procedures may need to be implemented so that proper control mechanisms are in place for FCoE networks, purchasing patterns may have to be modified, and the reliability of Ethernet networks may have to be increased as well.
With the convergence of FC and IP created by FCoE, these two traditionally separate network realms overlap. Implementing FCoE requires little if any additional IT training. FCoE leverages the existing IT expertise and skill sets of your IP data and FC teams. Role-based management features in management applications allow your FC group to continue owning and operating the SAN and your IP networking group to continue owning and operating the data network.
Where to Deploy
If you are still waiting to deploy the technology, keep in mind that FCoE convergence is not a disruptive process and does not require a "rip and replace" upgrade. Moving to FCoE can be done gradually, using a phased approach. Most early FCoE deployments will likely be part of new server deployments in Windows® and Linux® environments in which virtualized tier-3 and some tier-2 applications are deployed.
Because FCoE is a relatively new technology, initial deployment is best suited for access-layer server I/O consolidation. Storage traffic requires the new lossless Ethernet, and extending the 10GbE transport beyond a single hop depends on link-layer multipathing and multihop capabilities that are currently under development and should become available later in 2010. These capabilities will enable the deployment of larger FCoE networks, expanding the reach of FCoE beyond access-layer server connectivity and I/O consolidation.
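The lossless behavior that storage traffic depends on comes from priority-based flow control (PFC, IEEE 802.1Qbb), one of the DCB enhancements: instead of pausing an entire link, the receiver pauses only the traffic class whose buffer is filling, so FCoE frames are never dropped while LAN traffic keeps flowing. The toy model below illustrates the idea; the class and method names are my own, not part of any standard or product API.

```python
class PfcQueue:
    """Toy model of priority-based flow control (IEEE 802.1Qbb).

    Each 802.1p priority (0-7) gets its own buffer. When one priority's
    buffer crosses the pause threshold, only that priority is paused
    upstream; the other classes continue transmitting unaffected.
    """

    def __init__(self, pause_threshold: int):
        self.pause_threshold = pause_threshold
        self.buffers = {prio: 0 for prio in range(8)}  # frames queued per priority
        self.paused: set[int] = set()

    def enqueue(self, prio: int, frames: int) -> None:
        """Receive frames for one priority; issue a per-priority PAUSE if full."""
        if prio in self.paused:
            raise RuntimeError(f"sender must honor PAUSE for priority {prio}")
        self.buffers[prio] += frames
        if self.buffers[prio] >= self.pause_threshold:
            self.paused.add(prio)  # pause this class only, never the whole link

    def drain(self, prio: int, frames: int) -> None:
        """Forward frames onward; lift the PAUSE once the buffer drains."""
        self.buffers[prio] = max(0, self.buffers[prio] - frames)
        if self.buffers[prio] < self.pause_threshold:
            self.paused.discard(prio)
```

In a converged deployment, the FCoE traffic class would typically be assigned to a PFC-enabled priority while best-effort LAN classes remain droppable.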
Best practice, therefore, is to deploy FCoE first where it delivers immediate value with minimal risk: new server deployments and access-layer server I/O consolidation.
How to Begin
Migration to FCoE can be accomplished with a gradual, phased approach, typically starting at the edge or switch, then moving to native FCoE storage, and eventually going deeper into the corporate network.
The following diagram depicts a typical data center architecture before network convergence begins. The FC SAN (illustrated by the orange line) is a parallel network requiring network ports and cabling over and above those required for the Ethernet IP LAN (illustrated by the blue line):
Figure 1) Layout of a typical data center before implementing DCB/FCoE.
Phase 1: Making the Transition to DCB/FCoE at the Edge or Switch
Moving to a converged or unified Ethernet infrastructure can be done gradually and will likely begin at the edge (illustrated by the green lines) where the greatest return on investment can be realized. With FCoE convergence, port count at the servers and edge switches can be reduced by half, driving significant capital and operational cost reductions as well as management improvements.
Figure 2) Phase 1: Making the transition to FCoE at the edge or switch.
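The port-count arithmetic behind Phase 1 can be sketched as follows. This is illustrative Python; the function name and the default adapter counts (a redundant NIC pair plus a redundant HBA pair per server) are assumptions for the example, not measurements.

```python
def converged_port_count(servers: int,
                         nics_per_server: int = 2,
                         hbas_per_server: int = 2) -> dict:
    """Compare server-side port/cable counts before and after convergence.

    Assumes each server's redundant NIC pair and redundant HBA pair
    collapse into a redundant pair of converged network adapters (CNAs),
    each CNA carrying both LAN and FCoE traffic over one cable.
    """
    before = servers * (nics_per_server + hbas_per_server)
    after = servers * max(nics_per_server, hbas_per_server)  # one CNA per NIC+HBA pair
    return {
        "ports_before": before,
        "ports_after": after,
        "reduction_pct": round(100 * (before - after) / before, 1),
    }
```

With the default redundant-pair assumption, a 100-server rack row drops from 400 server-side ports and cables to 200, matching the 50% reduction described above.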
Phase 2: Making the Transition to Native DCB/FCoE Storage Systems
Move to an end-to-end DCB/FCoE solution from the host to the network to native DCB/FCoE storage. The typical configuration has rack servers with CNAs connected to top-of-rack DCB/FCoE switches connected to unified storage that supports FCoE as well as other protocols. FCoE and converged traffic is supported throughout the infrastructure, providing optimal savings.
Figure 3) Phase 2: Making the transition to native FCoE storage.
Phase 3: Making the Transition to DCB/FCoE at the Core
After implementing FCoE at the edge or switch, enterprises can migrate to a comprehensive enhanced 10GbE network at the core (illustrated by the green lines) and then gradually move to storage that supports FCoE as well. The end goal is a single 10GbE infrastructure that carries multiple traffic types (FCoE, iSCSI, NFS, CIFS) from host to fabric to storage over the same Ethernet network.
Figure 4) Phase 3: End-to-end FCoE, from edge to core to storage.
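When those traffic types share one converged link, another DCB enhancement, Enhanced Transmission Selection (ETS, IEEE 802.1Qaz), divides the link's capacity among traffic classes by configured percentages, so storage traffic gets a guaranteed minimum share. A minimal sketch (the function name and the example shares are mine):

```python
def ets_allocation(link_gbps: float, shares: dict) -> dict:
    """Compute per-traffic-class guaranteed bandwidth under ETS (802.1Qaz).

    `shares` maps a traffic-class label to its configured percentage of
    link bandwidth; the percentages must sum to 100. Unused bandwidth in
    one class may be borrowed by others at run time, but each class is
    guaranteed at least its configured share under congestion.
    """
    if sum(shares.values()) != 100:
        raise ValueError("ETS shares must sum to 100%")
    return {tc: link_gbps * pct / 100 for tc, pct in shares.items()}
```

For example, a 10GbE link configured with 50% for FCoE, 30% for LAN, and 20% for other IP storage guarantees the FCoE class 5 Gb/s under congestion.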
FCoE brings together two leading technologies—the Fibre Channel protocol and an enhanced 10-Gigabit Ethernet physical transport—to provide a compelling option for SAN connectivity and networking. To simplify administration and protect FC SAN investments, FCoE enables you to use the same management tools and techniques you use today for managing both your IP and FC storage networks.
The benefits of converged networks will drive increased adoption of 10GbE in the data center. FCoE will fuel a new wave of data center consolidation as it lowers complexity, increases efficiency, improves utilization, and, ultimately, reduces power, space, and cooling requirements.
If you are planning new data centers or are upgrading your storage networks, you should seriously consider FCoE. By taking a phased approach to consolidating your data centers around Ethernet, you can build out your Ethernet infrastructure over time while protecting existing FC infrastructure investments.
Got opinions about FCoE?
Ask questions, exchange ideas, and share your thoughts online in NetApp Communities.
This NetApp Community is a public, open website that is indexed by search engines such as Google. Participation in the NetApp Community is voluntary. All content posted on the NetApp Community is publicly viewable and available; this includes content entered in the rich text editor, which is not encrypted via HTTPS. Do not post:
- Software files (compressed or uncompressed)
- Files that require an End User License Agreement (EULA)
- Confidential information
- Personal data you do not want publicly available
- Another’s personally identifiable information
- Copyrighted materials without the permission of the copyright owner