
Network Convergence: Deploying FCoE in Your Data Center


This article continues a recent series of Tech OnTap features on FCoE. Previous articles include “FCoE: The Future of Fibre Channel?” by Nick Triantos, which appeared in November 2008, and “How FCoE and iSCSI Fit into Your Storage Strategy” by Mike McNamara and Silvano Gai, which appeared in June 2009. Refer to those articles for additional background information. [Tech OnTap eds.]

Many enterprise data centers use Ethernet networks for LAN and IP data traffic plus separate Fibre Channel (FC) networks for storage area network (SAN) traffic. The increased adoption of 10-Gigabit Ethernet (10GbE) in the data center, combined with the availability of Fibre Channel over Ethernet (FCoE) and new lossless 10GbE technologies, makes it possible to consolidate FC data flows with LAN and IP data traffic on the same Ethernet infrastructure. Network convergence enables you to preserve your existing investments in FC storage, reduce data center costs and complexity, and simplify network management.

Although the benefits of using FCoE are compelling, many organizations are still waiting to deploy the technology. This article addresses frequently asked questions about FCoE and concludes with information on how you can make the move using a gradual, phased approach.

IT Challenges: Maintaining Multiple Networks

Most data centers maintain multiple networks for different purposes:

  • Ethernet for local area networks (LANs) to transfer small amounts of information across short or long distances or in clustered computing environments. Ethernet provides a cost-effective and efficient way to support a variety of data types, including corporate LANs, voice-over-IP telephony, and storage with NFS, CIFS, and iSCSI.
  • Fibre Channel for storage area networks (SANs) to provide block I/O for applications such as network booting; mail servers; and large, data-intensive databases. FC SANs are an excellent solution for storage consolidation, centralized storage management, high performance, reliability, and business continuance.

IP networks and FC SANs each play an essential role in the data center, but they differ in design and functionality. The two networks have their own security needs and traffic patterns, and use separate management toolsets. Each network is built and maintained on dedicated infrastructure, with separate cabling and separate network interfaces on each server and storage system.

Managing two discrete networks increases the complexity and cost of your data center. Converging your Ethernet and FC networks can make your data center more efficient without sacrificing your investment in FC infrastructure.

Fibre Channel over Ethernet

FCoE enables you to transmit IP and FC traffic on a single, unified Ethernet cable. In this way, the merged network can support LAN and SAN data types, reducing equipment and cabling in the data center while simultaneously lowering the power and cooling load associated with that equipment. There are also fewer support points when consolidating to a unified network, which helps reduce the management burden.

FCoE is enabled by an enhanced 10GbE technology commonly referred to as data center bridging (DCB) or Converged Enhanced Ethernet (CEE). Tunneling protocols such as FCIP and iFCP use IP to transmit FC traffic over long distances, but FCoE is a layer-2 encapsulation protocol that uses the Ethernet physical transport to carry FC data. Recent advances and upcoming additions to the Ethernet standard, such as TRILL (see sidebar) and the ability to provide lossless fabric characteristics over a 10-Gigabit link, are what enable FCoE.
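To make the layer-2 relationship concrete, the short Python sketch below shows how LAN and SAN traffic can share one converged link and still be told apart purely by the EtherType field in the Ethernet header (0x8906 for FCoE, 0x8914 for FIP, 0x0800 for IPv4). This is an illustrative parser only, not vendor or NetApp code, and the sample frames are made up.

import struct

# Well-known IEEE EtherType values that can share one converged 10GbE link.
ETHERTYPES = {
    0x0800: "IPv4 (LAN, iSCSI, NFS, CIFS traffic)",
    0x86DD: "IPv6",
    0x8906: "FCoE (encapsulated Fibre Channel frame)",
    0x8914: "FIP (FCoE Initialization Protocol)",
}

def classify_frame(frame: bytes) -> str:
    """Label a raw Ethernet frame by the EtherType in its layer-2 header."""
    if len(frame) < 14:
        return "runt frame"
    # Bytes 0-5: destination MAC, 6-11: source MAC, 12-13: EtherType.
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype == 0x8100 and len(frame) >= 18:   # skip an 802.1Q VLAN tag
        ethertype = struct.unpack("!H", frame[16:18])[0]
    return ETHERTYPES.get(ethertype, "other (0x%04x)" % ethertype)

# Hypothetical sample frames (made-up MAC addresses, zero-filled payloads).
fcoe_frame = bytes.fromhex("ffffffffffff" "020000000001" "8906") + bytes(50)
ipv4_frame = bytes.fromhex("ffffffffffff" "020000000002" "0800") + bytes(50)
print(classify_frame(fcoe_frame))   # FCoE (encapsulated Fibre Channel frame)
print(classify_frame(ipv4_frame))   # IPv4 (LAN, iSCSI, NFS, CIFS traffic)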

FCoE delivers significant value to organizations that want to consolidate server I/O, network, and storage interconnects by converging onto a single network storage technology. For data centers with large investments, even the simplest reduction in the amount of equipment that has to be managed can reap significant benefits. And sharing the same network fabric—from server to switch to storage—removes the requirement of dedicated networks, significantly reducing TCO by preserving existing infrastructure investments and maintaining backward compatibility with familiar IT procedures and processes.

FCoE Components

The components needed to implement FCoE include:

  • Converged network adapters (CNAs). These combine the functionality of Ethernet NICs and Fibre Channel host bus adapters (HBAs), reducing the number of server adapters you need to buy, cutting port count, and significantly reducing cabling.
  • FCoE cables. There are currently two options for FCoE cabling: the optical cabling generally found in FC SANs and a new type of Twinax copper cabling. Twinax copper cables require less power and are less expensive, but because their length is limited to less than 10 meters, you will likely need optical cabling to reach from top-of-rack switches to the LAN.
  • FCoE switches. You need FCoE/DCB switches to connect servers to your storage arrays or native FCoE storage systems. For early adopters, that typically means top-of-rack switches or, where possible, end-of-row blades.
  • FCoE/DCB storage systems. These storage systems natively support FCoE and converged traffic. There are also storage systems that connect to an FCoE switch over Fibre Channel, with FCoE running from the switch to the host servers.

Impact on Existing Servers, Networking, and Storage

FCoE requires minimal changes to your existing IT infrastructure. It is a natural evolution of Fibre Channel technology, designed to carry data over Ethernet physical and data-link layers. Using Fibre Channel’s upper layers simplifies FCoE deployment by allowing coexistence with deployed FC SANs and enables you to leverage enterprise-proven Fibre Channel software stacks, management tools, and existing training. Most importantly, you don’t need to change your applications in order to benefit from the performance and potential cost benefits of FCoE.

Organizational Issues

In traditional data center environments, the storage group owns and operates the FC SAN while the networking group owns and operates the Ethernet LAN. Because the two groups have historically been separate, introducing FCoE into the data center may bring beneficial changes to some IT practices.

Cultural, political, and behavioral concerns around data center operations and provisioning can present organizational obstacles to FCoE adoption. Some new business processes and procedures may need to be implemented so that proper control mechanisms are in place for FCoE networks. Purchasing patterns may have to be modified, and the reliability of Ethernet networks may have to be increased as well.

With the convergence of FC and IP created by FCoE, these two traditionally separate network realms overlap. Implementing FCoE requires little if any additional IT training. FCoE leverages the existing IT expertise and skill sets of your IP data and FC teams. Role-based management features in management applications allow your FC group to continue owning and operating the SAN and your IP networking group to continue owning and operating the data network.

Where to Deploy

While the benefits of using FCoE are certainly compelling, you may still be waiting to deploy the technology. Fortunately, FCoE convergence is not a disruptive process and does not require a “rip and replace” upgrade. Moving to FCoE can be done gradually, using a phased approach. Most early FCoE deployments will likely be part of new server rollouts in Windows® and Linux® environments in which virtualized tier-3 and some tier-2 applications run.

Because FCoE is a relatively new technology, initial FCoE deployment is best suited to access-layer server I/O consolidation. Storage traffic requires the new lossless Ethernet, and the 10GbE transport still lacks link-layer multipathing and multihop capabilities. Such features are currently under development and should become available later in 2010. These capabilities will enable the deployment of larger FCoE networks, expanding the reach of FCoE beyond access-layer server connectivity and I/O consolidation.

Best practices for determining where to deploy FCoE include:

  • Choose environments that already have a Fibre Channel skill base and Fibre Channel infrastructure
  • Consider “green-field” deployments, in which new infrastructure is being introduced to accommodate data growth
  • Begin the transition with tier-3 or tier-2 applications; gain experience in labs or less mission-critical tier-3 environments and then use what you’ve learned to make the transition in tier-2 and, in some instances, tier-1 applications
  • Start implementing FCoE with top-of-rack, access-layer server I/O consolidation—that step may be combined with native FCoE storage deployment; extending FCoE beyond access-layer servers should wait for multipathing and multihop standards (TRILL) to become practical

How to Begin

Migration to FCoE can be accomplished with a gradual, phased approach, typically starting at the edge or switch, then moving to native FCoE storage, and eventually going deeper into the corporate network.

The following diagram depicts a typical data center architecture before network convergence begins. The FC SAN (illustrated by the orange line) is a parallel network requiring network ports and cabling over and above those required for the Ethernet IP LAN (illustrated by the blue line):

Figure 1) Layout of a typical data center before implementing DCB/FCoE.

Phase 1: Making the Transition to DCB/FCoE at the Edge or Switch

Moving to a converged or unified Ethernet infrastructure can be done gradually and will likely begin at the edge (illustrated by the green lines), where the greatest return on investment can be realized. With FCoE convergence, the port count at servers and edge switches can be reduced by half, driving significant capital and operational cost reductions as well as management improvements.
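As a back-of-the-envelope illustration of that claim, the Python sketch below compares server-edge port counts before and after convergence. The per-server adapter counts (two NIC ports plus two HBA ports replaced by one dual-port CNA) are assumptions chosen for illustration, not sizing guidance.

def edge_ports(servers, nic_ports=2, hba_ports=2, cna_ports=2):
    """Compare server-edge port counts before and after FCoE convergence."""
    before = servers * (nic_ports + hba_ports)   # separate LAN NICs and FC HBAs
    after = servers * cna_ports                  # one dual-port CNA per server
    saving = 100.0 * (before - after) / before
    return before, after, saving

before, after, saving = edge_ports(servers=40)
print("edge ports before: %d, after: %d, reduction: %.0f%%" % (before, after, saving))
# edge ports before: 160, after: 80, reduction: 50%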

Figure 2) Phase 1: Making the transition to FCoE at the edge or switch.

Phase 2: Making the Transition to Native DCB/FCoE Storage Systems

Move to an end-to-end DCB/FCoE solution from the host to the network to native DCB/FCoE storage. The typical configuration has rack servers with CNAs connected to top-of-rack DCB/FCoE switches, which in turn connect to unified storage that supports FCoE as well as other protocols. FCoE and converged traffic are supported throughout the infrastructure, providing optimal savings.

Figure 3) Phase 2: Making the transition to native FCoE storage.

Phase 3: Making the Transition to DCB/FCoE at the Core

After implementing FCoE at the edge or switch, enterprises can migrate to a comprehensive enhanced 10GbE network at the core (illustrated by the green lines) and then gradually move to storage that supports FCoE as well. The end goal is a single 10GbE infrastructure that supports multiple traffic types (FCoE, iSCSI, NFS, CIFS) from host to fabric to storage, all sharing the same Ethernet infrastructure.

Figure 4) Phase 3: End-to-end FCoE, from edge to core to storage.

Conclusion

FCoE brings together two leading technologies—the Fibre Channel protocol and an enhanced 10-Gigabit Ethernet physical transport—to provide a compelling option for SAN connectivity and networking. To simplify administration and protect FC SAN investments, FCoE enables you to use the same management tools and techniques you use today for managing both your IP and FC storage networks.

The benefits of converged networks will drive increased adoption of 10GbE in the data center. FCoE will fuel a new wave of data center consolidation as it lowers complexity, increases efficiency, improves utilization, and, ultimately, reduces power, space, and cooling requirements.

If you are planning new data centers or are upgrading your storage networks, you should seriously consider FCoE. By taking a phased approach to consolidating your data centers around Ethernet, you can build out your Ethernet infrastructure over time while protecting existing FC infrastructure investments.

Got opinions about FCoE?

Ask questions, exchange ideas, and share your thoughts online in NetApp Communities.

Mike McNamara
Sr. Manager, Product Marketing
NetApp

Mike has over 20 years of computer industry marketing experience, 15 years of which have been specifically focused on storage. He worked at Adaptec, EMC, and Hewlett Packard before joining NetApp more than four years ago. Mike is also the marketing chairperson for the Fibre Channel Industry Association (FCIA).


Ahmad Zamer
Sr. Product Marketing Manager
Brocade

Ahmad has over 25 years of computer-industry experience, with special emphasis on networking and computer storage technologies. He worked at Philips and Intel before joining Brocade. Ahmad is a technical writer with more than 50 published articles to his credit.



Replies

Check out the document 'Cisco, NetApp, VMWare Enhanced Secure Multi-Tenancy Design Guide' in NetApp Community.

Introduction

Goal of This Document

Cisco®, VMware®, and NetApp® have jointly designed a best-in-breed Enhanced Secure Multi-Tenancy (ESMT) Architecture and have validated this design in a lab environment.

This document describes the design of and the rationale behind the Enhanced Secure Multi-Tenancy Architecture. The design includes many issues that must be addressed prior to deployment, as no two environments are alike. This document also discusses the problems that this architecture solves and the four pillars of an Enhanced Secure Multi-Tenancy environment.

Audience

The target audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers who wish to deploy an Enhanced Secure Multi-Tenancy (ESMT) environment consisting of best-of-breed products from Cisco, NetApp, and VMware.

Objectives

This document is intended to articulate the design considerations and validation efforts required to design, deploy, and back up Enhanced Secure Multi-Tenancy virtual IT-as-a-service.

Just follow this link to see the document 'Cisco, NetApp, VMWare Enhanced Secure Multi-Tenancy Design Guide': http://communities.netapp.com/docs/DOC-8314
