Tech ONTAP Articles
This article continues a recent series of Tech OnTap features on FCoE. Previous articles include “FCoE: The Future of Fibre Channel?” by Nick Triantos, which appeared in November 2008, and “How FCoE and iSCSI Fit into Your Storage Strategy” by Mike McNamara and Silvano Gai, which appeared in June 2009. Refer to those articles for additional background information. [Tech OnTap eds.]

Many enterprise data centers use Ethernet networks for LAN and IP data traffic plus separate Fibre Channel (FC) networks for storage area network (SAN) traffic. The increased adoption of 10-Gigabit Ethernet (10GbE) in the data center, combined with the availability of Fibre Channel over Ethernet (FCoE) and new lossless 10GbE technologies, makes it possible to consolidate FC data flows with LAN and IP data traffic on the same Ethernet infrastructure. Network convergence enables you to preserve your existing investments in FC storage, reduce data center costs and complexity, and simplify network management.

Although the benefits of using FCoE are compelling, many organizations are still waiting to deploy the technology. This article addresses frequently asked questions about the technology and concludes with information on how you can make the move to FCoE using a gradual, phased approach.

IT Challenges: Maintaining Multiple Networks

Most data centers maintain multiple networks for different purposes:
IP networks and FC SANs each play an essential role in the data center, but they differ in design and functionality. The two networks have their own security needs and traffic patterns, and they use separate management toolsets. Each network is built and maintained on dedicated infrastructure, with separate cabling and separate network interfaces on each server and storage system. Managing two discrete networks increases the complexity and cost of your data center. Converging your Ethernet and FC networks can make your data center more efficient without sacrificing your investment in FC infrastructure.

Fibre Channel over Ethernet

FCoE enables you to transmit IP and FC traffic on a single, unified Ethernet cable. The merged network can support both LAN and SAN data types, reducing equipment and cabling in the data center while lowering the power and cooling load associated with that equipment. A unified network also has fewer support points, which helps reduce the management burden.

FCoE is enabled by an enhanced 10GbE technology commonly referred to as Data Center Bridging (DCB) or Converged Enhanced Ethernet (CEE). Tunneling protocols such as FCIP and iFCP use IP to transmit FC traffic over long distances, but FCoE is a layer-2 encapsulation protocol that uses the Ethernet physical transport to carry FC data. Recent advances and upcoming additions to the Ethernet standard, such as TRILL (see sidebar) and the ability to provide lossless fabric characteristics over a 10-Gigabit link, are what make FCoE possible.

FCoE delivers significant value to organizations that want to consolidate server I/O, network, and storage interconnects by converging onto a single network storage technology. For data centers with large investments, even a simple reduction in the amount of equipment that has to be managed can yield significant benefits.
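To make "layer-2 encapsulation" concrete, the sketch below wraps a raw FC frame in an Ethernet frame carrying the FCoE Ethertype (0x8906). This is a deliberately simplified rendering of the FC-BB-5 framing, not a production implementation: the SOF/EOF code points shown are just one valid pair, and the Ethernet FCS is omitted.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in a simplified FCoE Ethernet frame.

    Simplified layout (Ethernet FCS omitted):
      Ethernet header (14 B) | version + reserved (13 B) | SOF (1 B)
      | encapsulated FC frame | EOF (1 B) | reserved (3 B)
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + b"\x2e"   # version 0 + reserved bits, SOFi3 code point
    fcoe_trailer = b"\x41" + bytes(3)   # EOFn code point + reserved padding
    return eth_header + fcoe_header + fc_frame + fcoe_trailer
```

The key point the sketch illustrates is that the FC frame travels intact inside the Ethernet payload; nothing is translated into IP, which is what distinguishes FCoE from tunneling protocols such as FCIP and iFCP.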
Sharing the same network fabric, from server to switch to storage, removes the requirement for dedicated networks, significantly reducing TCO while preserving existing infrastructure investments and maintaining backward compatibility with familiar IT procedures and processes.

FCoE Components

Some of the components needed to implement FCoE include:
Impact on Existing Servers, Networking, and Storage

FCoE requires minimal changes to your existing IT infrastructure. It is a natural evolution of Fibre Channel technology, designed to carry data over the Ethernet physical and data-link layers. Reusing Fibre Channel’s upper layers simplifies FCoE deployment by allowing coexistence with deployed FC SANs and lets you leverage enterprise-proven Fibre Channel software stacks, management tools, and existing training. Most importantly, you don’t need to change your applications in order to benefit from the performance and potential cost advantages of FCoE.

Organizational Issues

In traditional data center environments, the storage group owns and operates the FC SAN while the networking group owns and operates the Ethernet LAN. Because the two groups have historically been separate, introducing FCoE into the data center may change some IT practices. Cultural, political, and behavioral concerns in data center and provisioning paradigms can present organizational obstacles to FCoE adoption. New business processes and procedures may need to be implemented so that proper control mechanisms are in place for FCoE networks. Purchasing patterns may have to be modified, and the reliability of Ethernet networks may have to be increased as well. With the convergence of FC and IP created by FCoE, these two traditionally separate network realms overlap.

Implementing FCoE requires little if any additional IT training. FCoE leverages the existing expertise and skill sets of your IP data and FC teams. Role-based management features in management applications allow your FC group to continue owning and operating the SAN and your IP networking group to continue owning and operating the data network.

Where to Deploy

While the benefits of using FCoE are certainly compelling, you may still be waiting to deploy the technology.
Fortunately, FCoE convergence is not a disruptive process and does not require a “rip and replace” upgrade. Moving to FCoE can be done gradually, using a phased approach. Most early FCoE deployments will likely be part of new server deployments in Windows® and Linux® environments in which virtualized tier-3 and some tier-2 applications are deployed. Because FCoE is a relatively new technology, initial deployment is best suited for access-layer server I/O consolidation. Storage traffic requires the new lossless Ethernet, and the 10GbE transport still lacks link-layer multipathing and multihop capabilities. These features are currently under development and should become available later in 2010. They will enable the deployment of larger FCoE networks, expanding the reach of FCoE beyond access-layer server connectivity and I/O consolidation. Best practices for determining where to deploy FCoE include:
How to Begin

Migration to FCoE can be accomplished with a gradual, phased approach, typically starting at the edge or switch, then moving to native FCoE storage, and eventually going deeper into the corporate network. The following diagram depicts a typical data center architecture before network convergence begins. The FC SAN (illustrated by the orange line) is a parallel network requiring network ports and cabling over and above those required for the Ethernet IP LAN (illustrated by the blue line).

Figure 1) Layout of a typical data center before implementing DCB/FCoE.

Phase 1: Making the Transition to DCB/FCoE at the Edge or Switch

Moving to a converged or unified Ethernet infrastructure can be done gradually and will likely begin at the edge (illustrated by the green lines), where the greatest return on investment can be realized. With FCoE convergence, the port count at the servers and edge switches can be cut in half, driving significant capital and operational cost reductions as well as management improvements.

Figure 2) Phase 1: Making the transition to FCoE at the edge or switch.

Phase 2: Making the Transition to Native DCB/FCoE Storage Systems

Move to an end-to-end DCB/FCoE solution from the host to the network to native DCB/FCoE storage. The typical configuration has rack servers with CNAs connected to top-of-rack DCB/FCoE switches, which in turn connect to unified storage that supports FCoE as well as other protocols. FCoE and converged traffic are supported throughout the infrastructure, providing optimal savings.

Figure 3) Phase 2: Making the transition to native FCoE storage.

Phase 3: Making the Transition to DCB/FCoE at the Core

After implementing FCoE at the edge or switch, enterprises can migrate to a comprehensive 10GbE-enhanced Ethernet network at the core (illustrated by the green lines) and then gradually move to storage that supports FCoE as well.
The end goal is a 10GbE Ethernet infrastructure that supports multiple traffic types (FCoE, iSCSI, NFS, CIFS) from host to fabric to storage over the same Ethernet infrastructure.

Figure 4) Phase 3: End-to-end FCoE, from edge to core to storage.

Conclusion

FCoE brings together two leading technologies, the Fibre Channel protocol and an enhanced 10-Gigabit Ethernet physical transport, to provide a compelling option for SAN connectivity and networking. To simplify administration and protect FC SAN investments, FCoE enables you to use the same management tools and techniques you use today for managing both your IP and FC storage networks.

The benefits of converged networks will drive increased adoption of 10GbE in the data center. FCoE will fuel a new wave of data center consolidation as it lowers complexity, increases efficiency, improves utilization, and, ultimately, reduces power, space, and cooling requirements. If you are planning new data centers or are upgrading your storage networks, you should seriously consider FCoE. By taking a phased approach to consolidating your data centers around Ethernet, you can build out your Ethernet infrastructure over time while protecting existing FC infrastructure investments.

Got opinions about FCoE? Ask questions, exchange ideas, and share your thoughts online in NetApp Communities.

Explore: Converging on a Single Network Fabric

The arrival of Fibre Channel over Ethernet and converged network adapters makes it possible to meet all your data center networking needs, LAN and SAN, using a single converged Ethernet fabric. Find out more about the forces fueling this convergence in two recent NetApp white papers: Fabric Convergence with Lossless Ethernet and Fibre Channel over Ethernet (PDF) and Ethernet Storage (PDF).

Learn More About FCoE

Is it time to come up to speed on FCoE technology?
A series of recent Tech OnTap articles can help get you there: FCoE: The Future of Fibre Channel

Evaluation of NetApp Ethernet Storage

Recently, NetApp® storage was evaluated for its ability to support "converged" networks running both IP storage protocols and FCoE. The report includes an overview of FCoE, along with details on the evaluation environment, the test process, and the results of IOmeter tests performing random and sequential reads and writes to LUNs.

What Is TRILL?

TRILL (Transparent Interconnection of Lots of Links) is a draft standard being developed by an Internet Engineering Task Force (IETF) working group. The goal is to develop a link-layer (L2) shortest-path routing protocol for multihop routing environments. TRILL will be compatible with 802.1 Ethernet environments that today use Spanning Tree Protocol (STP). TRILL will use link-state protocols for discovery and will calculate shortest paths to form the routing tables of TRILL-capable routing bridges. The main benefits of TRILL for data centers are:
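To make the TRILL sidebar concrete: a link-state protocol floods topology information so that every routing bridge learns the full graph, and each bridge then computes shortest paths locally. A minimal sketch of that computation is Dijkstra's algorithm; the bridge names and link costs below are invented for illustration, not taken from the TRILL draft.

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm: shortest-path cost from one bridge to all others.

    graph maps each node to {neighbor: link_cost}. Every routing bridge runs
    this over the same flooded topology to build its own forwarding table.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry; a cheaper path was already found
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Toy four-bridge topology (names and costs are illustrative).
topology = {
    "rb1": {"rb2": 1, "rb3": 4},
    "rb2": {"rb1": 1, "rb3": 1, "rb4": 5},
    "rb3": {"rb1": 4, "rb2": 1, "rb4": 1},
    "rb4": {"rb2": 5, "rb3": 1},
}
print(shortest_paths(topology, "rb1"))  # rb4 is reached at cost 3 via rb2 and rb3
```

Because every bridge computes over the same graph, traffic can use any shortest path rather than the single loop-free tree that Spanning Tree Protocol enforces, which is the multipathing benefit the sidebar refers to.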
Check out the document 'Cisco, NetApp, VMWare Enhanced Secure Multi-Tenancy Design Guide' in NetApp Community.
Introduction
Goal of This Document
Cisco®, VMware®, and NetApp® have jointly designed a best-in-breed Enhanced Secure Multi-Tenancy
(ESMT) Architecture and have validated this design in a lab environment.
This document describes the design of, and the rationale behind, the Enhanced Secure Multi-Tenancy Architecture. The design addresses many issues that must be considered prior to deployment, as no two environments are alike. This document also discusses the problems that this architecture solves and the four pillars of an Enhanced
Secure Multi-Tenancy environment.
Audience
The target audience for this document includes, but is not limited to, sales engineers, field consultants,
professional services, IT managers, partner engineering, and customers who wish to deploy an Enhanced
Secure Multi-Tenancy (ESMT) environment consisting of best-of-breed products from Cisco, NetApp,
and VMware.
Objectives
This document is intended to articulate the design considerations and validation efforts required to
design, deploy, and back up an Enhanced Secure Multi-Tenancy virtual IT-as-a-service.
Follow this link to see the document 'Cisco, NetApp, VMWare Enhanced Secure Multi-Tenancy Design Guide':
http://communities.netapp.com/docs/DOC-8314