Dual site setup with NetApp Fabric MetroCluster (two storage controllers stretched over fiber links)
Two Nexus 5000 switches at each site, giving us two FC fabrics stretched across both sites (as required for MetroCluster)
To be introduced: UCS with a pair of Fabric Interconnects at *each* site
Are there any references or documentation describing this? I have a slight concern about having UCS Fabric Interconnects across two sites on the same FC fabrics (A and B respectively), and I'm not 100% sure they will behave nicely.
We have been testing many different configurations in our labs involving MetroCluster within FlexPod, and the configuration you describe is essentially the version we have settled on to validate and release over the next few months.
We'll leverage MetroCluster across the dual sites for redundancy and failover, as well as two pairs of Fabric Interconnects for the server components, one pair at each site. The only major change from what you describe is that we are validating the Nexus 7K for the networking components, so we can use OTV for seamless migration of VMs and to support multiple failover scenarios. The Nexus 5K will be 100% supported and discussed in the CVD, but our main validation will focus on the N7K because of these additional features.
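For anyone curious what the OTV piece looks like, here is a minimal sketch of the overlay configuration on each site's Nexus 7000. All values (site identifiers, VLAN and interface numbers, multicast groups) are illustrative placeholders, not from the validated design:

```
! Hypothetical OTV sketch per site (illustrative values only)
feature otv
otv site-identifier 0x1            ! must be unique per site
otv site-vlan 99                   ! internal VLAN used for OTV site adjacency

interface Overlay1
  otv join-interface Ethernet1/1   ! L3 uplink carrying OTV between the sites
  otv control-group 239.1.1.1      ! multicast group for the OTV control plane
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110          ! VLANs stretched across sites for VM mobility
  no shutdown
```

The extended VLANs are what allow a VM to vMotion between sites and keep its IP addressing, which is the failover behavior referenced above.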
Please let me know if you have any other questions.
Interesting design. Would there be a minimum number of CNAs in the filers? I'm thinking you would need dedicated ports for the FCoE traffic, one to each fabric, and then, say, a two-port LACP ifgrp for the Ethernet traffic, with vPC on the Nexus side to the dual N5Ks at each site? Or is there no limitation at the NetApp end from this perspective?
Yes, two FCoE ports are the minimum required for FCoE boot, one to each fabric as you mentioned. If card redundancy is a requirement, you'd need two CNAs per controller, along with FC-VI cards for MetroCluster, 10GbE cards for front-end traffic (our testing will involve those ports in an ifgrp), and any other I/O cards required for your environment.
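For the Ethernet side of that, a rough sketch of the LACP ifgrp plus matching vPC might look like the following. This assumes Data ONTAP operating in 7-Mode, and all port names, VLAN IDs, and IP addresses are made-up examples (the `!` lines are annotations, not part of the ONTAP commands):

```
! NetApp controller (7-Mode): two 10GbE ports in a dynamic multimode (LACP) ifgrp
ifgrp create lacp ifgrp0 -b ip e3a e3b
vlan create ifgrp0 100
ifconfig ifgrp0-100 192.168.100.10 netmask 255.255.255.0 up

! Nexus 5K side: a vPC so e3a/e3b can terminate on both switches
feature lacp
feature vpc
vpc domain 10
  peer-keepalive destination 10.0.0.2   ! mgmt IP of the vPC peer switch
interface port-channel 20
  switchport mode trunk
  vpc 20
interface Ethernet1/10
  description to NetApp e3a
  channel-group 20 mode active
```

With a vPC, each controller port lands on a different N5K, so a switch failure only drops one ifgrp member rather than the whole link.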
Yes, generally that is the case. Our Nexus 5k versions of FlexPod use this particular configuration.
However, Cisco requires all (FCoE) storage traffic to be placed in a separate VDC from Ethernet traffic on the Nexus 7000 platform. Since VDCs are defined on a per-port basis, you'll need separate ports for FCoE traffic versus Ethernet traffic, which is why additional adapters are required.
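To make the VDC split concrete, here is a rough sketch of carving out a storage VDC on the N7K. VDC names, VLAN ranges, and interface numbers are placeholder examples, not from the CVD:

```
! From the admin VDC on the Nexus 7000 (illustrative values only)
install feature-set fcoe
vdc fcoe-a type storage
  allocate fcoe-vlan-range 1000-1010 from vdcs prod
  allocate interface Ethernet3/1-4   ! these ports now carry FCoE only

! Then switch into the storage VDC to enable FCoE and bind vFC interfaces
switchto vdc fcoe-a
feature-set fcoe
```

The ports moved into the storage VDC are no longer available for regular Ethernet traffic, which is the reason the design calls for dedicated FCoE ports on the controllers.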
See the illustration in our most recent N7K CVD showing this setup.
I was wondering whether you were any closer with this, or if you could at least answer a question regarding the switches required for Fabric MetroCluster. Do they have to come from Cisco, or does it matter if they are Brocade 6510s? Will this depend on whether it is a completely new purchase or an existing Fabric MetroCluster being upgraded to a FlexPod? I have been approached about a sizeable opportunity involving both of these solutions and want to make sure I get it right.
The FlexPod Datacenter with NetApp MetroCluster validated design is targeted to publish October 23. It will be hosted on the Cisco.com DesignZone page and I'll have links to it from netapp.com/FlexPod as well as FlexPodPartners.com.