Hi all,
Anyone with any thoughts re this:
- Dual site setup with NetApp Fabric MetroCluster (two storage controllers stretched over fiber links)
- Two Nexus 5000 switches at each site, so we have two FC fabrics, stretching both sites (as required for MetroCluster)
- To be introduced: UCS with dual Fabric Interconnect Modules at *each* site
Are there any references or documentation describing this kind of setup? I have a slight concern about having UCS Fabric Interconnects across two sites on the same FC fabrics (A and B respectively), and I'm not 100% sure they will behave nicely.
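To make the concern concrete, this is roughly what I picture on the Fabric A switches, assuming we keep the FIs in the default FC end-host (NPV) mode; the VSAN ID and port numbers below are just placeholders to illustrate the idea, not our actual configuration:

    ! Fabric A Nexus 5000 (same idea at both sites) - hypothetical ports/VSAN
    feature npiv
    vsan database
      vsan 101 name FabricA
      vsan 101 interface fc2/1        ! uplink from the local UCS FI-A
      vsan 101 interface fc2/5        ! ISL toward the other site's Fabric A switch
    interface fc2/1
      switchport mode F               ! the FI logs in here as an NPV device
      no shutdown
    interface fc2/5
      switchport mode E               ! stretched fabric over the inter-site fiber
      no shutdown

In other words, both sites' FI-A modules would end up as NPV devices on one stretched fabric, which is exactly the part I'm not sure behaves nicely.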
Thanks,
Radek
1 ACCEPTED SOLUTION
Radek,
We have been testing many different configurations in our labs involving MetroCluster within FlexPod, and the configuration you describe is essentially the version we have settled on to validate and release over the next few months.
We'll leverage MetroCluster across the dual sites for redundancy and failover, as well as 2 pairs of FIs for the server components, one pair at each site. The only major change from what you describe is that we are validating the Nexus 7K for the networking components so we can use OTV for the seamless migration of VMs and to support multiple failover scenarios. The Nexus 5K will be 100% supported and discussed in the CVD, but our main validation will focus on the N7K because of these additional features.
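To give a rough idea of the OTV piece, the VLAN extension on each site's Nexus 7000 looks something like the sketch below; all group addresses, VLAN ranges, and interface numbers are placeholders for illustration, not values from the validated configuration:

    ! OTV on one site's Nexus 7000 - all IDs below are placeholders
    feature otv
    otv site-vlan 99
    otv site-identifier 0x1
    interface Overlay1
      otv join-interface Ethernet1/1     ! routed interface toward the DCI
      otv control-group 239.1.1.1
      otv data-group 232.1.1.0/28
      otv extend-vlan 100-110            ! VM VLANs stretched between sites
      no shutdown

Only the extended LAN VLANs ride the overlay; the storage side is handled separately.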
Please let me know if you have any other questions.
Dave
20 REPLIES
Hi Dave,
Many thanks for your response - this helps a lot!
Interestingly enough, there are N7k's as well in this environment (northbound of N5k's), so OTV may be on the cards as well at some point.
Regards,
Radek
Hi Dave,
any info with regards to when this CVD would be available?
many thanks,
alice
Alice,
We are targeting Sept/October, but of course timelines are always subject to change.
Hi Klem,
I was wondering whether you were any closer with this? Or could you at least answer a question regarding the switches required for Fabric MetroCluster: do they have to come from Cisco, or does it not matter if they are Brocade 6510s? Will this depend on whether it is a completely new purchase or an existing FMC that is being upgraded to a FlexPod? I have been approached about a sizeable opportunity involving both of these solutions and want to make sure I get it right.
Thanks for taking the time with this.
Ruairi
The FlexPod Datacenter with NetApp MetroCluster validated design is targeted to publish October 23. It will be hosted on the Cisco.com DesignZone page and I'll have links to it from netapp.com/FlexPod as well as FlexPodPartners.com.
Hi Michael,
has the CVD been published in the meantime? I can't find it on the FlexPod websites.
Thanks, Tobi
We unfortunately had a small internal delay in the release of this document. The current targeted date is now Nov 18th.
Hi Dave,
In the CVD - is the FCoE fabric stretched between sites via OTV?
Thanks,
Radek
Radek,
No, we use dark fiber between the sites for FCoE traffic. (FCoE cannot traverse an OTV link)
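As a very rough sketch of that separation (the VLAN/VSAN and interface numbers are made-up placeholders), the FCoE VLAN is mapped to a VSAN and carried only on its own physical links, never on the overlay:

    ! Hypothetical FCoE-to-VSAN mapping on the storage-facing switch
    feature fcoe
    vsan database
      vsan 102 name FabricB
    vlan 1102
      fcoe vsan 102                  ! FCoE VLAN dedicated to this fabric
    interface vfc11
      bind interface Ethernet1/11    ! CNA-facing port; FCoE stays off OTV
      switchport trunk allowed vsan 102
      no shutdown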
Dave
Hi, any news on a date for publishing this CVD?
Thanks
Richard
It's available now on Cisco's site.
Thanks
Ruairi
Ruairi McBride
Consultant - Arrow ECS Services
Direct: +44 (0) 7432 626451
Mobile: +44 7432 626451
Email: ruairi.mcbride@arrowecs.co.uk
Great, thanks. I was looking on NetApp's site, which doesn't have it listed yet!
No problem.
I heard at Insight last week that Cisco is going to move to a quarterly release cycle for CVDs, and they hope to have several more out in January 2014.
Thanks
Ruairi
Here's a link to the published CVD. Thanks everyone for your patience!
http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/esxi51_n7k_metrocluster.html
Interesting design. Would there be a minimum number of CNAs in the filers? I'm thinking you would need dedicated ports for the FCoE traffic, one to each fabric, and then, say, a two-port LACP ifgrp running as a vPC on the Nexus side to the dual N5Ks at each site for the Ethernet traffic? Or is there no limitation at the NetApp end from this perspective?
Yes, 2 FCoE ports will be the minimum required for FCoE boot, one to each fabric like you mentioned. If card redundancy is a requirement, you'd need 2 CNAs per controller, along with FC-VI cards for MetroCluster, 10GbE cards for front end traffic (our testing will involve those ports in an IFGRP), and any other I/O cards required for your environment.
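If it helps, the front-end Ethernet side typically ends up looking roughly like the sketch below; the port names, VLAN, vPC IDs, and addresses are purely illustrative placeholders (7-Mode syntax on the controller, NX-OS on the switch):

    # NetApp controller (Data ONTAP 7-Mode) - hypothetical port names
    ifgrp create lacp ifgrp0 -b ip e1a e1b
    vlan create ifgrp0 100
    ifconfig ifgrp0-100 192.168.100.11 netmask 255.255.255.0 partner ifgrp0-100

    ! Nexus side - vPC toward that ifgrp (placeholder IDs)
    feature lacp
    feature vpc
    vpc domain 10
      peer-keepalive destination 10.0.0.2
    interface port-channel 110
      switchport mode trunk
      switchport trunk allowed vlan 100
      vpc 110
    interface Ethernet1/10
      channel-group 110 mode active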
"you'd need 2 CNAs per controller [...], 10GbE cards for front end traffic"
Okay, I thought just 2 CNAs per controller could do the trick and run both FCoE *and* 10GbE - that's the whole point of a CNA/UTA, isn't it?
Yes, generally that is the case. Our Nexus 5k versions of FlexPod use this particular configuration.
However, Cisco requires all (FCoE) storage traffic to be placed in a separate VDC from Ethernet traffic on the Nexus 7000 platform. Since VDCs are defined on a per-port basis, you'll need separate ports for FCoE traffic and Ethernet traffic, which is why we require the additional adapters.
See the illustration in our most recent N7K CVD, which shows this setup.
Also, here's a link to the VDC description in the Design Guide.
http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/esxi51_N7k_fcoe_design.html#wp508023
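As a rough illustration of what that per-port separation means in configuration terms (the VDC name and port numbers are placeholders, not taken from the CVD):

    ! From the Nexus 7000 default VDC - hypothetical dedicated storage VDC
    vdc storage1 type storage
      allocate interface Ethernet3/1-4    ! these ports carry FCoE only
    ! LAN traffic stays on ports owned by the Ethernet VDC, which is why
    ! the controllers need separate adapter ports for each role.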
Thanks again - this explains a lot!