FlexPod Discussions
Hi all,
Anyone with any thoughts re this:
Are there any references or documentation describing this? I have a slight concern that we will have UCS Fabric Interconnects across two sites on the same FC fabric (A & B respectively), and I'm not 100% sure they will behave nicely.
Thanks,
Radek
Radek,
We have been testing many different configurations in our labs involving MetroCluster within FlexPod, and the configuration you describe is essentially the version we have settled on to validate and release over the next few months.
We'll leverage MetroCluster across the dual sites for redundancy and failover, as well as 2 pairs of FIs for the server components, one pair at each site. The only major change from what you describe is that we are validating the Nexus 7K for the networking components so we can use OTV for the seamless migration of VMs and to support multiple failover scenarios. The Nexus 5K will be 100% supported and discussed in the CVD, but our main validation will focus on the N7K because of these additional features.
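For anyone unfamiliar with how OTV extends VLANs between sites on the N7K, here is a rough sketch of the kind of configuration involved. This is not taken from the CVD; the interface, VLAN ranges, and multicast groups are placeholders, and a real deployment needs matching configuration at both sites:

```
! Hypothetical OTV sketch for one Nexus 7000 site (all numbers are placeholders)
feature otv
otv site-identifier 0x1          ! unique per site
otv site-vlan 999                ! internal VLAN used for local OTV adjacency

interface Overlay1
  otv join-interface Ethernet1/1 ! uplink toward the DCI / dark fiber
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110        ! data/VM VLANs only; FCoE VLANs are NOT extended
  no shutdown
```

Note that only the Ethernet/VM VLANs ride the overlay; as discussed later in this thread, FCoE traffic cannot traverse OTV and stays on dedicated links.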
Please let me know if you have any other questions.
Dave
Hi Dave,
Many thanks for your response - this helps a lot!
Interestingly enough, there are N7Ks in this environment as well (northbound of the N5Ks), so OTV may be on the cards at some point too.
Regards,
Radek
Hi Dave,
any info with regards to when this CVD would be available?
many thanks,
alice
Alice,
We are targeting Sept/October, but of course timelines are always subject to change.
Hi Klem,
I was wondering whether you were any closer with this? Or could you at least answer a question regarding the switches required for fabric MetroCluster (FMC): do they have to come from Cisco, or is it fine if they are Brocade 6510s? Does this depend on whether it is a completely new purchase or an existing FMC being upgraded to a FlexPod? I have been approached about a sizeable opportunity involving both of these solutions and want to make sure I get it right.
Thanks for taking the time with this.
Ruairi
The FlexPod Datacenter with NetApp MetroCluster validated design is targeted to publish October 23. It will be hosted on the Cisco.com DesignZone page and I'll have links to it from netapp.com/FlexPod as well as FlexPodPartners.com.
Hi Michael,
has the CVD been published meanwhile? I cannot find it on the FlexPod websites.
Thanks, Tobi
We unfortunately had a small internal delay in the release of this document. The current targeted date is now Nov 18th.
Hi Dave,
In the CVD - is the FCoE fabric stretched between sites via OTV?
Thanks,
Radek
Radek,
No, we use dark fiber between the sites for FCoE traffic (FCoE cannot traverse an OTV link).
Dave
Hi, any news on a date for publishing this CVD?
Thanks
Richard
It's available now on Cisco's site.
R
Thanks
Ruairi
Ruairi McBride
Consultant - Arrow ECS Services
Direct: +44 (0) 7432 626451
Mobile: +44 7432 626451
Email: ruairi.mcbride@arrowecs.co.uk
Great, thanks. I was looking on Netapp's site, which doesn't have it listed yet!
No problem.
I heard at Insight last week that Cisco is going to move to a quarterly release of CVDs, and they hope to have several more out in January 2014.
Thanks
Ruairi
Here's a link to the published CVD. Thanks everyone for your patience!
http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/esxi51_n7k_metrocluster.html
Interesting design. Would there be a minimum number of CNAs in the filers? I'm thinking you would need dedicated ports for the FCoE traffic, one to each fabric, and then, say, a two-port LACP ifgrp, doing vPC on the Nexus side to the dual N5Ks at each site for the Ethernet traffic? Or is there no limitation at the NetApp end from this perspective?
Yes, 2 FCoE ports will be the minimum required for FCoE boot, one to each fabric like you mentioned. If card redundancy is a requirement, you'd need 2 CNAs per controller, along with FC-VI cards for MetroCluster, 10GbE cards for front end traffic (our testing will involve those ports in an IFGRP), and any other I/O cards required for your environment.
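To make the front-end Ethernet side of that port count concrete, a two-port LACP ifgrp on a 7-Mode controller, paired with a vPC on the Nexus side, might look roughly like the sketch below. Interface names, VLAN IDs, and addresses are all hypothetical, not taken from the CVD:

```
# Data ONTAP 7-Mode, on each controller (port/VLAN names are placeholders):
ifgrp create lacp ifgrp0 -b ip e1a e1b
vlan create ifgrp0 3170
ifconfig ifgrp0-3170 192.168.170.11 netmask 255.255.255.0 partner ifgrp0-3170

! Nexus 5K side, one member on each vPC peer (numbers are placeholders):
interface port-channel11
  switchport mode trunk
  switchport trunk allowed vlan 3170
  vpc 11
```

The key point is that these ifgrp ports are separate from the FCoE ports to the N7K; the two roles cannot share ports in the N7K design for the VDC reason explained below in the thread.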
you'd need 2 CNAs per controller [...], 10GbE cards for front end traffic
Okay, I thought just 2 CNAs per controller can do the trick & run both FCoE *and* 10GbE - that's the whole point of CNA/UTA, isn't it?
Yes, generally that is the case. Our Nexus 5k versions of FlexPod use this particular configuration.
However, Cisco requires all (FCoE) storage traffic to be placed in a separate VDC from Ethernet traffic on the Nexus 7000 platform. Since ports are allocated to VDCs individually, you'll need separate physical ports for FCoE traffic versus Ethernet traffic, which is why additional adapters are required.
See the following illustration in our most recent N7K CVD that shows this setup.
Also, here's a link to the VDC description in the Design Guide.
http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/esxi51_N7k_fcoe_design.html#wp508023
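As a rough illustration of why the ports end up dedicated, creating a storage VDC on the N7K and allocating interfaces to it looks something like the sketch below. VDC names, slots, and port numbers are placeholders, and this assumes F-series line cards plus the FCoE license:

```
! From the admin/default VDC (names and slot/port numbers are hypothetical):
install feature-set fcoe       ! requires the FCoE license and F-series modules
vdc fcoe type storage
  allocate interface Ethernet4/1-2   ! these ports now belong to the storage VDC only

switchto vdc fcoe
feature-set fcoe
```

Once an interface is allocated to the storage VDC it is no longer available to the Ethernet VDC, which is why the NetApp controllers need distinct physical ports (and potentially extra adapters) for FCoE versus front-end Ethernet.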
Thanks again - this explains a lot!