FlexPod Discussions

Cisco UCS and NetApp MetroCluster

radek_kubka
16,379 Views

Hi all,

Anyone with any thoughts re this:

  • Dual site setup with NetApp Fabric MetroCluster (two storage controllers stretched over fiber links)
  • Two Nexus 5000 switches at each site, so we have two FC fabrics, stretching both sites (as required for MetroCluster)
  • To be introduced: UCS with dual Fabric Interconnects at *each* site

Are there any references or documentation describing this? I have a slight concern about having UCS Fabric Interconnects across two sites on the same FC fabric (A and B respectively), and I'm not 100% sure they will behave nicely.

Thanks,

Radek

1 ACCEPTED SOLUTION

klem
16,340 Views

Radek,

We have been testing many different configurations in our labs involving MetroCluster within FlexPod, and the configuration you describe is essentially the version we have settled on to validate and release over the next few months.

We'll leverage MetroCluster across the dual sites for redundancy and failover, as well as 2 pairs of FIs for the server components, one pair at each site. The only major change from what you describe is that we are validating the Nexus 7K for the networking components so we can use OTV for the seamless migration of VMs and to support multiple failover scenarios. The Nexus 5K will be 100% supported and discussed in the CVD, but our main validation will focus on the N7K because of these additional features.
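For reference, extending VLANs between sites with OTV on a Nexus 7000 looks roughly like the sketch below. This is a minimal illustration only; the site identifier, interface names, multicast groups, and VLAN ranges are placeholders, not values from the CVD:

```
! Minimal OTV sketch for one N7K at site A (values are illustrative)
feature otv
otv site-vlan 99                  ! VLAN used for local OTV adjacency
otv site-identifier 0x1           ! must be unique per site

interface Overlay1
  otv join-interface Ethernet1/1  ! L3 uplink toward the DCI
  otv control-group 239.1.1.1     ! multicast group for the OTV control plane
  otv data-group 232.1.1.0/28     ! multicast range for data traffic
  otv extend-vlan 100-110         ! VM VLANs stretched between sites
  no shutdown
```

The extend-vlan range is what allows seamless VM migration between sites without re-addressing.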

Please let me know if you have any other questions.

Dave

View solution in original post

20 REPLIES


radek_kubka
16,215 Views

Hi Dave,

Many thanks for your response - this helps a lot!

Interestingly enough, there are N7k's as well in this environment (northbound of N5k's), so OTV may be on the cards as well at some point.

Regards,

Radek

achau
16,210 Views

Hi Dave,

any info with regards to when this CVD would be available?

many thanks,

alice

klem
16,210 Views

Alice,

We are targeting Sept/October, but of course timelines are always subject to change.

rorzmcgauze
15,003 Views

Hi Klem,

I was wondering whether you were any closer with this? Or if you could at least answer a question regarding the switches required for Fabric MetroCluster. Do they have to come from Cisco, or does it matter if they are Brocade 6510s? Will this depend on whether it is a completely new purchase or an existing FMC being upgraded to a FlexPod? I have been approached about a sizeable opportunity involving both of these solutions and want to make sure I get it right.

Thanks for taking the time with this.

Ruairi

mharding
15,004 Views

The FlexPod Datacenter with NetApp MetroCluster validated design is targeted to publish on October 23. It will be hosted on the Cisco.com Design Zone page, and I'll have links to it from netapp.com/FlexPod as well as FlexPodPartners.com.

btobias
15,004 Views

Hi Michael,

has the CVD been published in the meantime? I cannot find it on the FlexPod websites.

Thanks, Tobi

klem
15,004 Views

Unfortunately we had a small internal delay in releasing this document. The current target date is Nov 18th.

radek_kubka
15,005 Views

Hi Dave,

In the CVD - is the FCoE fabric stretched between sites via OTV?

Thanks,

Radek

klem
15,006 Views

Radek,

No, we use dark fiber between the sites for FCoE traffic. (FCoE cannot traverse an OTV link)
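In a design like that, each fabric's FCoE VLAN is mapped to a VSAN and carried only on the dedicated dark-fiber links between the sites. A rough NX-OS sketch, with VLAN/VSAN numbers and interfaces chosen purely for illustration:

```
! Fabric A FCoE sketch (illustrative numbering)
feature fcoe
vsan database
  vsan 101 name Fabric-A
vlan 101
  fcoe vsan 101                        ! map FCoE VLAN 101 to VSAN 101

interface Ethernet1/10                 ! dedicated dark-fiber link between sites
  switchport mode trunk
  switchport trunk allowed vlan 101    ! FCoE VLAN stays off the OTV overlay
```

Fabric B would mirror this with its own VLAN/VSAN pair, keeping the two fabrics independent end to end.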

Dave

richard_sandlan
13,665 Views

Hi, any news on a date for publishing this CVD?

Thanks


Richard

rorzmcgauze
13,666 Views

It's available now on Cisco's site.

Thanks

Ruairi

Ruairi McBride

Consultant - Arrow ECS Services

Direct: +44 (0) 7432 626451

Mobile: +44 7432 626451

Email: ruairi.mcbride@arrowecs.co.uk

Web: www.arrowecs.co.uk

richard_sandlan
13,666 Views

Great, thanks. I was looking on Netapp's site, which doesn't have it listed yet!

rorzmcgauze
13,666 Views

No problem.

I heard at Insight last week that Cisco is moving to a quarterly release cycle for CVDs, and they hope to have several more out in January 2014.

Thanks

Ruairi


klem
12,458 Views

Here's a link to the published CVD. Thanks everyone for your patience!

http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/esxi51_n7k_metrocluster.html

l_mitchell
16,209 Views

Interesting design. So would there be a minimum number of CNAs in the filers? I'm thinking you would need dedicated ports for the FCoE traffic, one to each fabric, and then say a two-port LACP ifgrp, doing vPC on the Nexus side to the dual N5Ks at each site for the Ethernet traffic? Or is there no limitation at the NetApp end from this perspective?

klem
16,209 Views

Yes, 2 FCoE ports will be the minimum required for FCoE boot, one to each fabric as you mentioned. If card redundancy is a requirement, you'd need 2 CNAs per controller, along with FC-VI cards for MetroCluster, 10GbE cards for front-end traffic (our testing will involve those ports in an ifgrp), and any other I/O cards required for your environment.
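As a rough sketch, the front-end 10GbE side of that layout might look like the following on the NetApp controller (Data ONTAP 7-Mode syntax; interface names, VLAN ID, and addresses are illustrative, not from the CVD):

```
# Two-port LACP ifgrp for front-end Ethernet (7-Mode, illustrative)
ifgrp create lacp ifgrp0 -b ip e1a e1b
# Tag the front-end VLAN on the ifgrp
vlan create ifgrp0 100
# Address the VLAN interface; partner entry supports controller failover
ifconfig ifgrp0-100 192.168.100.10 netmask 255.255.255.0 partner ifgrp0-100
```

On the Nexus side, the two member ports would land in a vPC port-channel split across the pair of N5Ks at that site, so the ifgrp sees a single LACP partner.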

radek_kubka
16,210 Views

you'd need 2 CNAs per controller [...], 10GbE cards for front end traffic

Okay, I thought just 2 CNAs per controller could do the trick and run both FCoE *and* 10GbE - that's the whole point of CNA/UTA, isn't it?

klem
16,210 Views

Yes, generally that is the case. Our Nexus 5k versions of FlexPod use this particular configuration.

However, on the Nexus 7000 platform Cisco requires all (FCoE) storage traffic to be placed in a separate VDC from Ethernet traffic. Since VDCs are assigned on a per-port basis, you'll need separate ports for FCoE traffic vs. Ethernet traffic, which is why additional adapters are required.
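On the N7K that separation looks roughly like the sketch below, run from the default VDC. The VDC name and port range are illustrative, and the exact allocation syntax varies by NX-OS release:

```
! Create a dedicated storage VDC on the N7K (illustrative)
vdc fcoe-vdc type storage
  allocate interface Ethernet3/1-4      ! ports dedicated to FCoE
  allocate fcoe-vlan-range 101-102      ! FCoE VLANs owned by this VDC
```

Once a port is allocated to the storage VDC, it is no longer visible to the Ethernet VDC, which is exactly why the controller needs separate physical ports (and hence extra adapters) for each traffic type.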

See the following illustration in our most recent N7K CVD that shows this setup.

http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/esxi51_N7k_fcoe_Clusterdeploy.html#wp521935

Also, here's a link to the VDC description in the Design Guide.

http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/esxi51_N7k_fcoe_design.html#wp508023

radek_kubka
16,210 Views

Thanks again - this explains a lot!
