ONTAP Discussions
Hello All,
Quick question. I have a FAS8200 series storage array that is in the process of being set up without professional services. I have configured ports e0a and e0b on both controllers, and they are up, as you can see below.
Now when I connect the 40GbE breakout cable to my Cisco switch, I get no activity and only an orange light on the switch.
The ports on the array are green but show no activity. Is there some trick to getting the interconnect ports to show some activity, or is there something simple I'm missing?
Keep in mind I have not run the cluster setup wizard yet, so of course no LIFs have been configured.
All of the ports on the interconnect are configured exactly the same (as they are for our working cluster), and there is no difference.
Any thoughts/suggestions?
::> network port show
Node: localhost
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M Default - up 1500 auto/1000 -
e0a Cluster - up 9000 auto/10000 -
e0b Cluster - up 9000 auto/10000 -
e0c Default - up 1500 1000/1000 -
e0d Default - down 1500 1000/- -
e0e Default - down 1500 1000/- -
e0f Default - down 1500 auto/- -
e1a Default - down 1500 1000/- -
e1e Default - down 1500 1000/- -
e2a Default - down 1500 auto/- -
e2b Default - down 1500 auto/- -
11 entries were displayed.
::> network port show
Node: localhost
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M Default - up 1500 auto/1000 -
e0a Cluster - up 9000 auto/10000 -
e0b Cluster - up 9000 auto/10000 -
e0c Default - up 1500 1000/1000 -
e0d Default - down 1500 1000/- -
e0e Default - down 1500 auto/- -
e0f Default - down 1500 auto/- -
e1a Default - down 1500 1000/- -
e1e Default - down 1500 1000/- -
e2a Default - down 1500 1000/- -
e2b Default - down 1500 auto/- -
11 entries were displayed.
You need to boot into maintenance mode and convert the ports:
Each 40G port will become 4x10G ports, and you'll see the following in ONTAP: e1a, e1b, e1c, e1d, etc.
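For reference, a minimal sketch of what that conversion looks like from maintenance mode (the nicadmin syntax here is from memory for the X1144A-style 40GbE ports, so verify it against the docs for your ONTAP release; the node needs a reboot for the change to take effect):

*> nicadmin convert -m 10G e1a
*> halt

After the reboot, e1a presents as four 10G ports (e1a/e1b/e1c/e1d).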
I'm not sure how this is going to help, based on my previous statement that I'm using ports e0a & e0b.
The ports are already set to 10G.
Also, the change you mentioned I already ran on ports e0a & e0b before I even made this post. So what other ports are you saying need to be changed to 10G, and why would that matter if they are not the ports I'm using for the interconnect?
Just to be more specific: since the cluster setup wizard has not been run yet, I'm connecting to the controllers by way of the serial (RJ-45) console port.
So with that connection, I assume that is essentially maintenance mode already. So I ran the command below to make sure the ports were at 10G, although they already were, just as you see them now.
network port modify -node localhost -port e0b -autonegotiate-admin true -duplex-admin auto -speed-admin auto
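To spot-check just those two ports after a change like that, filtering the show command should work (the -fields syntax is standard, though I'm quoting the field names from memory):

::> network port show -port e0a,e0b -fields speed-admin,speed-oper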
You asked, "Now when I connect the 40GbE breakout cable to my Cisco switch, I get no activity and only an orange light on the switch."
It's because they need to be converted to 10G ports before you can use the breakout cable.
"So with that connection I assume that is essentially maintenance mode already. " Nope, there is a boot option 5 in the boot menu that is Maint Mode.
e0a/b are the default cluster ports for this model. If you run the setup wizard (which you should do), it'll auto-generate and configure what's needed to create the cluster network from those ports.
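Kicking the wizard off from the console is just this (the prompt wording is approximate, but the create/join question is the key fork):

::> cluster setup
Do you want to create a new cluster or join an existing cluster? {create, join}: join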
Awww, I think I got what you mean now. You want me to actually change the ports on the Cisco switch to 10G.
Well why didn't you just say that LOL
Just kidding.
Let me change those now on the Cisco switch.
Nothing on the switch side... If you want to use the NetApp 40G ports, which on your system are in slots 1 and 2 (e1a/e1e and e2a/e2b), with the 40G breakout cable (breaks into 4x10G, LC-LC), you need to convert the ports via maintenance mode.
That's what I have been mentioning. I'm NOT using the slot cards. I have the breakout cable 40G connection to the Cisco Nexus 3k interconnect.
The 4x10G breakout legs are connected directly to e0a & e0b on the controllers.
So are you saying that is the wrong setup? Should I be using the e1x/e2x slot ports? I'm not using those; I'm using e0a & e0b.
Sorry, that wasn't how I read it. It looked like e0a/e0b were up and you were trying to use the 40G ports on the NetApp with breakout cables.
So you're using a Nexus 3232 as a cluster switch, and you're trying to connect e0a/b to it from each controller?
Correct, that is what I'm trying to do.
1 QSFP going to the interconnect switch.
4 Breakout cables on the other end going to e0a/b on both controllers.
See here: https://library.netapp.com/ecm/ecm_download_file/ECMLP2552649
You have to convert those as well. If they say up, though, I'd think you should just be able to run through the cluster setup wizard.
"Note: You can break out the first six ports into 4x10 GbE mode by using the interface breakout module 1 port 1-6 map 10g-4x command. Similarly, you can regroup the first six QSFP+ ports from breakout configuration by using the no interface breakout module 1 port 1-6 map 10g-4x command."
Additional info: https://library.netapp.com/ecm/ecm_download_file/ECMP1115327
And just to clarify, you will need a breakout cable on each switch. You can't just run the whole cluster off a single 40G port on a single switch.
I will probably open a ticket. What you are asking me to do I have already done. I'm not using any ports except e0a/b.
They are already properly set.
My mistake. Yes, I do have two 40G cluster switch connections, and on the other end of each one, 2x10G connections going to e0a/b.
Beyond making the change on the ports, as you can see below, I'm not sure what else is missing. You are correct, the ports show "up", so I'm not sure why the cluster wizard is not working.
::> network port show
Node: localhost
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M Default - up 1500 auto/1000 -
e0a Cluster - up 9000 auto/10000 -
e0b Cluster - up 9000 auto/10000 -
::> network port show
Node: localhost
Speed(Mbps) Health
Port IPspace Broadcast Domain Link MTU Admin/Oper Status
--------- ------------ ---------------- ---- ---- ----------- --------
e0M Default - up 1500 auto/1000 -
e0a Cluster - up 9000 auto/10000 -
e0b Cluster - up 9000 auto/10000 -
What date/time is showing on the new 8200?
And what ONTAP version?
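Both should be quick to pull from the new node's console (as far as I know, version and cluster date show work even before the node has joined a cluster):

::> version
::> cluster date show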
Not the same date/time as the cluster I'm trying to join.
backdated to like 2018?
No, it's 2019.
Sometime before July 7th?
What version of ONTAP are the existing and new systems running?
Any errors called out when you try to join?
We are using 9.2P3 on both old and new.
Yes, the date is prior to July 7th.
It says it can't find the cluster.
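For reference, a couple of checks worth running from the new node before retrying the join: correct the clock (the cluster date syntax below is per ONTAP 9; double-check the format string on 9.2), then confirm the cluster switch sees the node over CDP and that the node can reach an existing cluster LIF (substitute a real cluster LIF address for the placeholder):

::> cluster date modify -date "07/08/2019 12:00:00"
::> network device-discovery show
::> network ping -node localhost -destination <existing-cluster-LIF-IP>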
Almost forgot: do you need to configure a LIF on the newly added storage array before you add it to an existing cluster?
I was told that the cluster setup does that for you, or that it's at least done during the setup, which I somehow don't believe, but I could be wrong.
The cluster LIFs and node management LIF are created during the wizard.
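Once the wizard finishes, you can confirm what it created with the role filter:

::> network interface show -role cluster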
I was going to join the existing cluster by using the cluster name, but since I'm new to NetApp I didn't want to do that, lest I bring down the existing cluster by mistake.
We didn't have the option of professional services on this install, so that kind of doesn't help.
I would assume that since I'm doing a "join" only and not a "create", I should be fine, and all that will happen is it either joins the existing cluster or fails to join.
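For what it's worth, the join can also be pointed at a cluster LIF IP instead of the cluster name, which sidesteps any name-resolution questions (the -clusteripaddr form existed in ONTAP 9.x; worth confirming it's there on 9.2P3 before relying on it):

::> cluster join -clusteripaddr <cluster-LIF-IP-of-an-existing-node>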