ONTAP Hardware
Hello All
I have a new FAS8040; we are previous N-Series customers, so this is my first true NetApp filer. I am having problems getting the e0e 10G Ethernet port to work with trunked data.
The network definition is:
interfaces xe-0/0/16 {
    description "hur/Server/#+acc-e-server+#";
    no-traps;
    ether-options {
        no-auto-negotiation;
        no-flow-control;
        link-mode full-duplex;
        speed {
            10g;
        }
    }
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
            vlan {
                members [ 2120 2141 2270 2758 2764 2795 ];
            }
            native-vlan-id 999;
        }
    }
}
interfaces xe-0/0/17 {
    description "hur/Server/#+acc-e-server+#";
    no-traps;
    ether-options {
        no-auto-negotiation;
        no-flow-control;
        link-mode full-duplex;
        speed {
            10g;
        }
    }
    unit 0 {
        family ethernet-switching {
            port-mode trunk;
            vlan {
                members [ 2120 2141 2270 2758 2764 2795 ];
            }
            native-vlan-id 999;
        }
    }
}
I have set up the VLANs and broadcast domains, all in the same manner:
Port      IPspace  Broadcast Domain  Link  MTU   Speed (Admin/Oper)  Health
e0e       Default  Default           up    1500  auto/10000          healthy
e0e-2120  Default  Bcast-2120        up    1500  auto/10000          healthy
e0e-2141  Default  Bcast-2141        up    1500  auto/10000          healthy
e0e-2270  Default  Bcast-2270        up    1500  auto/10000          healthy
e0e-2758  Default  Bcast-2758        up    1500  auto/10000          healthy
e0e-2764  Default  Bcast-2764        up    1500  auto/10000          healthy
e0e-2795  Default  Bcast-2795        up    1500  auto/10000          healthy
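For reference, each VLAN port and broadcast domain was created along these lines (the node name fas8040-01 is made up for illustration; repeat per VLAN):

network port vlan create -node fas8040-01 -vlan-name e0e-2120
network port broadcast-domain create -ipspace Default -broadcast-domain Bcast-2120 -mtu 1500 -ports fas8040-01:e0e-2120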
I then created a test SVM on each of the VLANs to test. The results were:
VLAN Ping Test
2120 FAIL
2141 FAIL
2764 FAIL
2758 FAIL
2270 FAIL
2795 PASS
So they all failed except the last defined VLAN, 2795. My network team seems confident the network is OK.
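In case it helps anyone reproduce this, each test looked roughly like the following (SVM name, LIF name, node name, and addresses are made up for illustration):

network interface create -vserver svm_test -lif lif_2120 -role data -home-node fas8040-01 -home-port e0e-2120 -address 10.21.20.50 -netmask 255.255.255.0
network ping -vserver svm_test -lif lif_2120 -destination 10.21.20.1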
When I do a pktt trace on the failing VLAN ports, I see the outbound traffic but zero inbound.
All routing looks OK, and 'vlan stat' also shows no clues.
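For anyone wanting to repeat the trace, it was taken from the nodeshell roughly as follows (node name is illustrative; the trace file lands under the node's root volume):

run -node fas8040-01 pktt start e0e-2120 -d /etc/crash
(reproduce the failing ping, then)
run -node fas8040-01 pktt stop e0e-2120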
I am going back to our network team to ask for a re-check. However, I thought a forum
update might not be a bad idea either.
Is there any special NetApp-end setup I have missed to allow trunking of VLANs?
Many Thanks Andy Parker...
I have solved this myself. The solution was to turn on LLDP on both cluster nodes:
run -node node_name options lldp.enable on
The 10G switch we were using is a Juniper EX4550-32F running 12.3R7.7.
If anyone would like to comment on why this was so successful in fixing my issue, I would be very interested.
I am guessing it considerably speeds up the exchange of port status and availability information.
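For anyone wanting to verify the fix, the nodeshell prints the current setting if you omit the value, and the switch end should list the filer ports once LLDP is on (the second command is standard Junos, run on the EX4550):

run -node node_name options lldp.enable
show lldp neighbors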
Rgds AndyP