2016-03-07 02:24 AM
I have a FAS8020 that has 2 network ports (e0e, e0f).
I'm only using 1 Gbps switches.
I'd like to aggregate these 2 links and also be connected to two switches. It seems I can't do that on switches that are not stacked.
-> If I'm wrong and you have an idea, I'll take it!
Then my idea is to do what I did on a 7-Mode FAS2040:
- having 4 network ports
- 2 as a vif using LACP: vif1, connected to switch 1
- 2 as a vif using LACP: vif2, connected to switch 2
- create a second-level vif with vif1 and vif2, where only one is active.
In 7-Mode commands, that was:
vif create lacp VIFC2-1 -b ip e0b e0a
vif create lacp VIFC2-2 -b ip e0c e0d
vif create single VIFC2 VIFC2-2 VIFC2-1
I'm reading the network guide for cDOT 8.3 and I can't find anything like that.
Can you give me some clues on how to do that, please?
You can also tell me what you think is the best way to achieve link aggregation + switch failover!
Solved!
2016-03-07 06:35 AM
# create the ifgrp
net port ifgrp create -node <node> -ifgrp a0a -distr-func port -mode singlemode
# add ports to it
net port ifgrp add-port -node <node> -ifgrp a0a -port e0e
net port ifgrp add-port -node <node> -ifgrp a0a -port e0f
# from this point it is treated as though it's a regular interface...
# change the mtu
net port modify -node <node> -port a0a -mtu 9000
# create a VLAN
net port vlan create -node <node> -vlan-name a0a-xxx
2016-03-07 11:21 AM
2016-03-08 12:02 AM - edited 2016-03-08 12:03 AM
Andrew > Yes, I saw that, but I'd only be able to apply that to ports, not to aggregated ports as in my example, am I right? Here I'd like to have 2 aggregates, then build a single-mode ifgrp on top of those two aggregates. Also, when reading the cDOT 8.3 docs, they advise against using single-mode ifgrps.
Aborzenkov > OK, I found information about failover LIFs. This seems to be the right thing! But you say it is active/active? Does that mean I can have one port on switch 1 and the other on switch 2, with both active?
I can't really picture how to use them. I'll dig into a NetApp doc I found, the "cDOT Network Management Guide", and see what I can understand now that I know what I'm looking for!
If you have other links or more info about that, I'll take it.
2016-03-08 01:04 AM
You can still accomplish the same objective; it's just not called a multi-level vif anymore. Now it's a failover group, and it's much better. You create the two interface groups (per node), each using LACP to its own upstream switch for link aggregation, then you add those interface groups to the same broadcast domain. The ports within a broadcast domain form a system-managed failover group. If a switch goes down, the LIFs fail over to a surviving interface group. You have the option of configuring LIFs to auto-revert when the interface group comes back.
The end result is still link aggregation with switch failover, but now you can run active traffic on all the ifgrps, and LIFs can fail over to any surviving interface group (or port) in the cluster that is part of the same broadcast domain.
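For reference, the steps above might look like this in the cDOT 8.3 CLI. This is only a sketch under assumed names: node "node1", ifgrps "a0a"/"a0b", the four 1 Gb ports, and the broadcast domain "data-bd" are all illustrative, and it assumes four ports per node (two cabled to each switch).

```
# One LACP ifgrp per upstream switch (port/ifgrp names are examples)
network port ifgrp create -node node1 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node1 -ifgrp a0a -port e0a
network port ifgrp add-port -node node1 -ifgrp a0a -port e0b
network port ifgrp create -node node1 -ifgrp a0b -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node1 -ifgrp a0b -port e0c
network port ifgrp add-port -node node1 -ifgrp a0b -port e0d
# Put both ifgrps into one broadcast domain; its ports form the
# system-managed failover group for LIFs homed there
network port broadcast-domain create -broadcast-domain data-bd -mtu 1500 -ports node1:a0a,node1:a0b
```

With this in place, a LIF homed on a0a can fail over to a0b if switch 1 dies, and auto-revert brings it back afterwards.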
But how can you take advantage of that when you only have 2 ports per node at your disposal?
One option is to simply forgo link aggregation and place the individual ports into the broadcast domain. All ports are active, and if a switch goes down, LIFs fail over to surviving ports on the other switch.
Another option is to build an LACP group on each node, each going to a different switch. If a switch goes down, the LIFs fail over to the surviving interface group on the other node. This is a very risky design: if you lose a switch and the wrong node goes down for a reboot, there is nowhere for the traffic to go and you will have an outage. I only point this one out to discourage someone else from coming up with the idea.
A third option is to use single-mode interface groups. If a switch goes down, the interface group fails over to the other switch. This is less disruptive to the LIF but doesn't improve throughput to the LIF compared to option 1.
Protocols and use cases are part of the equation as well. An iSCSI environment would favor option 1, since SAN LIFs don't fail over anyway. On the other hand, a heavy CIFS environment would favor using interface groups.
I usually settle on option 1 in this scenario.
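As a rough illustration of option 1 (names are assumptions, not from the thread): the bare ports simply join one broadcast domain, and no ifgrp is created at all.

```
# Option 1 sketch: no ifgrps; the individual 1 Gb ports join one
# broadcast domain, whose member ports form the failover group
network port broadcast-domain create -broadcast-domain data-bd -mtu 1500 -ports node1:e0e,node1:e0f
# Verify the failover targets the system derived from the domain
network interface failover-groups show
```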
If you really want link aggregation with switch failover, you could add 1 Gb cards to the controllers if you have available slots. Barring that, stack the switches.
2016-03-08 02:52 AM - edited 2016-03-08 06:21 AM
Hi Sean, and many thanks for this great reply.
I've had time to read about failover groups, and yes, this really seems to be OK.
You're right: I could simply add one or two 1 Gbps interfaces (e0c and e0d are empty; I could add SFPs) and have the existing 2x1Gb LACP (e0e and e0f) fail over to this link.
One thing I don't understand in your reply: "One option is to simply forgo link aggregation and place the individual ports into the broadcast domain. All ports are active, and if a switch goes down, LIFs fail over to surviving ports on the other switch."
In this case you say that all ports are active: does that mean they will all forward data traffic? Then I wouldn't need aggregation? Too beautiful to be true! I guess I'm misunderstanding something there.
In fact, I can tell you more about the real install; maybe that suggests another solution?
This will be a stretch MetroCluster with two FAS8020s (each with only ports e0e and e0f, and one controller) and two DS2246s.
Two switches, with one 8020 and one 2246 in each room.
The NFS protocol is used by ESX servers.
I understand I can forget "aggregation + using both switches" if I can't stack the switches...
But is your second solution really that horrible? Could it apply here?
Thanks a lot for your help.
2016-03-08 07:42 AM - edited 2016-03-08 07:44 AM
It's important to understand that while all ports within the failover group can actively serve data, any given LIF can only use the port it is currently running on. This works out pretty well in a VMware-on-NFS deployment. In this case you have at least two datastores per node. Create a different LIF for each datastore, and home those LIFs on different ports within the failover group. During normal operations each datastore uses a different 1 Gb link, but if an upstream switch is down, all of the datastores stay online while sharing the surviving links. You can configure them to auto-revert to their designated home ports so things return to normal when the switch comes back up.
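A hedged sketch of that layout: two NFS LIFs, one per datastore, homed on different ports of the same broadcast domain. The SVM name, LIF names, and addresses below are made up for illustration.

```
# One data LIF per datastore, homed on different 1 Gb ports;
# auto-revert sends each LIF home once its switch/port returns
network interface create -vserver svm1 -lif ds1_lif -role data -data-protocol nfs -home-node node1 -home-port e0e -address 192.168.1.11 -netmask 255.255.255.0 -auto-revert true
network interface create -vserver svm1 -lif ds2_lif -role data -data-protocol nfs -home-node node1 -home-port e0f -address 192.168.1.12 -netmask 255.255.255.0 -auto-revert true
```

Each ESX host would then mount datastore 1 via the first address and datastore 2 via the second, so normal traffic is spread over both links.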
2016-03-08 07:47 AM
2016-03-08 09:02 AM
Ok, many thanks to both of you.
Despite my reading, this is what I hadn't understood: a failover LIF can use a port already used by another LIF as its normal port!! I thought I could only use "unused" ports as failover targets.
Really great !
So now my understanding is:
datastore 1 -> lif1 -> port e0e, and on failover the LIF uses port e0f
datastore 2 -> lif2 -> port e0f, and on failover the LIF uses port e0e
This is not an aggregate, but it will still increase throughput. For me this is just as good.
Can you confirm my understanding ?
I'll have questions about stretch MetroCluster networking, but I'll open a new thread.
2016-03-08 04:37 PM
I think you've got the idea now. Many LIFs can share a physical port, and they can move around the cluster as needed, not just in a failover scenario. Moving them around is another way to manage the traffic distribution across your physical ports. We like to keep the LIF on the same node that owns the data it's being used to access (a direct path), but if the data lives on a different node, or the LIF has failed over to a different node, we always have indirect access available across the cluster network.
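Moving a LIF around by hand might look like this (again a sketch with assumed names, matching the example LIFs above only by convention):

```
# Nondisruptively move a LIF to another node/port to rebalance load
network interface migrate -vserver svm1 -lif ds1_lif -destination-node node2 -destination-port e0f
# Later, send it back to its configured home port
network interface revert -vserver svm1 -lif ds1_lif
```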