ONTAP Discussions

e0M High Availability

JALESLIEFEIM

Hi, I have a quick design question for everyone - we're setting up a new 8020 and I'm debating where to put our management interfaces.  We have 4 10Gbps ports that will serve all data to clients and 2 1Gbps ports that we don't have current plans for.  I've set up the SP port, and the design question is whether I should put the management interface on the SP/e0M port or bond the 2 1Gbps ports and put it there instead.  NetApp seems to push for e0M, but that looks like a huge single point of failure (if that port goes down, you lose both your SP and management interfaces), which is why I'm considering bonding the 2 1Gbps ports for a dedicated management interface.  Does that make sense, or am I missing something (aside from the potential waste of dedicating two ports to just management)?

Thanks.

4 REPLIES

JGPSHNTAP

Over all my years of dealing with NetApp, I've had a hard time seeing the value-add of e0M at all; it's been excluded in all my designs. The SP is a must, but you will have better success bonding the 1Gb ports.

That said, if your 4 10Gb ports are serving all the same VLANs, I would build 2 x 10Gb LACP vifs with a single-mode vif on top of them and leave the 1Gb ports out of it. That's just me.
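If you happen to be on 7-Mode (as far as I know, clustered ONTAP doesn't support a second-level ifgrp), that layered setup would look roughly like this - the port names, vif names, VLAN ID, and IP are just examples, and the commands would also need to go in /etc/rc to survive a reboot:

    ifgrp create lacp dvif1 -b ip e1a e1b
    ifgrp create lacp dvif2 -b ip e2a e2b
    ifgrp create single svif dvif1 dvif2
    vlan create svif 100
    ifconfig svif-100 192.0.2.10 netmask 255.255.255.0 up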

JALESLIEFEIM

Thanks a lot for the response - we divide up the 10Gbps ports with 2x10Gbps in an LACP ifgrp for iSCSI/NFS traffic (over multiple private VLANs) and 2x10Gbps in an LACP ifgrp for client-facing/SnapMirror traffic (over one public VLAN, maybe more later).  We'll then have the 2x1Gbps in an LACP ifgrp for management (over one public VLAN that's not on the other public ifgrp).  I want to avoid having inactive ports, so I don't think we'll do single-mode, but I hadn't thought of putting the management VLAN on the 10Gbps ports, so I'll give that some thought.  Thanks for the idea!
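For reference, the per-node commands I'm sketching out look roughly like this (clustered ONTAP syntax; the node, port, ifgrp, and VLAN names are placeholders rather than our real ones):

    network port ifgrp create -node node01 -ifgrp a0a -distr-func ip -mode multimode_lacp
    network port ifgrp add-port -node node01 -ifgrp a0a -port e1a
    network port ifgrp add-port -node node01 -ifgrp a0a -port e1b
    network port vlan create -node node01 -vlan-name a0a-110

    network port ifgrp create -node node01 -ifgrp a0b -distr-func ip -mode multimode_lacp
    network port ifgrp add-port -node node01 -ifgrp a0b -port e2a
    network port ifgrp add-port -node node01 -ifgrp a0b -port e2b
    network port vlan create -node node01 -vlan-name a0b-200

    network port ifgrp create -node node01 -ifgrp a0c -distr-func ip -mode multimode_lacp
    network port ifgrp add-port -node node01 -ifgrp a0c -port e0a
    network port ifgrp add-port -node node01 -ifgrp a0c -port e0b
    network port vlan create -node node01 -vlan-name a0c-300

a0a would carry the private iSCSI/NFS VLANs, a0b the public client/SnapMirror VLAN, and a0c the management VLAN, with the LIFs homed on the corresponding VLAN ports.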

JGPSHNTAP

I like the idea of SnapMirror over 10Gb for sure, and I understand the other iSCSI/NFS vif. As for the 1Gb ports, I'm still not seeing the value-add here.

MATTHEW_BURCHETT

What I did for our cluster is assign the node-mgmt role to e0a and (as it is by default) e0M.  I created a failover group with e0a as primary and e0M as secondary.  This gives me management redundancy and makes the 100Mbps e0M interface the backup - it is just your management traffic, so 10Gbps or bonding the 1Gbps ports is overkill.  I am using e0b as a cluster-mgmt port on all nodes, with another failover group to ensure that I never lose cluster management access.
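Roughly, in 8.2-style syntax (the failover-groups commands changed in 8.3, and the node, vserver, LIF, and group names here are just examples):

    network interface failover-groups create -failover-group fg_node_mgmt -node node01 -port e0a
    network interface failover-groups create -failover-group fg_node_mgmt -node node01 -port e0M
    network interface modify -vserver node01 -lif mgmt1 -home-port e0a -failover-group fg_node_mgmt

    network interface failover-groups create -failover-group fg_cluster_mgmt -node node01 -port e0b
    network interface failover-groups create -failover-group fg_cluster_mgmt -node node02 -port e0b
    network interface modify -vserver cluster1 -lif cluster_mgmt -home-node node01 -home-port e0b -failover-group fg_cluster_mgmt

Same idea on each node for the node-mgmt side, so the LIF homes on e0a and only falls back to e0M if e0a goes away.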
