
HA failover configuration

Hi,

I bought my first NetApp FAS2554 with two controllers and 12 HDDs.


The system that we are developing needs to access the file system through NFS.

 

My idea is to export a folder from the NetApp and mount it from a Red Hat server.

 

My doubt is about the network configuration. My first idea was to create a LACP group spanning one Ethernet port of each controller, but I have read that this is not possible.

 

I see that each controller has its own hard drives, and I need to protect against a controller failure.

 

Example:
I set e0c for external access with IP 192.168.0.2 on the first controller. If the first controller fails, how can I configure the system to allow access to the disks managed by the first controller through the second one? Should I set the same IP on both controllers and plug them into the same switch?

 

I would appreciate any advice on this.

 

Re: HA failover configuration

Simplest way,

 

- NAS LIFs can migrate between nodes in a cluster.

 

- Create your NFS LIF on e0c and assign an IP to the LIF.
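As a rough sketch, LIF creation in the clustered Data ONTAP CLI looks like the following. The SVM name (vs1), LIF name (nfs_lif1), node name, and addresses are placeholders for your own values, and option names can vary slightly between ONTAP releases:

```
:: LIF creation sketch - vs1, nfs_lif1, node names and IPs are placeholders
network interface create -vserver vs1 -lif nfs_lif1 -role data ^
    -data-protocol nfs -home-node fas2554-01 -home-port e0c ^
    -address 192.168.0.2 -netmask 255.255.255.0
```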

 

- Cable port e0c on the partner node (ctrl2).

 

- Create a failover group (via the CLI) that contains port e0c from each controller. Once this has been created, your NFS LIF will only be able to move between these two ports.
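A sketch of that failover group, using the ONTAP 8.3-style syntax (names are placeholders; older 8.2-style releases use a per-port `-node`/`-port` form instead of `-targets`):

```
:: Failover group limited to e0c on both nodes - names are placeholders
network interface failover-groups create -vserver vs1 -failover-group nfs_fg ^
    -targets fas2554-01:e0c,fas2554-02:e0c

:: Attach the group to the LIF so it only fails over within those ports
network interface modify -vserver vs1 -lif nfs_lif1 -failover-group nfs_fg
```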

 

- Manually migrate the LIF to verify access via both ports.
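Something like the following would exercise the failover path by hand (again, vserver/LIF/node names are placeholders):

```
:: Move the LIF to the partner's e0c and confirm NFS access still works
network interface migrate -vserver vs1 -lif nfs_lif1 ^
    -destination-node fas2554-02 -destination-port e0c

:: Send the LIF back to its home port afterwards
network interface revert -vserver vs1 -lif nfs_lif1
```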

 

 

 

You could also create an interface group on each node to provide some local network resiliency:

 

- Interface group on node1: ifgrp a0a, consisting of e0c and e0d (local NIC redundancy)

- Interface group on node2: ifgrp a0a, consisting of e0c and e0d (local NIC redundancy)
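A sketch of building those interface groups on one node (repeat for the partner; node names are placeholders). Note that `multimode_lacp` requires matching LACP configuration on the switch side; `singlemode` is an active/passive alternative that needs no switch configuration:

```
:: Build ifgrp a0a from e0c + e0d on node1 - repeat for fas2554-02
network port ifgrp create -node fas2554-01 -ifgrp a0a ^
    -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node fas2554-01 -ifgrp a0a -port e0c
network port ifgrp add-port -node fas2554-01 -ifgrp a0a -port e0d
```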

 

Create a failover group called nfs and add both of the above interface groups to it. This ensures that any LIF created on these interfaces can only fail over to the specified interface groups.

 

- Create the NFS LIF with your IP and have it reside on one of the interface groups created above.
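Putting it together, the LIF would then be homed on a0a and bound to the nfs failover group, roughly like this (all names and addresses are placeholders):

```
:: LIF homed on the ifgrp, restricted to the "nfs" failover group
network interface create -vserver vs1 -lif nfs_lif1 -role data ^
    -data-protocol nfs -home-node fas2554-01 -home-port a0a ^
    -address 192.168.0.2 -netmask 255.255.255.0 -failover-group nfs
```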

 

- If a controller fails, the NFS LIF will simply migrate to the partner interface group, as defined in the failover group.
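On the Red Hat client side this is transparent: the mount targets the LIF's IP, and that IP follows the LIF wherever it lives. A minimal mount sketch, assuming a hypothetical export path and mount point:

```
# Mount the export via the LIF IP - /vol/nfs_export and /mnt/data are placeholders
mount -t nfs 192.168.0.2:/vol/nfs_export /mnt/data
```

Because the client only ever sees the one IP, a controller failure just shows up as a brief pause in I/O while the LIF moves, not as a broken mount.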