ONTAP Hardware

FAS2240 FC Support

Mhandley

Hi All,

  I have a FAS2240A-4 that we converted from 7-Mode to cDOT. Everything went fine with the conversion. We are running ONTAP 9.1P14. The issue I have is with the FC cards. The system is configured with an X1150A-R6 2-port FC card in each node. In unified ports, I see the FC ports, and sysconfig shows the card. When I create my SVM for FC, it tells me the hardware configuration is not supported. I checked the compatibility guide, and it shows the card is supported. I'm trying to figure out where my issue is. Has anyone run into this before after converting to cDOT?
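For reference, this is roughly how I have been checking the card and the port personality from the clustershell (generic prompt shown, and <nodename> is a placeholder; the exact commands may differ a bit by version):

::> network fcp adapter show
::> system node run -node <nodename> -command ucadmin show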

 

Thanks,

Mike

11 Replies

jcbettinelli

Hello,

Do you have the license for FC?
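If it helps, you can check from the clustershell with something like this and look for an FCP entry:

::> system license show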

Regards,

JC

Mhandley

Hi JC,

Yep, it's installed and active.  

 

Thanks,

Mike

lovik_netapp

How did you manage to convert to cDOT? cDOT needs 10G connectivity for the cluster interconnect, and the card you mentioned is an FC card installed in the only available slot of that hardware.

 

As per NetApp, you need the X1160 card in order to use this hardware with cDOT; the card you mentioned isn't supported.

 

The only option you have is to use iSCSI. Sorry, no FC on the FAS2240 with cDOT.

Mhandley

 So leaving the nodes standalone isn't an option in this case. This isn't going to be a production system, so are there any options available to me? Could two of the 1Gb ports be used as cluster interconnect ports?

AndreUnterberg

Hi mhandley,

 

Yes, you can use the 1Gb ports as the cluster interconnect. But keep in mind that all indirect I/O will go over these ports, which can lead to performance impacts.
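A rough sketch of what that looks like in 9.1, from memory, so please verify it against the documentation for your release (<node> and the 169.254 address are placeholders; repeat for each port on each node):

::> network port broadcast-domain remove-ports -ipspace Default -broadcast-domain Default -ports <node>:e0c
::> network port broadcast-domain add-ports -ipspace Cluster -broadcast-domain Cluster -ports <node>:e0c
::> network interface create -vserver Cluster -lif <node>_clus1 -role cluster -home-node <node> -home-port e0c -address <169.254.x.x> -netmask 255.255.0.0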

 

Kind regards

 

Andre

andris

1GbE cluster interconnects are NOT supported as of ONTAP 8.2.

If you can tolerate the single point of failure, you can run only one 10GbE link between the two nodes (switchless) and use the 2nd 10GbE port for front-end connectivity.
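If you go that route, the two-node switchless option is toggled at the advanced privilege level, roughly like this (please double-check against the docs for your release):

::> set -privilege advanced
::*> network options switchless-cluster modify -enabled true
::*> network options switchless-cluster show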

Damien_Queen

Officially it is not supported, but it will work.

Use two or four 1Gbps onboard ports as the cluster interconnect.

 

Make sure that your LUNs are always accessed by hosts over the preferred paths, through the controller that owns the LUNs; otherwise, expect the cluster interconnect to become the bottleneck.
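To see which node owns a given LUN (and therefore which target ports are the optimized paths), something along these lines should work; <svm>, <path> and <volume> are placeholders:

::> lun show -vserver <svm> -path <path> -fields volume
::> volume show -vserver <svm> -volume <volume> -fields node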

PS

Make sure your hosts are connected to the storage through FC switches; a direct FC connection is NOT going to work, and there is no workaround.

Damien_Queen

If it is possible, I would personally recommend using one 10Gbps port on each node as a data port for iSCSI/NFS/CIFS traffic, and the other 10Gbps port on each node for active cluster interconnect communication, with two or four 1Gbps ports as a passive, backup cluster interconnect. This is the best-case scenario for FAS2200 systems with cDOT.

[Image: FAS2240 cDOT cluster interconnect]

 

Mhandley

Hi Guys,

  Thanks for the suggestions. This won't be a production system, so bottlenecks and performance aren't an issue. I configured e0c and e0d as cluster ports and connected them to each other (see output below). I can ping between them just fine. I have the SAS ports connected, and the ACP ports. When I log in to each node, the partner's disks show up, but in an unknown state. I see this error in the log on both nodes: cf.takeover.disabled: HA mode, but takeover of partner is disabled.

 

If I try to enable HA, I get this:

 

CH-Netapp2240::cluster ha*> modify -configured true

 

Error: command failed: Cluster high-availability can only be enabled on a cluster with exactly two
       eligible nodes.

 

What am I missing? 

 


########################

NODE1

CH-Netapp2240::network interface> show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
CH-Netapp2240
            CH-Netapp2240-01_mgmt1
                         up/up    10.99.0.123/24     CH-Netapp2240-01
                                                                   e0M     true
            cluster_mgmt up/up    10.99.0.128/24     CH-Netapp2240-01
                                                                   e0M     true
Cluster
            CH-Netapp2240-01_Cluster1
                         up/up    169.254.20.162/16  CH-Netapp2240-01
                                                                   e0c     true
            CH-Netapp2240-01_Clusters
                         up/up    169.254.38.231/16  CH-Netapp2240-01
                                                                   e0d     true
4 entries were displayed.

 

NODE2

CH-Netapp2240::network interface> show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
CH-Netapp2240
            CH-Netapp2240-01_mgmt1
                         up/up    10.99.0.124/24     CH-Netapp2240-02
                                                                   e0M     true
            cluster_mgmt up/up    10.99.0.127/24     CH-Netapp2240-02
                                                                   e0M     true
Cluster
            CH-Netapp2240-02_Cluster1
                         up/up    169.254.110.83/16  CH-Netapp2240-02
                                                                   e0c     true
            CH-Netapp2240-02_Cluster2
                         up/up    169.254.30.207/16  CH-Netapp2240-02
                                                                   e0d     true
4 entries were displayed.

 

Damien_Queen

Probably you need to disable HA in maintenance mode and re-initialize your system (all data will be destroyed!).

Destroy all the aggregates and remove all disk ownership.

> ha-config modify controller non-ha
> ha-config modify chassis non-ha
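Before changing anything, ha-config show in maintenance mode prints the current controller and chassis HA state, if you want to confirm what you are starting from:

> ha-config show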

 

andris

DO NOT modify to non-ha in maintenance mode.

You are running a 2-in-1 chassis HA pair and that is the mode you should stay in.

 

Looks like "storage failover" HA is not happy. Try toggling it (disable/enable).

It should automatically set "cluster ha" back to true. If not, try enabling it manually again.
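Roughly this sequence (cluster show first confirms that both nodes are healthy and eligible; <node> is a placeholder for either node of the pair):

::> cluster show
::> storage failover modify -node <node> -enabled false
::> storage failover modify -node <node> -enabled true
::> storage failover show
::> cluster ha modify -configured true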

 

Worst case, halt both nodes and verify disk ownership is as expected from maintenance mode.
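From maintenance mode, ownership can be checked with something like the following; <disk_name> and <sysid> are placeholders, and only reassign if ownership is actually wrong:

*> disk show -v
*> disk assign <disk_name> -s <sysid>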

 

Better yet, open a case with NetApp Support.
