FAS and V-Series Storage Systems Discussions

FAS2240 FC Support

Hi All,

  I have a FAS2240A-4 that we converted from 7-Mode to cDOT, and the conversion itself went fine. We are running ONTAP 9.1P14. The issue I have is with the FC cards. The system has an X1150A-R6 2-port FC card in each node. In unified ports I see the FC ports, and sysconfig shows the card. But when I create my SVM for FC, it tells me the hardware configuration is not supported. I checked the compatibility guide, and it shows the card is supported, so I'm trying to figure out where my issue is. Has anyone run into this after converting to cDOT?
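A few cluster-shell checks that can help narrow down where the SVM creation fails (standard ONTAP 9.x commands; exact output and availability vary by release):

```
::> system license show -package FCP
    (confirms the FC protocol license is installed)
::> system node hardware unified-connect show
    (shows the FC/CNA personality of the unified target ports)
::> network fcp adapter show
    (shows whether the FC target adapters are online)
```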

 

Thanks,

Mike

11 REPLIES

Re: FAS2240 FC Support

Hello,

Do you have the license for FC?

Regards,

JC

Re: FAS2240 FC Support

Hi JC,

Yep, it's installed and active.  

 

Thanks,

Mike

Re: FAS2240 FC Support

How did you manage to convert to cDOT? cDOT requires 10GbE connectivity for the cluster interconnect, and the card you mentioned is an FC card installed in the only available expansion slot of that hardware.

 

Per NetApp, you need the X1160 card in order to use this hardware with cDOT, and the card you mentioned isn't supported.

 

The only option you have is to use iSCSI. Sorry, no FC on a FAS2240 with cDOT.

Re: FAS2240 FC Support

So leaving the nodes standalone isn't an option in this case. This isn't going to be a production system, so are there any options available to me? Could two of the 1GbE ports be used as cluster interconnect ports?

Re: FAS2240 FC Support

Hi mhandley,

 

Yes, you can use the 1GbE ports as the cluster interconnect. But keep in mind that all indirect I/O will go over these ports, which can lead to a performance impact.

 

Kind regards

 

Andre

Re: FAS2240 FC Support

1GbE cluster interconnects are NOT supported as of ONTAP 8.2.

If you can tolerate the single point of failure, you can run only one 10GbE link between the two nodes (switchless) and use the 2nd 10GbE port for front-end connectivity.

Re: FAS2240 FC Support

Officially it is not going to be supported, but it is going to work.

Use two or four 1Gbps onboard ports as the cluster interconnect.

 

Make sure your LUNs are always accessed by hosts over preferred paths through the controller that owns them; otherwise, expect the cluster interconnect to become a bottleneck.

PS

Make sure your hosts are connected to the storage through FC switches; a direct FC connection is NOT going to work, and there is no workaround.
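One way to verify that hosts stay on optimized paths (a sketch; the SVM name is a placeholder, and the host side assumes a Linux host with native multipathing):

```
::> lun mapping show -vserver <svm> -fields reporting-nodes
    (shows which nodes advertise paths for each mapped LUN)

# on a Linux host, check ALUA path states (active/optimized vs non-optimized):
multipath -ll
```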

Re: FAS2240 FC Support

If possible, I would personally recommend using one 10Gbps port on each node as a data port for iSCSI/NFS/CIFS traffic, and the other 10Gbps port on each node as the active cluster interconnect, with two or four 1Gbps ports as a passive, backup cluster interconnect. This is the best-case scenario for FAS2200 systems with cDOT.

[Image: FAS2240 cDOT cluster interconnect]
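Moving an onboard 1GbE port into the Cluster ipspace would look roughly like this (a sketch using ONTAP 9.x broadcast-domain syntax; the node and port names follow the thread and should be verified against your release):

```
::> network port broadcast-domain remove-ports -broadcast-domain Default -ports CH-Netapp2240-01:e0c
::> network port broadcast-domain add-ports -ipspace Cluster -broadcast-domain Cluster -ports CH-Netapp2240-01:e0c
```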

 

Re: FAS2240 FC Support

Hi Guys,

  Thanks for the suggestions. This won't be a production system, so bottlenecks and performance aren't an issue. I configured e0c and e0d as cluster ports and connected them to each other (see output below). I can ping between them just fine. I have the SAS ports and the ACP ports connected. When I log in to each node, the other node's disks show up, but in an unknown state. I see this error in the log on both nodes: cf.takeover.disabled: HA mode, but takeover of partner is disabled.

 

If I enable ha, I get this:

 

H-Netapp2240::cluster ha*> modify -configured true

 

Error: command failed: Cluster high-availability can only be enabled on a cluster with exactly two

       eligible nodes.

 

What am I missing? 
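That error usually means one of the nodes is not yet joined to the cluster or is marked ineligible. A few commands worth checking before retrying (standard ONTAP cluster-shell; output fields vary by release):

```
::> cluster show
    (both nodes should report Health: true and Eligibility: true)
::> storage failover show
    (shows the per-node takeover/giveback state)
::> cluster ha show
    (shows whether cluster HA is currently configured)
```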

 


########################
NODE1
CH-Netapp2240::network interface> show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
CH-Netapp2240
            CH-Netapp2240-01_mgmt1
                         up/up    10.99.0.123/24     CH-Netapp2240-01
                                                                   e0M     true
            cluster_mgmt up/up    10.99.0.128/24     CH-Netapp2240-01
                                                                   e0M     true
Cluster
            CH-Netapp2240-01_Cluster1
                         up/up    169.254.20.162/16  CH-Netapp2240-01
                                                                   e0c     true
            CH-Netapp2240-01_Clusters
                         up/up    169.254.38.231/16  CH-Netapp2240-01
                                                                   e0d     true
4 entries were displayed.

 

NODE2
CH-Netapp2240::network interface> show
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
CH-Netapp2240
            CH-Netapp2240-01_mgmt1
                         up/up    10.99.0.124/24     CH-Netapp2240-02
                                                                   e0M     true
            cluster_mgmt up/up    10.99.0.127/24     CH-Netapp2240-02
                                                                   e0M     true
Cluster
            CH-Netapp2240-02_Cluster1
                         up/up    169.254.110.83/16  CH-Netapp2240-02
                                                                   e0c     true
            CH-Netapp2240-02_Cluster2
                         up/up    169.254.30.207/16  CH-Netapp2240-02
                                                                   e0d     true
4 entries were displayed.

 

Forums