
2240 HA setup

VIVISIMOIT

Hi, I am setting up a 2240-4 with dual heads and 24 2TB SATA drives and have some questions. I worked with NetApp a few years ago but never in a dual-head environment. Can someone please help or point me to a good doc for this? Specifically:

I assume I set up a different IP for each head and have the failover on each point to the other?

Should I do active/passive?

How do I set up the disks? Do I assign all disks to one head?

How about the RAID group sizes? It looks like 20 is the max?

Any help is appreciated.


aborzenkov

Each head is a completely independent server, which means it must have at least a root volume (and hence an aggregate and the disks where this root volume is located) as well as its own IP address(es). Active/passive is meaningless when applied to a head; both heads are up and running and serve data if you have configured them to do it. Each resource (share, LUN) is indeed active/passive - it is served by only one head at a time. If one head fails, the other takes over all resources and continues to serve the partner's data (if properly configured).
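The 7-Mode console makes this easy to check; a minimal sketch, assuming the hostname netapp1a and that controller failover is licensed (the exact output wording varies by Data ONTAP release):

        netapp1a> cf status
        netapp1a> cf enable

cf status reports whether takeover is possible and whether the partner is up; cf enable turns controller failover on if it is not already enabled.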

VIVISIMOIT

That makes sense, thanks. So as a best practice, would I:

set up headA with one aggregate (aggr0), increase the max RAID size to 20, and leave one spare?

set up headB with one aggregate (aggr0) with the default RAID size of 14 and the default 3-disk aggr0 for vol0, and add the last disk as a spare for that aggr0?

Would this allow full failover in the event of a head failure?

Does the config sync between heads?

Thanks

VIVISIMOIT

Or, to answer my own question, would it be better to set it up like this:

headA - 11 disks in aggr0 with 1 spare

headB - same

Then share out the NFS/CIFS/iSCSI shares separately under different names like headA:/vol/vol-iscsi1, headA:/vol/vol-nfs1, headA:/vol/vol-cifs1, and the same for headB?

Sorry for the basic questions, I'm just getting up to speed on these things.

Willybobs27

Yes, I'd go with 12 disks assigned to each head: 11 in an aggregate and 1 as a spare.
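Assuming software disk ownership and a factory-default 3-disk root aggr0 on each head, that layout can be sketched from each head's console like this (hostnames and counts are illustrative; verify against your own sysconfig -r output before running anything):

        netapp1a> aggr options aggr0 raidsize 11
        netapp1a> aggr add aggr0 8
        netapp1a> aggr status -s

aggr add grows the existing root aggregate from 3 disks to 11, and aggr status -s should then show the one remaining assigned disk as a spare. Repeat on headB.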

VIVISIMOIT

I set the 2240 up as above with 12 disks in each head and followed the HA guide for setting it up. One thing that is confusing to me is the clustering/HA features. I think HA is working properly, and when I run the ha-config-check.cgi utility it runs clean. However, when testing it as per the doc, the storage failover subset of commands is not available. Is this not included in the 2240, or is it a separate license? When rebooting one of the heads, the other seems to pick up OK, so I'm not sure if I need to do anything else for HA on this unit.

rdenyer001

For full multipath failover you need to make sure that you cable the SAS ports on each controller to each other.

Extract from NetApp SAS Cabling Guidelines (https://library.netapp.com/ecm/ecm_get_file/ECMM1280392):

2240 systems in an HA pair with no external SAS storage may use single-path HA, which requires no external cabling. However, the recommended configuration is to enable Multipath HA for the internal disks by connecting the SAS ports of one controller module to the SAS ports of the partner.

VIVISIMOIT

I have the external SAS cables connected as per the drawing and the ACP ports cross-connected.
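With the cabling in place, one way to confirm that each internal disk actually has two paths is from either console (hostname is an example; output format varies by release):

        netapp1a> storage show disk -p

With Multipath HA working, every disk should list both a primary and a secondary path; a single path per disk suggests the partner cabling is not being used.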

aborzenkov

What do you mean by "storage failover subset of commands"? Showing the exact command invocation is better than a long description.

VIVISIMOIT

What I mean is that there is no "storage failover" command as referenced in the HA guide for testing failover.

netapp1a> storage failover
storage: unrecognized command "failover"
usage: storage <subcommand>
subcommands are:
        alias [ <alias> { <electrical_name> | <wwn> } ]
        disable adapter <name>
        download shelf {channel | shelf}
        download shelf -R to revert firmware to the version shipped with the current Data ONTAP version

        download acp [<adapter_name>.<shelf_id>.<module_number>]
        download acp -R to revert firmware to the version shipped with the current Data ONTAP version

        enable adapter <name>
        help <sub_command>
        show adapter [ -a ] [ <name> ]
        show disk [ -a | -x | -p | -T ] [ <name> ]
        show expander [ -a ] [ <expander-name> ]
        show bridge [ -v ] [ <bridge-name> ]
        show fabric
        show fault [ -a ] [ -v ] [ <shelf-name> ]
        show hub [ -a ] [ -e ] [ <hub-name> ]
        show initiators [-a]
        show mc [ <name> ]
        show port [ <name> ]
        show shelf [ -a ] [ -e ] [ <shelf-name> ]
        show switch [ <name> ]
        show tape [ <name> ]
        show tape supported [ -v ]
        show acp [ -a ]
        stats tape <name>
        stats tape zero <name>
        unalias { <alias> | -a | -m | -t }

        array remove <array-name>
        array modify <array-name> [-m <model>] [-n <new_name>] [-v vendor] [-p <prefix>] [-o options]
        array remove-port <array-name> -p <WWPN>
        array show [<array-name>]
        array show-ports [<array-name>]
        array show-luns <array-name> [-a] [-p <WWPN>]
        array show-config [-a]
        array purge-database

        load balance

aborzenkov

I have never heard of such a command; maybe it is Cluster-Mode related, I don't know.

VIVISIMOIT

OK, so one more question.

The default exports file entry for vol0 makes it read-only for all hosts. I'm thinking this is not necessary. Any implications of taking that away?

akw_white

Removing that export should be fine; it's purely for administrative access.
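If you do remove it, the export lives in the /etc/exports file; a hedged sketch of cleaning it up from the console (hostname and the exact default rule are examples, and syntax may vary by release):

        netapp1a> rdfile /etc/exports
        netapp1a> exportfs -u /vol/vol0
        netapp1a> exportfs -a

rdfile shows the current rules, and exportfs -u unexports vol0 immediately; removing the vol0 line from /etc/exports itself (e.g. with wrfile, or by editing the file from an admin host) keeps it from coming back after a reboot, and exportfs -a then re-exports everything that remains.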

infinitiguy

This is an old thread, so I assume you've figured it out by now, but I wonder if what you were looking for was cf takeover and cf giveback.
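For the record, the 7-Mode takeover test looks roughly like this, run from one head (hostname is an example; the storage failover command set referenced in some guides is the clustered ONTAP equivalent):

        netapp1a> cf takeover
        ... partner reboots and reports it is waiting for giveback ...
        netapp1a> cf giveback

During takeover, netapp1a serves both heads' data; cf giveback returns the partner's resources once it is back up.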
