FAS2040 Next Steps-Active/Active or Active/Passive

HEUSMANNBM

My filer is currently set up with 8 internal drives: Controller A's root is installed on 3 drives with 1 spare, and Controller B's root is installed on 3 drives with 1 spare. The Data ONTAP version is 7.3.7. So far I've only configured each e0a interface with an IP and connected it to my core switch, a Cisco 3750G.

I have 1 DS14MK2 AT shelf with 500GB drives. Each of the shelf's modules is connected via fibre to both Controller A and Controller B of the filer.

The background and question: this filer is paired with 2 HP servers supporting VMware ESXi 5.1, and redundancy isn't a big concern. I don't see any shelves being added in the future, but that might change down the road. I was going to assign 7 disks per controller and enter the cluster license key; however, can the system instead function in an Active/Passive mode, with Controller B taking over and servicing the entire shelf in the event Controller A dies? Essentially, all the shelf disks would be assigned to Controller A, with Controller B in a stand-by state.
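For illustration, something along these lines is what I had in mind from Controller A's console (just a sketch; the actual disk names and counts are whatever the shelf shows):

    disk show -n       # list the unowned disks on the DS14MK2 shelf
    disk assign all    # run on Controller A so it takes ownership of every shelf disk
    disk show -v       # confirm ownership; B would keep only its 4 internal disks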

Suggestions welcomed. Thanks.

-BH


didier_thill

Hi Bryan

It depends:

Do you want more usable disk space or more load balancing?

Maximizing disk space by creating one aggregate using all disks from the shelf will give you 11 data disks (11D+2P+1S), but that controller will be much more heavily used than the other one. You will not be able to spread the load among the controllers.

Creating an aggregate per controller will give you 2x 4 data disks (4D+2P+1S each): less usable space, but you will be able to balance I/O across both controllers (datastore1 on ctl1: Active-Passive, datastore2 on ctl2: Passive-Active, and so on).
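As a rough sketch from the console, assuming the 14 shelf disks have already been assigned to the right owner(s) with 'disk assign', and with aggregate names that are only examples:

    # Option 1: one big aggregate on ctl1 (11 data + 2 parity, 1 disk left as spare)
    aggr create aggr1 -t raid_dp 13

    # Option 2: a smaller aggregate on each controller (4 data + 2 parity, 1 spare each)
    aggr create aggr1 -t raid_dp 6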

The 'cluster' license is, however, needed in both cases.
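Roughly, on each controller it amounts to this (the license code is whatever NetApp supplied with your system, shown here only as a placeholder):

    license add <cluster-license-code>   # placeholder for the real code
    cf enable                            # enable controller failover on the pair
    cf status                            # confirm both heads can see each other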

bheusmann

Thanks for the information.

I'm not hugely concerned about load balancing. I was leaning more towards maximizing space. I've never configured any of my NetApps in a cluster configuration. The ones I have in my datacenter supporting a testing lab are single-controller units, mostly serving just one or two LUNs for VMware ESX hosts. So the cluster is a tad bit new to me. I have a 'cluster' license.

In scenario 1 mentioned above (maximizing space), would Controller B take over in the event Controller A died or malfunctioned, and service its data?

In scenario 2 (load balancing), would this allow 2 different LUNs to be offered to the hosts from 2 separate aggregates, or would the data be the same on both aggregates?

Once I've decided on and configured the aggregates, I would need to figure out which interfaces get which IPs and which VLAN on the core switch, then the cluster configuration, I'm assuming... which I would appreciate any help on.

Thanks.

didier_thill

Bryan,

For scenario 1, yes, of course, Controller B would take over from Controller A in case of a failure (or an upgrade). This requires you to keep the current disk configuration of that controller (Controller B's root installed on 3 drives with 1 spare).
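On a real failure the takeover is automatic once failover is enabled; for something planned like an upgrade, a quick sketch of the commands run on Controller B would be:

    cf status      # check that takeover is possible
    cf takeover    # B takes over A's disks and identity
    cf giveback    # hand everything back once A is healthy again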

For scenario 2, the data will not be the same:

For example: esx_datastore_os_1 on lun_os_1 on vol_os_1 on aggr1 on ctl1, and esx_datastore_os_2 on lun_os_2 on vol_os_2 on aggr1 on ctl2.

NetApp controllers do not share aggregates, but they are able to see each other's disks in case of a takeover.
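In 7-mode commands, that layout on ctl1 would be built with something like the lines below (volume/LUN sizes, the igroup name, and the initiator IQN are illustrative placeholders), and the same pattern would be repeated on ctl2 for vol_os_2 / lun_os_2:

    vol create vol_os_1 aggr1 500g
    lun create -s 450g -t vmware /vol/vol_os_1/lun_os_1
    igroup create -i -t vmware esx_hosts iqn.1998-01.com.vmware:esx01
    lun map /vol/vol_os_1/lun_os_1 esx_hosts 0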

HEUSMANNBM

Thanks, that explains it perfectly, and I'm glad my gut assumption was correct. I think in this situation scenario 1 is the best solution. I will leave Controller B's root installed on its 3 drives with 1 spare, as well as Controller A's root installed on the first 3 drives with 1 spare, leaving the shelf as the data store.

Are there any special hints or recommendations you can give to ease the cluster setup? As I said, I have never configured NetApp clustering, and scenario 1 above will be my first venture into having a secondary standby controller take over in the event of a failure of A.

Thanks.

-BH

martin_fisher

Hi Bryan - How many interfaces does your current setup have? If you are not bothered about redundancy, then you would realistically only need 1 interface for management and 1 for ESX/VMware connectivity (via iSCSI I imagine, if you are using only an Ethernet fabric). In total that is 4 interfaces, 2 on each controller/appliance/head.

The "RC" file which holds all the boot up configuration would also need configuring for each appliance, specifiying the Interface configuration, ip address, gateway etc and its partner interface, so in a HA pair, in the event of issue, the other appliance can take over.

martin

HEUSMANNBM

Hi Martin,

Currently I have 4 Ethernet interfaces on each controller (A and B) for a total of 8. I have a Cisco 3750G core switch with separate VLANs for iSCSI, management, and internal LAN/VM traffic. Would 2 management / 2 iSCSI interfaces be recommended? I do not have a Brocade SAN for this setup, only the Ethernet fabric as you mentioned. I also have an NFS license, but to this point have only used iSCSI on the ESX hosts in the lab environment.

-BH
