Active/active newbie

fajarpri2

Hi all,

I have an IBM N3600 with dual controllers.

ONTAP 7.3.3

I've been reading the active/active configuration doc, and it assumes a config with 2 filers.

My questions:

1. Am I correct to assume that we can set up an active/active cluster with only 1 filer that has 2 controllers, so that if one controller fails the other one can take over?

2. To set up active/active, does it mean I have to set up one pool for each controller, so that the amount of usable storage space is basically halved? Or am I wrong here?

3. What is the summary of steps for setting up active/active in my case? Some example commands would be appreciated ^^

Thank you so much for the help.

4 REPLIES

fajarpri2

OK, I am amused here.

After reading the aaconfig doc three times... my conclusions are:

1. The way the doc is arranged makes it very confusing for a newbie!

It talks about MetroCluster without giving a clear indication of what to do if we just want a SINGLE filer with two controllers. Well, actually it does talk about it, but only in one small paragraph, hidden in between SUPER COOL features which I don't use.

2. However, I'm happy to say that with the help of that one paragraph and a little imagination, setting up the active/active cluster on the filer turned out to be pretty straightforward in my case. Thanks to IBM and NetApp for this.

fajarpri2

Fri Jan 21 17:52:12 SGT [n3600a: cf.fsm.takeoverOfPartnerEnabled:notice]: Cluster monitor: takeover of n3600b enabled
Fri Jan 21 17:52:12 SGT [n3600a: cf.fsm.takeoverByPartnerEnabled:notice]: Cluster monitor: takeover of n3600a by n3600b enabled

Now that clustering is set up, I wonder how to test it.

1. Set up an iSCSI LUN on the n3600a node and serve it to an ESXi host (a command sketch is below).

2. Try to halt the n3600a node and see if n3600b takes over, and whether access to the iSCSI LUN stays uninterrupted?
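
For step 1, something like this should do it on n3600a (the volume, LUN, aggregate, and igroup names below are just examples, and the IQN has to be replaced with the real one from the ESXi host):

iscsi start (make sure the iSCSI service is running)
vol create vol_esx aggr0 100g (a volume to hold the LUN)
lun create -s 50g -t vmware /vol/vol_esx/lun0 (a 50 GB LUN with the vmware ostype)
igroup create -i -t vmware esx_hosts iqn.1998-01.com.vmware:esx1 (an iSCSI igroup for the ESXi initiator)
lun map /vol/vol_esx/lun0 esx_hosts 0 (present the LUN to ESXi as LUN ID 0)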

Thanks for any help

virtuastor

I would do a controlled test failover using cf takeover while monitoring the server that is using the iSCSI LUNs.

It's probably good to get a bit of activity going on the server to see the impact. This can just be a simple copy job from local disk to the iSCSI-mapped LUN during the takeover; observe what happens on the server.
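
Roughly, the controlled sequence would be (node names taken from the posts above; run the takeover from the partner):

cf status (on n3600b: confirm the cluster is enabled and n3600a is up)
cf takeover (on n3600b: it takes over n3600a's identity and serves its LUNs)
cf giveback (on n3600b, once n3600a has rebooted and is waiting for giveback)

Keep the copy job running the whole time; a short pause on the server during the takeover is normal.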

Have the filer CLI console open so you can see the messages in real time as the takeover happens.

You will see the iSCSI initiator sessions re-establishing following the takeover.

Ensure you have installed the Host Utilities Kit on the server, which will set all of the optimal settings, including the disk timeout settings. This is critical: if the timeout is incorrect (or set too low, e.g. 60 seconds) and the filer takeover takes 62 seconds, then you could lose access to the LUN.

The Host Utilities Kit is a free download from the NOW website.

martin_fisher

hi fajarpri2

You could in theory have 2 appliances/controllers using 1 shelf of storage, as most new filers and recent versions of ONTAP use software-based ownership for the disks, an alternative to hardware-based ownership. You could in theory split a shelf of 14 disks, with probably 12 being used for aggregates (6 and 6) and 2 as spares. If you used RAID-DP, though, you would only have 4 data disks per aggregate, so the usable space could be quite small!
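
As a rough sketch, the split could look like this (the disk names below are only examples; check 'disk show -n' for your actual unowned disks):

disk show -n (list the disks that have no owner yet)
disk assign 0a.16 0a.17 0a.18 0a.19 0a.20 0a.21 0a.22 -o n3600a
disk assign 0a.23 0a.24 0a.25 0a.26 0a.27 0a.28 0a.29 -o n3600b
aggr create aggr1 -t raid_dp 6 (on each node: 2 parity + 4 data disks, leaving 1 disk per node as a spare)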

If you have set up your cluster and want to test the failover, you can use 'cf takeover', or, to simulate a more realistic scenario, connect to the console of each filer, enter advanced mode, and issue the panic command.

i.e.

"priv set advanced"

then enter "panic"

Issue the panic command on the storage system you want to panic and have taken over by its partner.
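
So a full panic test would look something like this (n3600a/n3600b as in this thread):

priv set advanced (on n3600a, the node you want to fail)
panic (n3600a dumps core and reboots; n3600b should take over automatically)
cf status (on n3600b: confirm it has taken over n3600a)
cf giveback (on n3600b, once n3600a has rebooted and shows 'waiting for giveback')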

Good Luck

Martin