Hi all,

I have a NetApp FAS3240 MetroCluster in production and I'm looking to buy another NetApp storage system to expand our total capacity. To plan properly around our OpenVMS setup, failover and the new storage, I'm looking for more information about exactly what happens, on a technical level, during a takeover/failover in the MetroCluster.

Can anyone point me to documentation on how HA/failover works? The more detail the better. In particular:

- How does one controller take over from the other? Multipathing, vifs? Is it just a multipath failover, or do the controllers use virtual interfaces so that one can take over the partner controller's MAC address?
- Can a controller serve data directly from the failed partner's disks (it has a path to those disks), or will it always use the local mirror of the partner's data?

I've tried searching for information and I've tried NetApp University, but haven't found anything sufficient for my needs.

Hope you can help, and thank you very much. Have a nice day.

Regards,
Nicolai
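PS: To make the question more concrete, this is roughly what I'm asking about (7-Mode syntax from memory, so please correct me if the commands or the partner keyword below are wrong; the addresses and interface names are just examples):

cf status                                                   # is takeover currently possible, and is the partner up?
ifconfig e0a 10.0.0.10 netmask 255.255.255.0 partner e0a    # does the surviving node bring up the partner's address/identity on this interface during takeover?
vif status                                                  # and what happens to vifs during a takeover?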
Hi all,

I was wondering where the best place is to keep the Data ONTAP 8 root volume. At the moment the root volume sits in an aggregate together with other volumes. Would it be better to place it in its own small aggregate?

The immediate reason is that I'm going to destroy that same aggregate to create a new one, so the root volume will have to be moved (or be destroyed with it). For future work I was also wondering whether it wouldn't be better to keep the root volume in its own aggregate, so I can avoid having to move or destroy it again. I would appreciate a comment or a pointer to where I can read more.

At the same time, I need to move the root volume off the aggregate I'm destroying. Can I move it to another aggregate and make Data ONTAP boot from there, then create the new aggregate and move the root volume back? Is that the easiest way? Again, I would appreciate a comment or a reference to where I can read more.

Thank you

Regards,
Nicolai
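PS: The rough sequence I had in mind looks like this (command names are from memory and the aggregate/volume names and sizes are just examples, so please correct me if this is not how a root volume is normally relocated in 7-Mode):

aggr create aggr_temp -t raid_dp 5        # temporary aggregate to hold the root volume
vol create vol0_new aggr_temp 250g        # new volume for the Data ONTAP root
ndmpcopy /vol/vol0 /vol/vol0_new          # copy the contents of the current root volume
vol options vol0_new root                 # mark the new volume as the root volume
reboot                                    # boot from the new root volume
(then destroy and recreate the original aggregate, and repeat the steps in the other direction)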
Hi all,

I have a FAS3240 MetroCluster where node A performed a level 2 watchdog reset, causing a panic and a takeover of node A. The system rebooted, I issued a giveback, and everything is up and running again. The watchdog reset was a single incident, so no worries there for now.

However, I want to run system diagnostics on node A. The problem is that if I do a takeover of node A, I can no longer reach that node over SSH. Is there a way to boot node A into diagnostics without shutting the whole system down? And if I simply reboot node A, will node B automatically take over while node A reboots, and will node A still be reachable over SSH in the meantime?

Any suggestions?

Thank you

Regards,
Nicolai
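PS: This is the sequence I think I would need, but I'm not sure it actually lets node B keep serving data while node A is in diagnostics (commands are from memory, so please treat this as a sketch):

cf takeover          (on node B: take over node A)
boot_diags           (on node A's console via the SP/RLM, from the LOADER prompt: boot the diagnostics environment)
boot_ontap           (on node A when finished: boot back into Data ONTAP and wait for giveback)
cf giveback          (on node B: return node A's resources)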
Hi all,

I read this in another post:

"Shared space boundary - all volumes in an aggregate share the hard drives in that aggregate. There is no way to prevent the volumes in an aggregate from mixing their data on the same drives. I ran into a problem at one customer that, due to regulatory concerns, couldn't have data type A mixed with data type B. The only way to achieve this is to have two aggregates."

This makes me confused about the purpose of RAID groups. I thought RAID groups were the boundary for volumes, but if volumes mix their data across all the disks in an aggregate - despite the aggregate having two RAID groups - what exactly is the purpose of a RAID group?

In my case the concern is that I want a disk boundary between two volumes, where one volume is used for database data and the other for application data. The goal is that a service searching through the database - heavy use of the disks - and a service using the application data never hit the same physical drive at the same time, which I assume would otherwise increase latency.

Thank you

Regards,
Nicolai
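PS: If separate aggregates really are the only hard disk boundary, is the setup below the way to achieve what I describe? (Just a sketch; the disk counts, names and sizes are made up.)

aggr create aggr_db -t raid_dp 12      # dedicated disks for the database volume
aggr create aggr_app -t raid_dp 12     # dedicated disks for the application volume
vol create vol_db aggr_db 500g
vol create vol_app aggr_app 500g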