hi, i have strange behaviour on my HA pair. when i run

snap::> disk show -disk 4.11.* -fields aggregate
disk    aggregate
------- ---------------------
4.11.0  aggr0
4.11.1  aggr0
4.11.2  aggr0
4.11.3  aggr_snap_n02_sata_02
4.11.4  aggr_snap_n02_sata_02
4.11.5  aggr_snap_n02_sata_02
4.11.6  aggr_snap_n02_sata_02
4.11.7  aggr_snap_n02_sata_02
4.11.8  aggr_snap_n02_sata_02
4.11.9  aggr_snap_n02_sata_02
4.11.10 aggr_snap_02_system
4.11.11 -
12 entries were displayed.

But

snap::> aggr show -aggregate aggr0
Error: show failed: Aggregate "aggr0" does not exist.

So now I can't use the disks attached to this "ghost" aggregate. How can I remove the ownership of these disks from aggr0?
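In case it helps, this is roughly what I was planning to check and run next. Disk 4.11.0 is just an example, and I'm not sure the removeowner step is safe in this situation, so please tell me if I'm on the wrong track:

snap::> storage disk show -disk 4.11.* -fields owner,container-type,aggregate
# check whether ONTAP sees these disks as part of an aggregate, spare, foreign, etc.

snap::> storage aggregate show -fields aggregate,nodes
# confirm that neither node still has a root aggregate actually named aggr0

snap::> storage disk removeowner -disk 4.11.0
# remove ownership so the disk can be reassigned (one disk at a time)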
We utilize an authentication source that produces passwd and netgroup files for UNIX-based authentication. We are currently importing the files manually with the following commands from the ONTAP console:

vserver services name-service unix-user load-from-uri -vserver <vserver> -uri <path to file>
vserver services name-service unix-group load-from-uri -vserver <vserver> -uri <path to file>

I would like to automate this process in some form, but so far I have not been able to come up with a solution. Looking for any thoughts: REST, local scripts that can be scheduled, etc.
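One idea I've been toying with is driving the same CLI commands from a scheduled script on a Linux host via the REST API's CLI passthrough. This is only a sketch, not something I've validated: the /api/private/cli path and the parameter spelling are my assumptions, and the cluster name, account, SVM, and file URIs are placeholders, so it would need checking against the REST API docs for our ONTAP version:

#!/bin/sh
# Hypothetical cron-driven script on a Linux admin host.
CLUSTER="cluster-mgmt.example.com"     # placeholder cluster management LIF
AUTH="automation-svc:REPLACE_ME"       # placeholder account with rights to run these commands

# Run the same load-from-uri commands through the (assumed) REST CLI passthrough.
curl -sk -u "$AUTH" -X POST \
  "https://$CLUSTER/api/private/cli/vserver/services/name-service/unix-user/load-from-uri" \
  -H "Content-Type: application/json" \
  -d '{"vserver": "svm1", "uri": "http://fileserver.example.com/exports/passwd"}'

curl -sk -u "$AUTH" -X POST \
  "https://$CLUSTER/api/private/cli/vserver/services/name-service/unix-group/load-from-uri" \
  -H "Content-Type: application/json" \
  -d '{"vserver": "svm1", "uri": "http://fileserver.example.com/exports/group"}'

If anyone has done this with Ansible or the PowerShell toolkit instead, I'd be interested in that approach too.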
Hi,

What is the preferred method of performing a SnapMirror reverse resync after a DR scenario where we want to keep the data that has been written to the volumes?

I have two clusters, Cluster1 in the Primary DC and Cluster2 in the Secondary DC, where under normal circumstances the volumes on Cluster2 are the SnapMirror destinations. In a DR scenario all SnapMirror volumes in Cluster2 are now broken-off and read/write, and we have written data to the volumes in Cluster2.

I've read the following process on https://www.flackbox.com/netapp-snapmirror-data-protection (possibly the old way to do it) where the SnapMirror is deleted from Cluster2, recreated on Cluster1 with Cluster2 as the source and Cluster1 as the destination, and then resynced. Once the sync is done, the SnapMirror is deleted from Cluster1 and then recreated back on Cluster2, resulting in Cluster2 once again being the SnapMirror destination. Is this no longer the recommended method to follow?

The second process that I have read is that once the SnapMirror is broken-off on Cluster2, a SnapMirror resync is performed on Cluster1, making Cluster2 the source and Cluster1 the destination. The process is discussed here: https://vmstorageguy.wordpress.com/2018/01/07/how-to-netapp-ontap-reverse-snapmirror-cdot/ Is this now the recommended way to run the reverse resync?

I am aware that the resync can be done in System Manager, but I will be performing it via the CLI.

Thanks,
Ben
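For clarity, this is the sequence I understand the second approach to involve, using hypothetical SVM and volume names (svm1:volA on Cluster1, svm1_dr:volA_dst on Cluster2). As I understand it, resync can establish the reverse relationship as long as a common Snapshot copy exists, but please correct me on any step:

# On Cluster2 (already done during the DR failover):
Cluster2::> snapmirror break -destination-path svm1_dr:volA_dst

# On Cluster1, resync in the reverse direction so Cluster1 becomes the destination
# and the data written on Cluster2 is preserved:
Cluster1::> snapmirror resync -source-path svm1_dr:volA_dst -destination-path svm1:volA
Cluster1::> snapmirror show -destination-path svm1:volA

The part I'd most like confirmed is the fail-back afterwards (update/quiesce/break, resync in the original direction, and cleaning up the temporary reverse relationship), since that seems to be where the two articles differ.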
I have frequently used the NetApp Docs tool over the past few years to check our FAS and AFF systems. However, today I noticed that I could no longer find a new version of the tool in the Tools & Security section. Therefore, my question is: Is it still advisable to use this tool, or will it eventually become obsolete? I find the tool very practical and would miss it if it were no longer available. Are there any alternatives that you might recommend?
I just recently upgraded from 8.3.2 to 9.1P20 and started to see these alert messages being sent.
Message: cpeer.addr.warn.host: Address 172.31.x.x is not any of the addresses that peer cluster <vserver> considers valid for the cluster peer relationship; Details: An introductory RPC to the peer address "172.31.x.x" failed to connect: RPC: Remote system error - No route to host. Verify that the peer address is correct and try again.
Description: This message occurs when a stable IP address for the peer cluster is no longer valid, making the peer cluster unreachable and causing cross-cluster operations to fail.
I'm sure something changed during the upgrade, but how do I narrow down what's causing this issue?
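These are the checks I was planning to start with, using placeholder LIF and peer names (the exact ping syntax is from memory, so please correct it if it's off):

# Compare the stable peer addresses ONTAP has stored with what the peer actually uses:
cluster1::> cluster peer show -instance
cluster1::> cluster peer health show

# Confirm the intercluster LIFs and basic reachability to 172.31.x.x:
cluster1::> network interface show -role intercluster
cluster1::> network ping -lif ic_lif1 -vserver cluster1 -destination 172.31.x.x

# If a stored peer address is stale, correct it on the relationship:
cluster1::> cluster peer modify -cluster <peer-cluster-name> -peer-addrs <current-intercluster-IPs>

Does that look like a reasonable starting point, or is there a better way to trace this back to the upgrade?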