Yes, you have to stop the applications before creating the clone. Actually, I'd remove the shares before creating clones as well, to make sure no one can access these folders. After the clone is created, recreate the shares pointing to the new volume.
You could start with the Snap Manager for Exchange documentation ☺ specifically with the disaster recovery chapter: https://now.netapp.com/NOW/knowledge/docs/SnapManager/relsme602/html/software/admin/GUID-141E8E86-AB9F-4585-87C2-69D1AC4EA0FE.html Current versions of SME automate many tasks that had to be done manually before; you could review this KB (that is exactly how I did it in the past): https://kb.netapp.com/support/index?page=content&id=1011643. This KB also contains some references to Microsoft documentation. And of course Microsoft has detailed guides on Exchange disaster recovery planning and implementation. Data replication is just a tiny (albeit quite important) part of it.
Exactly. Please do not forget that replicating (consistent) data is just one part of an overall disaster recovery setup. There is quite a bit to be done on the Microsoft side as well to be able to run Exchange at the DR site.
That’s not going to work, at least not as simply as you expect. Either you use the consistency group API to create write-order-consistent snapshots across the three volumes (e.g. utilizing SnapCreator), or you need to be quite careful about the order in which you create the snapshots and do a manual recovery of the database on the remote side. The simplest way for you would probably be Snap Manager for Exchange, which integrates with SnapMirror and automates database recovery on the remote side.
I would expect Data ONTAP to panic due to multiple disk failures in the aggregate. The second plex cannot be used because it is stale, so you suddenly lose your aggregate. P.S. Just tested in the simulator, and it panics indeed. I would be greatly surprised if anything else happened.
Every snapshot is always of the whole volume. While it is possible to restore individual files from snapshots, it is a quite slow process; to get the full benefit you have to separate different databases into different volumes. This enables you to do a volume-level snap restore, which is almost instantaneous for any data size. There are additional requirements for how the various parts of a database (data files, logs, etc.) should be distributed across different volumes; they are documented in the respective Snap Manager manuals.

SM SAP cares only about the database. You have to use some different method to back up the other parts (/sapmnt, /usr/sap etc). Snap Manager products do not use snapshots taken by external means, or for volumes not containing databases; you have to manage those yourself.
Are you trying to mark each file individually and recover it? With such a large number of files I’d really rather try to recover at the save set level, without going into each file separately. Or at least mark a small number of top-level directories.
The obvious checks: both ports of each multi-VIF are connected to the same switch (or switch stack), the switch ports are properly configured for static link aggregation, and all four ports on the switches are configured in the same VLAN. The addresses look very strange: 169.41.6.128 in a network with mask 255.255.255.192 has all host bits zero (it is the network address itself), which is not a valid host address. There could also be routing issues. Please show the output of “netstat -rn”.
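For reference, the subnet arithmetic behind that remark can be checked with Python's standard ipaddress module (addresses taken from the post):

```python
import ipaddress

# 255.255.255.192 is a /26: 64 addresses per subnet.
net = ipaddress.ip_network("169.41.6.128/26")

print(net.network_address)    # 169.41.6.128 -- host bits all zero
print(net.broadcast_address)  # 169.41.6.191

# Valid host addresses are everything in between.
hosts = list(net.hosts())
print(hosts[0], hosts[-1])    # 169.41.6.129 169.41.6.190
```

So any host on that subnet must use an address between .129 and .190; .128 itself identifies the network.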
Something similar won't work: perfstat.exe on Windows expects plink.exe for SSH access. Actually, plink.exe is all that is needed, but on its own it does not support passwordless authentication to the filer. To set up public key authentication, puttygen.exe is additionally needed. See https://kb.netapp.com/support/index?page=content&id=2011414 (and the article it refers to) for detailed examples.
The link is working, but it is accessible to partners only. There is nothing in this thread that warrants restricted access; it would be nice if an admin could move it to some public place.
“I'm afraid I have run out of ideas for the moment ... Could there possibly be a setting I'm missing that allows me to directly fibre-attach the tapes so that it detects the tape drive and the media changer LUN? I could then test without the switch involved.”

Does your library offer an FC interface mode configuration (usually AL/Arbitrated Loop and Fabric/Point-to-Point)? In that case it has to be set to Arbitrated Loop for a direct connection.
With these versions of Data ONTAP and ESX it is actually recommended to use ALUA, which no longer requires setting a preferred path.
Not using “snap restore”. You can either ndmpcopy it on the NetApp side, or just access the snapshot using CIFS/NFS (whatever is available) and copy the file using the client.
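From the client side, such a restore is just an ordinary file copy out of the hidden .snapshot directory. A minimal Python sketch; the temporary directory below only simulates an NFS mount of the volume, and the snapshot and file names are hypothetical:

```python
import os
import shutil
import tempfile

# Stand-in for the NFS mount of the volume (on a real client, e.g. /mnt/vol1).
mount = tempfile.mkdtemp()
snapdir = os.path.join(mount, ".snapshot", "nightly.0")
os.makedirs(snapdir)
with open(os.path.join(snapdir, "file.db"), "w") as f:
    f.write("contents as of last night")

# Restoring one file = copying it out of the read-only snapshot back into the volume.
shutil.copy2(os.path.join(snapdir, "file.db"), os.path.join(mount, "file.db"))
print(open(os.path.join(mount, "file.db")).read())  # contents as of last night
```

No filer-side command is needed for this; the .snapshot directory is visible to clients as long as it is not hidden on the export/share.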
Use ndmpcopy to copy the data from the snapshot to another volume. You can also use “vol copy”; it will additionally transfer all snapshots earlier than the selected one.
You have to give (as the mountpoint) the exact path to the folder you want to share. The folder must exist; the filer does not create it for you. You can either create a qtree on the filer, or share the top-level volume (which you have effectively done) and create the directories below it from a (Windows) client.
When you say “replication”, do you mean SnapMirror? If yes, then:

1. Yes in general. There are some limitations when using VSM (you cannot replicate to a lower Data ONTAP version, nor between 32-bit and 64-bit aggregates).
2. Yes, both sync and async SnapMirror are supported.

I am not sure I understand the third question. Async SnapMirror works by creating a snapshot on the source and transferring the delta between it and the previous snapshot. Maybe this answers your question. Hmm … yes, there are storage systems that can do async replication without explicitly or implicitly creating snapshots. If this is what you mean, then async SnapMirror is snapshot replication ☺
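To illustrate the async transfer mechanism mentioned above, here is a toy model (this is not NetApp code, just the concept): each update transfers only the blocks that differ between the previous and the new source snapshot.

```python
# block number -> block contents, as captured by two successive source snapshots
prev_snap = {0: "A", 1: "B", 2: "C"}
curr_snap = {0: "A", 1: "B2", 2: "C", 3: "D"}  # block 1 changed, block 3 is new

# Only the delta crosses the wire.
delta = {blk: data for blk, data in curr_snap.items()
         if prev_snap.get(blk) != data}
print(sorted(delta))  # [1, 3]

# Applying the delta to the destination (which holds the previous snapshot)
# reproduces the new snapshot exactly.
dest = {**prev_snap, **delta}
assert dest == curr_snap
```

This is also why the common snapshot pair matters: without the previous snapshot on both sides, there is no baseline to compute the delta against, and a full (re)baseline transfer is required.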
“1 hour later site A goes down. Site B has node majority together with FSW. Q2. How do OCSR/WFSC react in this case? Does any automatic failover happen?”

“In this case Windows will fail over applications to site B, and OCSR will fail over storage to the controller at site B.”

That is exactly what I do not buy. Effectively this means that customer applications all of a sudden lost one hour's worth of data, without the customer even knowing it or having any possibility to intervene. Having the data available at another site is of little help here as soon as at least one transaction based on stale information takes place.
It was me who asked this question. Unfortunately I am still confused. Let's start with your configuration above. We have a MetroCluster split brain: both FC lines between the sites are broken. On the WFSC level everything is green: the network works, the heartbeat beats, all 5 nodes know about each other. All applications continue to run (because both heads are still serving data, each from its own plexes).

Q1. How do OCSR/WFSC react in this case? Does any automatic failover happen?

One hour later site A goes down. Site B has node majority together with the FSW.

Q2. How do OCSR/WFSC react in this case? Does any automatic failover happen?
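For Q2, the WFSC vote arithmetic can be sketched as follows (assuming the usual one vote per node plus one for the file share witness; this is a simplification that ignores dynamic quorum behaviour):

```python
# Hypothetical 2+2+FSW cluster: two nodes per site plus a file share witness.
site_a_nodes, site_b_nodes, fsw_votes = 2, 2, 1

total_votes = site_a_nodes + site_b_nodes + fsw_votes  # 5 votes
majority = total_votes // 2 + 1                        # 3 votes needed for quorum

# Site A goes down: site B still counts its own nodes plus the witness.
surviving_votes = site_b_nodes + fsw_votes
print(surviving_votes >= majority)  # True -> site B retains quorum
```

So arithmetically site B keeps quorum and can host the applications; whether the data it then serves is current is exactly the concern raised in this thread.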
chown is generally restricted to root only, for security reasons. This happens even before the kernel knows which type of file system is involved. For chmod I am not sure. Usually I do not use CIFS between two systems that can both use NFS ☺
“How do I go with resync in the direction of DR to primary, and force it to override primary updates with DR updates on pri_vol?”

You just run snapmirror resync on the primary, specifying DR as the source.