ONTAP Discussions

snapmirror reverse resync

RATNATHURAI
My intention was to SnapMirror volumes from A-->B and then reverse resync (B-->A). I created SnapMirror relationships for two volumes from source to destination (A-->B), broke the mirrors, and mounted the LUNs on those volumes to two servers in site B. In site B, one server uses SnapDrive and the other uses the MS iSCSI initiator. I noticed that the SnapMirror relationship changed from broken-off to uninitialized on the volume that hosts the LUN for SnapDrive, but the volume that hosts the LUN for the MS iSCSI initiator is still in the broken-off state. I was expecting the SnapMirror relationship on the SnapDrive volume to stay broken-off so I could reverse resync using OnCommand System Manager. I am wondering why the SnapDrive volume changed from the broken-off to the uninitialized state?
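For reference, the rough CLI equivalent of what I did in System Manager (a sketch only - 7-Mode syntax assumed; siteA, siteB and vol_lun1 are placeholder names, not my real ones):

siteB> snapmirror status vol_lun1
siteB> snapmirror quiesce siteB:vol_lun1
siteB> snapmirror break siteB:vol_lun1

and then, for the reverse resync, on site A (which becomes the new destination):

siteA> snapmirror resync -S siteB:vol_lun1 siteA:vol_lun1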
4 REPLIES

aborzenkov

I wonder - was this your question from just recently? As was determined there, SnapDrive reverts the volume to a consistent snapshot, which deletes the base SnapMirror snapshot. As long as some other common snapshot still exists, it should still be possible to resync, though.
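A quick way to check is to compare the snapshot lists on both volumes and, if a common snapshot is still there, try the resync from the CLI. A sketch only (7-Mode syntax assumed, placeholder filer and volume names):

siteA> snap list vol_lun1
siteB> snap list vol_lun1

If at least one snapshot name appears in both lists, the reverse resync can be attempted on site A (the new destination):

siteA> snapmirror resync -S siteB:vol_lun1 siteA:vol_lun1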

RATNATHURAI

Yes, it is related to the recent post; I am still working on it. I think SnapDrive mounts the LUN from a consistent snapshot and deletes the others - that is normal SnapDrive behavior. I noticed that the volume still has the common snapshot that SnapDrive created, yet the relationship goes to the uninitialized state instead of staying broken-off. The MS iSCSI initiator does not delete any snapshots, since it does not care whether a snapshot is consistent or not.

RATNATHURAI

I am using a single LUN per volume, and the link below states that if you use a single LUN in a volume, the SnapMirror relationship will be uninitialized. Is there any workaround to get the Resync option back in System Manager?

https://communities.netapp.com/community/netapp-blogs/msenviro/blog/2011/04/24/using-snapmanager-for-hyper-v-smhv-for-disaster-recovery
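If System Manager will not show the Resync option, the same operation can usually still be attempted from the CLI on the filer that should become the new destination (a sketch only - 7-Mode syntax, placeholder names):

siteA> snapmirror resync -S siteB:vol_lun1 siteA:vol_lun1

If no common snapshot remains, the resync will refuse to run and the relationship has to be re-initialized (snapmirror initialize) instead.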

NetApp_SEAL

I just completed a project that required migrating several volumes (containing LUNs) from a source site to the DR site, where the volumes at the DR site would become the "new source" side. The volumes contained LUNs (SQL) that were mounted to VM guests via in-guest iSCSI. SnapDrive, the NetApp DSM, HUK, etc. were all installed on the guest VMs.

The volumes all had existing SnapMirror relationships from source to destination. When the time came to migrate (we leveraged SRM for the migration of the VMs themselves to the new source site), we performed the following steps (NetApp tasks using System Manager, mind you - it's best to use the CLI and script it for multiple LUNs; there is a rough CLI sketch after the list):

(Note - I understand your scenario is different. Just want to use this as an example)

1) Create respective iGroup(s) and LUN mappings at destination site (can do this with current SnapMirror destinations)

2) Perform final SnapMirror update from current source to destination

3) Quiesce SnapMirror relationship

4) Break current SnapMirror relationship

5) Migrate VM (in this case - skip otherwise)

6) Establish iSCSI session from guest VM to new target IQN

7) Assuming the LUN IDs are correct, rescan disks and they will come up the same as they were prior to migration

8) Delete the former iSCSI session (to the former site's IQN)

9) Reboot VM to validate persistence

10) In System Manager, right-click the former SnapMirror relationship (which is still in a "broken off" state) and select "Reverse Resync". This creates a SECOND set of SnapMirror relationships (but it leverages the former source volume and its data for replication)

11) Once the new SnapMirror relationship has completed its replication, delete the former (still "broken off") SnapMirror relationship
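For the NetApp-side steps above, the rough CLI equivalent looks something like this (a sketch only - 7-Mode syntax; the igroup, IQN, filer and volume names are placeholders, not the ones from our project):

siteB> igroup create -i -t windows ig_sqlvm iqn.1991-05.com.microsoft:sqlvm01    (step 1)
siteB> lun map /vol/vol_sql/lun_sql ig_sqlvm 0    (step 1)
siteB> snapmirror update siteB:vol_sql    (step 2)
siteB> snapmirror quiesce siteB:vol_sql    (step 3)
siteB> snapmirror break siteB:vol_sql    (step 4)
siteA> snapmirror resync -S siteB:vol_sql siteA:vol_sql    (step 10, run on the former source, which becomes the new destination)
siteA> snapmirror release vol_sql siteB:vol_sql    (step 11, drops the old A-->B relationship from the source side)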

With that in mind - it's important to ensure that no snapshots get deleted during this process on either end (especially the base snapshot); otherwise, you could end up having to completely re-initialize the SnapMirror relationship from the "new source" site to the "new DR" site. I've done this by accident before on a 20 TB CIFS volume. Trust me...it sucks (thankfully there was a 10 Gb link between sites).

We did this for several servers over the course of a migration that spanned several months. We never experienced an issue with a "single-LUN-per-volume" configuration (which is the best practice anyway, especially for SQL; Exchange is a little different). Databases came up clean on the "new source" site and there was never any data corruption.

If you used SnapDrive to mount the new LUN and it deleted other snapshots, it sounds like the SnapMirror base snapshot might have been deleted. Perhaps? If so, this would result in the "uninitialized" SnapMirror state after an update or resync was attempted (SnapMirror would fail and then reflect the "uninitialized" state, since it couldn't find a common base snapshot).
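A quick way to confirm would be to look at the relationship detail on the destination side - in 7-Mode the long status output includes the base snapshot (volume name here is a placeholder):

siteB> snapmirror status -l vol_lun1

If the "Base Snapshot" it reports no longer exists on both volumes, that would explain the uninitialized state.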

How much data are you talking about, and over what speed of link? Is it an issue to just create a new SnapMirror relationship and let it do its thing (pointing to a new destination volume and leaving the former source volume around for "safe keeping" until any retention windows have passed)?
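If you do go the "new destination volume" route, it's just a normal baseline: create and restrict a destination volume, then initialize. A sketch only (7-Mode syntax; the aggregate, volume names and size are placeholders):

siteA> vol create vol_sql_new aggr1 500g
siteA> vol restrict vol_sql_new
siteA> snapmirror initialize -S siteB:vol_sql siteA:vol_sql_new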

If you have to perform the task again, I would recommend trying the steps I noted above (if you wish) rather than using SnapDrive.

Hope this helps!

-Trey
