Active IQ Unified Manager Discussions
Hi,
We've just migrated from an old FAS 3020 to our new FAS 3140. Data synchronisation was done with SnapMirror. On the new filer we disabled SnapMirror with "snapmirror off" and broke the relationship for each volume with "snapmirror break" to get read/writable volumes. The old FAS 3020 was shut down, and the new FAS 3140 got the same name and IP address.
Now "snapmirror status" shows all the mirrored volumes to be "broken-off" with source and destination on the same filer name:
adnnfs03> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
adnnfs03:app adnnfs03:app Broken-off 01:42:37 Idle
adnnfs03:aww adnnfs03:aww Broken-off 01:42:31 Idle
adnnfs03:cvs adnnfs03:cvs Broken-off 01:42:26 Idle
adnnfs03:doc adnnfs03:doc Broken-off 01:42:22 Idle
adnnfs03:home adnnfs03:home Broken-off 01:42:18 Idle
adnnfs03:install adnnfs03:install Broken-off 01:42:06 Idle
adnnfs03:java adnnfs03:java Broken-off 01:41:58 Idle
adnnfs03:nightly adnnfs03:nightly Broken-off 01:41:49 Idle
adnnfs03:nightlybuild adnnfs03:nightlybuild Broken-off 01:41:45 Idle
adnnfs03:pkg adnnfs03:pkg Broken-off 01:41:38 Idle
adnnfs03:private adnnfs03:private Broken-off 01:41:31 Idle
adnnfs03:transfer adnnfs03:transfer Broken-off 01:40:28 Idle
adnnfs03:xpository adnnfs03:xpository Broken-off 01:40:36 Idle
How can one get rid of these entries? I have already removed the schedules from snapmirror.conf. The "snapmirror release" command says this:
adnnfs03> snapmirror release app adnnfs03:app
snapmirror release: app adnnfs03:app: No release-able destination found that matches those parameters. Use 'snapmirror destinations' to see a list of release-able destinations.
Maybe I should have done this before breaking SnapMirror and shutting down the old filer. The migrated volumes are online and read/writable.
Thanks in advance for help.
Best regards,
Bernd Nies
Welcome to the forum.
If it is the issue I think it is, all you have to do is delete the snapshots that were used to snapmirror the data. They should show "snapmirror" in their status.
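Something like this, for example (the volume, serial number, and snapshot names here are made up; 7-mode SnapMirror base snapshots are typically named <filer>(<serial>)_<volume>.<n> and are marked "(snapmirror)" in "snap list"):

```
adnnfs03> snap list app
Volume app
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  May 10 12:00  adnnfs03(0101234567)_app.2 (snapmirror)

adnnfs03> snap delete app adnnfs03(0101234567)_app.2
```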
Brendon
Yes, delete the base snapshots and remove all entries in /etc/snapmirror.conf that still reference the old relationship.
NetApp should really do something about this, e.g. add a "snapmirror destroy" command that does all of that, because if you have multiple relationships from one volume to others (or even syncs in both directions after a snapmirror resync from destination to source) it gets complicated quickly.
The easiest way to determine the correct snapshot to delete is to do a manual "snapmirror update" and then delete all but the newest base snapshot on the affected volumes.
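For illustration, an /etc/snapmirror.conf entry for one of these relationships would look something like this (the schedule fields are made up); any such lines for the old relationships should be deleted:

```
# source          destination     arguments  schedule (minute hour dom dow)
adnnfs03:app      adnnfs03:app    -          0 23 * *
```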
-Michael
Hi,
Thanks for your answers. I found that out by coincidence after I had deleted the old snapshots.
Best regards,
Bernd
Hey
the "snapmirror release" command will do this. you run it on the source filer and it does all the clean up.
cheers
shane
Hi. I had to do a "snapmirror abort -h <snapshot_name>" to solve this.
It was a hard time, but I'm glad I did it.
For the future: OnCommand System Manager does this job very well; it is nice and clean in all "places".
I've also noticed that the PowerShell cmdlets for SnapMirror work better in these situations as well, in particular the Remove-NaSnapmirror cmdlet. It actually gave me an error saying the SnapMirror relationship didn't exist, but it cleared it out of the "snapmirror status" list, which was the result I was looking for. Get-NaSnapmirrorDestination also gives you more useful information about the SnapMirror destinations: it tells you which snapshot is maintaining the SnapMirror relationship.