2011-11-21 01:23 PM
I have a problem on a NetApp filer: I cannot seem to delete/release some old SnapMirror relationships. The destination volumes no longer exist.
I've tried the following:
-Deleting all SnapMirror-related snapshots in the volume.
-Releasing the volume from FilerView, but the action is grayed out since it is a source volume.
-Running snapmirror release on all the volumes, without success. It keeps saying the volume doesn't exist, is restricted, or is offline.
We are running Data Ontap 8.0.1P4 7-Mode. Any ideas ?
Thanks in advance!
PS.: I've attached a screenshot of what it looks like in the filerview as well.
PPS.: The destination filer doesn't have any of those volumes since they were deleted.
Source                             Destination                                                                        State    Lag   Status
nastorage3:/vol/vol_P_SAS_DS003/-  nastorage2:/vol/VMwareBackup_backup_15/VMwareBackup_nastorage3_vol_P_SAS_DS003    Source   -     Idle
nastorage3:/vol/vol_P_SAS_DS004/-  nastorage2:/vol/VMwareBackup_backup_16/VMwareBackup_nastorage3_vol_P_SAS_DS004    Source   -     Idle
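For completeness, the release commands I ran from the source looked roughly like this (from memory; paths taken from the status output above — note the "/-" on the source side):

```
nastorage3> snapmirror release /vol/vol_P_SAS_DS003/- nastorage2:/vol/VMwareBackup_backup_15/VMwareBackup_nastorage3_vol_P_SAS_DS003
nastorage3> snapmirror release /vol/vol_P_SAS_DS004/- nastorage2:/vol/VMwareBackup_backup_16/VMwareBackup_nastorage3_vol_P_SAS_DS004
```

That is what keeps failing with "volume doesn't exist, restricted or offline".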
2011-11-21 02:02 PM
Did you edit /etc/snapmirror.conf on the target to remove the entry? That is the only thing besides snapshots that can cause a status to show. I did run into a rare BURT a while ago where we had a stale registry entry and support had to talk the customer through removing it; I will see if I can find the KB/BURT on that issue. In that case there were no snapshots, no conf-file entry, and no target volume, yet the relationship still showed until the registry was modified to remove the entries.
2011-11-21 02:19 PM
I couldn't find the BURT, but here is a similar one, #195220: http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=195220. Don't modify anything in the registry without support on the phone, ideally with a WebEx. If you have no SnapMirror snapshots and no snapmirror.conf entries, then it may be an inconsistency in "registry walk state.snapmirror.status".
2011-11-21 03:31 PM
I am new, so please forgive in advance any group etiquette I may violate. I recall a similar situation and have two thoughts. In my case I had deleted the destination volume before breaking the relationship. After trying a few things, I recreated the volume, broke the relationship, and then blew the volume away. I'm also interested to know (apologies if I missed it) what error, if any, you get when you run the snapmirror break command.
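For a volume SnapMirror destination, the sequence I used went roughly like this, run on the destination filer. Names, aggregate, and size below are placeholders, not yours; for qtree SnapMirror the destination volume stays online and initialize creates the qtree itself, so the restrict step wouldn't apply:

```
dstfiler> vol create sm_dest aggr0 100g
dstfiler> vol restrict sm_dest
dstfiler> snapmirror initialize -S srcfiler:src_vol dstfiler:sm_dest
dstfiler> snapmirror break sm_dest
dstfiler> vol offline sm_dest
dstfiler> vol destroy sm_dest -f
```

The point is just to get the relationship into a state the filer recognizes so break/release have something real to act on.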
2011-11-21 05:11 PM
Recreating the target volume is a good idea. In the case we had, the target system didn't exist anymore, but it might work here: recreate the volume, re-establish the mirror, and then break it.
2011-11-24 09:28 AM
Thank you for your answers. I did try to recreate the volume, without any success. There is no entry in snapmirror.conf, and snapmirror break doesn't do anything; it just states the volume doesn't exist.
I'll look at the link in your post and see if it helps. I guess I'll have to involve the NetApp support team on this one.
2011-11-24 10:27 AM
It sounds like the registry issue. Make sure to work with support on it; it requires a reboot and an edit of the registry. Running "priv set advanced" and then "registry walk status.snapmirror" will show the entries.
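To be clear, just looking at the entries is read-only and safe; it's deleting or setting registry keys that should only be done with support on the line. Something like this (generic filer prompt, not your hostnames):

```
filer> priv set advanced
filer*> registry walk status.snapmirror
filer*> priv set admin
```

If orphaned entries for the dead relationships show up in that walk, that confirms the diagnosis before you open the case.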
2011-11-25 01:45 PM
Ok, a couple of additional thoughts. I am assuming, since you created a new destination, that the destination volume still exists. If that is the case, are you able to do a successful snapmirror update? If that relationship is working, it suggests SnapMirror is still in play. Also, have you tried offlining the target volume? As I said, I saw this one time, and the key was having the relationship between source and destination working in some fashion.
You also indicated that the destination filer is no longer operational. Are you able to re-establish the mirror with the same destination name but on a different filer?
2011-11-25 02:35 PM
Did support get back to you on this? I don't think recreating the relationship will fix it: there are no snapshots or conf-file entries to make it report, only the orphaned registry status. It's definitely worth a try to see if recreating clears it, but it doesn't sound like it will work in the current state.
2011-11-29 02:14 AM
Have you tried running snapmirror status on the source to see whether the relationship still exists there, or has it been removed?
Jumping into the CLI on the destination, can you double-check that the destination volume does not exist when you type vol status? Also, looking at your SnapMirror config above, it is definitely a volume that you are snapmirroring to, and not a qtree, right?
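Concretely, something along these lines on each side (filer and volume names taken from the status output earlier in the thread; your output will obviously vary):

```
nastorage3> snapmirror status
nastorage3> snapmirror destinations
nastorage2> vol status VMwareBackup_backup_15
nastorage2> snapmirror status
```

snapmirror destinations shows what the source still remembers, which is what release works against. And note that a destination path of the form /vol/volname/qtreename, as in your status output, would mean qtree SnapMirror, in which case release and break need the full qtree paths rather than bare volume names.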