ONTAP Discussions

Need to destroy aggr containing snaplock compliance volume in DR Environment

jkoelker01

Here's the scenario. This SnapLock Compliance volume and its containing aggregate are on my Disaster Recovery SAN. Our production SAN snapmirrors its SnapLock volume to this DR one. That worked fine until we initiated a DR test: I took all the snapmirrored volumes out of SnapMirror and put them into production mode for us to test against. I was unaware that once you stop snapmirroring a SnapLock Compliance volume, you can no longer resume it or catch it up afterward. I am in a bind now trying to figure out my options. I have extra disks here that I can put into the FAS2240-2.

Here's my thought: what will the system do if I simply pull the disks that are in that SnapLock aggregate? Will I then be able to delete the aggregate since the disks aren't there? Can I then put new disks in and set up the DR aggregate again? The new disks won't be SnapLocked, correct? I appreciate any help you can provide, as I am in a bind. Thanks.

System information: FAS2240-2 running Data ONTAP 8.1.1 7-Mode.
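For reference, the DR cutover was the usual volume SnapMirror break sequence, roughly like the following (the volume name is just an example; the SnapLock Compliance destination is the one that can't be resynced afterward):

    dr-filer> snapmirror status                      check the current mirror relationships and their state
    dr-filer> snapmirror quiesce dr_snaplock_vol     let in-flight transfers finish and pause the mirror
    dr-filer> snapmirror break dr_snaplock_vol       make the destination writable for the DR test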


5 REPLIES

JGPSHNTAP (Accepted Solution)

OK, first things first: are you using SnapManager for Exchange (SME)? What version of Exchange are you running? I'm just trying to figure out your DR test.

The best way I've gotten rid of a SnapLock aggregate is to pull all the disks associated with the aggregate. Make sure you disable clustering first, because once the aggregate is in a failed state the system will try to fail over. You will need to reboot the system to fully remove the aggregate.
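Roughly, that sequence on 7-Mode looks like this (the aggregate name dr_slc_aggr is only an example):

    dr-filer> cf disable        turn off controller failover so the failed aggregate doesn't trigger a takeover
    (physically pull every disk that belongs to dr_slc_aggr)
    dr-filer> reboot            after the reboot the aggregate should no longer appear in "aggr status"
    dr-filer> cf enable         re-enable failover once the aggregate is gone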

jkoelker01

No, I am not using SME with Exchange. This aggregate is on a separate controller and only holds our imaging files for records purposes. All of my other traffic, NFS for my VMware servers, CIFS, etc., is on the other controller in separate aggregates and volumes. So once I pull the disks associated with the SnapLock Compliance aggregate and reboot, will it then let me destroy that aggregate without any repercussions? I've never done anything like this, so I'm just gathering as much knowledge as I can from others who have. Thanks again for replying, I really appreciate it.
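In the meantime I'm planning to confirm exactly which disks belong to that aggregate before pulling anything, roughly with:

    dr-filer> aggr status -r      show the RAID layout of each aggregate; note the disk IDs listed under the SnapLock aggregate
    dr-filer> sysconfig -r        cross-check the same disk IDs against shelf and bay numbers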

JGPSHNTAP

No, the beauty of this is that once you pull the disks, the entire aggregate disappears. You don't have to worry about destroying it; it's gone.

Then put those disks aside, because they can never be used again.
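Once the new disks are in, rebuilding the DR side would look roughly like this; the names, size, and disk count are examples, and note that a SnapLock Compliance SnapMirror destination still has to live in a SnapLock Compliance aggregate, so the new aggregate is created with the compliance option:

    dr-filer> aggr create dr_slc_aggr -L compliance 5       new SnapLock Compliance aggregate on 5 spare disks
    dr-filer> vol create dr_snaplock_vol dr_slc_aggr 500g   destination volume (size is an example)
    dr-filer> vol restrict dr_snaplock_vol                  a volume SnapMirror destination must be restricted
    dr-filer> snapmirror initialize -S prod-filer:prod_snaplock_vol dr-filer:dr_snaplock_vol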

jkoelker01

Well, I am that much closer thanks to your instructions. The problem I am running into now is that each controller thinks the disks I pulled and replaced are owned by its partner, so neither one will claim them. I tried running disk assign -s unowned 0a.00.23 and disk assign -s unowned 0b.00.23 on both controllers, but I always get "disk assign: Disk 0a.00.23 is not owned by this node." or "disk assign: Disk 0b.00.23 is not owned by this node." depending on which controller I am on. How do I get the controllers to recognize that neither of them currently owns the 5 disks? I really appreciate your help. Thanks.
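Something like the following should show what each controller believes about ownership (run on both heads):

    dr-filer> disk show -v      list every disk with the owner this controller has recorded for it
    dr-filer> disk show -n      list only the disks this controller considers unowned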

jkoelker01

Well, I figured that out. I was able to run disk assign 0a.00.23 -s unowned -f, and after that I could assign the disk without issues. Thanks for all the help.
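For anyone hitting the same thing, the force flag is what clears the stale ownership record; roughly (the disk ID is from this thread, the target owner name is a placeholder):

    dr-filer> disk assign 0a.00.23 -s unowned -f     force the stale ownership record back to unowned
    dr-filer> disk assign 0a.00.23 -o dr-filer       then assign the now-unowned disk to the controller that should own it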
