ONTAP Discussions

How to remove a SnapVault relationship whose volume has already been destroyed

netappmagic

The SnapVault relationship of a qtree under this volume is still there, but the volume on both sides has already been destroyed. What are the steps to remove the stale relationship?

Appreciate your detailed steps! Thank you!


JGPSHNTAP

Interesting... so someone didn't follow the right process.

Try typing

snapvault stop
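
For example, on the secondary, assuming the orphaned destination qtree lives at /vol/sv_dstvol/qtree1 (substitute your own volume and qtree names), it would look something like:

snapvault stop /vol/sv_dstvol/qtree1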

netappmagic

I have already tried "snapvault stop qtreename" on the destination, and it prompts me with the following message. However, it did not do anything, just hung there forever, and the status still shows "idle".

The secondary qtree will be deleted.
Further incremental updates will be impossible.
Data already stored in snapshots will not be deleted.
This may take a long time to complete.
Are you sure you want to do this? y

I also tried "snapvault release" on the SRC, but it gives me a message saying that the destination doesn't exist.
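
For reference, the release syntax I was using was roughly this, with placeholder names rather than my real paths:

snapvault release /vol/srcvol/qtree1 dstfiler:/vol/sv_dstvol/qtree1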

Also, the volume got destroyed on the SRC, but still exists on the DEST.

Anything else I can try, please?

JGPSHNTAP

Do you still need the DST volume? If not, offline/destroy it.
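
Roughly, on the destination, with your own volume name in place of sv_dstvol (vol destroy will ask for confirmation):

vol offline sv_dstvol
vol destroy sv_dstvol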

netappmagic

I need to keep the DST volume for a month.

It turns out that the "stop" command took about 12 hours to complete.

cscott

I have had snapvault stops that take 5+ hours. If this is a large volume or a volume with a high file count, or the trifecta (a large volume of 700 GB+ with a high file count and many small files, i.e. a high inode count), the process is very slow. For some volumes we have to connect through the RLM/SP, change the timeout value, and just let it run.
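
For anyone following along, the session timeout knobs on 7-Mode are the autologout options (values in minutes; the numbers below are only an example):

options autologout.console.timeout 720
options autologout.telnet.timeout 720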

- Scott

DRUMDUDESAN

Hi,

I am curious, what is your file count on the said volumes? (Use the NetApp PowerShell cmdlet Get-NaVol to get the files used.)
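
Something along these lines from the NetApp PowerShell Toolkit, with filer01 and vol_data as placeholders for your controller and volume:

Import-Module DataONTAP
Connect-NaController filer01
Get-NaVol vol_data | Select-Object Name, FilesUsed, FilesTotal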

Is this a Java development shop? I have seen Java shops take down the best of file systems, as I see millions of small files and folders. It is the nature of Java's organizational technique for preventing namespace conflicts, and maybe other reasons. Nevertheless, it has always been problematic when accessing it from Windows-based file systems, and to boot, Windows file systems will always have anti-virus installed, which adds more overhead on this structure. I always recommend ext3 or JFS2 file systems for Java programming. If I recall correctly, the Java class file structure is the worst culprit: many, many tiny 1-4 KB files and many, many folders, nested in the thousands.

If this is not your scenario and you are mainly dealing with office documents, then that should not be the case.

Of course, there is more to this than meets the eye; your underlying architecture, mainly focusing on the IOPS, may need to be evaluated.

My 2 Cents

Jeff

DRUMDUDESAN

Hi,

Wait a minute, snapvault is block based. There is something else at play that is causing this huge purge time in snapvault stop. But 5-12 hours? That is odd.

Can you describe your systems and environment and post a sysconfig? E.g. FC or iSCSI, drive types, number of drives, etc.
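
For example:

sysconfig -a
sysconfig -r

(-a gives the full hardware rundown, -r the RAID group / disk layout.)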

" stop [ -f ] secondary_qtree

Available    on    the    secondary    only. Unconfigures the qtree so there will be no more updates of the qtree and then deletes the qtree from the active file system. The deletion of the qtree can take a long time for large qtrees and the command blocks until the deletion is complete. The qtree is not deleted from snapshots that already exist on the secondary. However, after the deletion, the qtree will not appear in any future snapshots. To keep the qtree indefinitely, but stop updates to the qtree, use the snapvault modify -t 0 command to set the tries for the qtree to 0. "
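
So if the goal were only to freeze updates while keeping the destination qtree in place, the alternative per that man page would look something like this (placeholder path):

snapvault modify -t 0 /vol/sv_dstvol/qtree1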

Jeff

cscott

Snapvault is NDMP-based and transfers blocks keyed off inodes, meaning it has to do an inode walk before it can do anything with the volume. A stop requires an inode walk to process each file for deletion, since it is working through the block map, so again inodes are in play. Running snapvault stop is actually deleting the files and qtrees on the destination, so we have to read the entire bitmap; many large files read quickly, many small files read more slowly.

NDMP is extremely sensitive to inode/file counts; this is not uncommon at all in high file count environments. I have 700G volumes with inode counts that are at 90%; if you try to initialize a vault on a volume that big with inodes that high, the inode walk takes almost five times the actual time to transfer the data. If you were to watch an initialize on a volume like this, it constantly goes from X MB transferred to xxxxx of xxxxxx inodes transferred.
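
If you want to see where a given volume sits, df -i on the controller shows inodes used versus free per volume, e.g. (vol_data being whatever volume you are vaulting):

df -i /vol/vol_data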

These issues are the basis of why the new Snapvault engine in cDOT is so much faster and more efficient.

- Scott
