ONTAP Discussions

moving a snapvault qtree to a new volume with free space



We have two 3140 filers running Data ONTAP 8.0 in 7-Mode, one for the data and one for storing the SnapVault backups. We have large sets of data, so we created a couple of 16 TB flex volumes with 3 qtrees/shares in each volume. I'm sure that sooner or later we'll need to move a qtree on the backup filer to a new volume with more free space. Our retention policy is to keep 90 daily snapshots on the backup filer.

What's the best way to move a SnapVault qtree from one volume to another? I've seen different approaches, like establishing a new SnapVault backup into a new volume and simply keeping the old snapshots in the old volume. In the knowledgebase I found an interesting article, but it requires SnapMirror.

How to move a destination Qtree SnapMirror or SnapVault from one volume to another without requiring a new baseline transfer between primary and secon...


What's the easiest way to move the SnapVault qtree and keep the existing snapshots?



Use the Secondary Space Management Feature of DFM 4.0.

It will help you move any kind of relationship, but the granularity is at the volume level: only entire volumes can be moved, not individual qtrees.




That won't help. We need to move the qtree. The idea was:

several 16 TB volumes, each with 3x 5 TB qtrees/shares -> backed up to an identically sized 16 TB SnapVault volume (where A-SIS will be enabled once ONTAP 8.0.1 is out).

If the space for snapshots in the SnapVault volume reaches its limit -> move one of the SV qtrees to a volume with more free space, or create a new one.

Now it's not clear to me how to achieve this. It looks like a task that should be easy to do.


I've looked into different solutions now, but none of them seems to do what I need.

We have large amounts of data, which is why we create large shares. This is what we do now:

on primary

- set up large aggregates (40 TB)

- create volumes with a size of 16 TB (snap reserve 5%, 8 nightly snapshots)

- create 3 qtrees/shares in each volume

- set a quota of 5 TB for each qtree

- set up the snapvault schedule
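For reference, the primary-side steps above correspond roughly to the following 7-Mode commands. This is a hedged sketch: the aggregate, volume, and snapshot schedule names are ours, the aggr create disk count depends on your disk size, and the exact syntax should be checked against your ONTAP release.

aggr create aggr1 32                              (disk count chosen to reach ~40 TB; depends on disk size)
vol create nas_vol1 aggr1 16t
snap reserve nas_vol1 5                           (5% snapshot reserve)
snap sched nas_vol1 0 8 0                         (keep 8 nightly snapshots, no weekly/hourly)
qtree create /vol/nas_vol1/share1
quota on nas_vol1                                 (after adding a tree quota line to /etc/quotas,)
                                                  (e.g.: /vol/nas_vol1/share1  tree  5T)
snapvault snap sched nas_vol1 sv_nightly 8@-      (primary-side SnapVault snapshot schedule)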





on secondary

- set up aggregates, volumes for snapvault backups (90 nightly snapshots)




- set up the snapvault backup relationship and schedules

/vol/nas_vol1/share1 --> /vol/sv_vol1/share1

/vol/nas_vol1/share2 --> /vol/sv_vol1/share2

/vol/nas_vol1/share3 --> /vol/sv_vol1/share3
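The relationships above would be created on the secondary with snapvault start, and the 90-day retention comes from the secondary-side snapshot schedule. A sketch, assuming the primary filer is reachable as "primary" (replace with your hostname):

snapvault start -S primary:/vol/nas_vol1/share1 /vol/sv_vol1/share1
snapvault start -S primary:/vol/nas_vol1/share2 /vol/sv_vol1/share2
snapvault start -S primary:/vol/nas_vol1/share3 /vol/sv_vol1/share3
snapvault snap sched -x sv_vol1 sv_nightly 90@-   (-x: transfer updates, then keep 90 snapshots)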

At this moment everything is working fine.

But because we want to keep 90 nightly snapshots on the secondary and we have huge data sets, it's possible that a user deletes a large amount of data on the primary; the deleted data then ends up in a snapshot on the secondary and must be kept for 90 days....

This is the point where the secondary volume /vol/sv_vol1 fills up and I must free some space.

There is enough free space in other volumes for the qtree, so the plan is to move a secondary qtree (share1) out of the old volume into a new one and free some space for the snapshots of share2/3 in sv_vol1.

I've found a couple of KB entries about moving qtrees/volumes with snapmirror. But none of them seem to solve our problem.

---> we need to keep the existing 90 nightly snapshots for the moved qtree

---> we need to free space in the old secondary volume sv_vol1

* first solution: establish a new SnapVault relationship to a new volume sv_vol2

start /vol/sv_vol1/share1 ----- snapvault ---->/vol/sv_vol2/share1

stop /vol/sv_vol1/share1 ----- snapvault ---->/vol/sv_vol1/share1  (qtree share1 in sv_vol1 is deleted)

This partly works: the new snapshots are created in sv_vol2, and the old snapshots in sv_vol1 are kept until the retention time expires. But I don't see how I can free space in sv_vol1: the deleted qtree is captured directly in the next sv_vol1 snapshot and kept for 90 days.....
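In commands, the first solution amounts to something like this on the secondary (a sketch; "primary" is a placeholder hostname, and the snapvault release on the primary cleans up the old relationship's bookkeeping):

snapvault start -S primary:/vol/nas_vol1/share1 /vol/sv_vol2/share1   (new baseline into sv_vol2)
snapvault stop /vol/sv_vol1/share1                                    (deletes the destination qtree)

and on the primary:

snapvault release /vol/nas_vol1/share1 secondary:/vol/sv_vol1/share1

The catch described above remains: the deleted qtree's blocks stay locked in sv_vol1's existing snapshots until they expire.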

* second solution: transfer qtree share1 with SnapMirror to sv_vol2

With this solution the old snapshots still have to be kept in sv_vol1, so no space is freed in sv_vol1 either.
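The KB procedure referenced earlier is roughly a qtree SnapMirror of the destination qtree followed by re-pointing the SnapVault relationship at the new location. A hedged outline (verify the exact steps against the KB article for your release; "primary" and "secondary" are placeholder hostnames):

snapmirror initialize -S secondary:/vol/sv_vol1/share1 secondary:/vol/sv_vol2/share1
snapmirror quiesce /vol/sv_vol2/share1
snapmirror break /vol/sv_vol2/share1
snapvault start -r -S primary:/vol/nas_vol1/share1 /vol/sv_vol2/share1   (-r: resync, no new baseline)

As noted, this moves the qtree and its snapshot history without a new baseline from the primary, but it does not release the blocks still held by sv_vol1's old snapshots.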

Any ideas how to solve this problem?


Sorry, I don't have the answer, but I have the same problem and just want to follow this thread.



I changed my setup to 1 qtree/share per volume. Additionally, I restrict the size of each data volume to a maximum of 8 TB. With this, the SnapVault volume on the backup filer can grow up to 16 TB, which should be enough for most situations.