ONTAP Discussions
Hi all
Until January, I had the following snapmirror configuration:
filer_a:volume_cifs [source volume]
filer_b:volume_cifs [destination volume]
For business reasons the relationship was reversed in January, so now I have:
filer_a:volume_cifs [destination volume]
filer_b:volume_cifs [source volume]
Everything has been working fine, and I hadn't paid attention to a snapshot that exists in both volumes from the day the reversal plan was applied (January)...
Until today. Users of the CIFS share performed a clean-up of old data, and afterwards they asked me about the freed space. I told them that, since our retention policy keeps 7 nightly snapshots, the freed space would be recovered within a week.
However, after all the nightly snapshots rolled off, the January snapshot has grown, retaining the space that was cleaned up (1TB). That makes sense to me, but I don't know whether there is any special consideration before deleting this January snapshot, since it was taken the day of the reversal.
Part of my doubt is also because the "Status" column in the FilerView snapshot window shows "snapmirror" for it.
Thanks
-Victor
On filer_a do “snapmirror release volume_cifs filer_b:volume_cifs”. This should release the old snapshot (remove the “snapmirror” flag) and make it possible to delete it.
It may fail because the volume is now read-only on filer_a. In that case you will need to temporarily break the snapmirror, release the relationship, and resync.
Please show the output of “snapmirror status” and “snap list volume_cifs” from both filers.
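The suggested steps, sketched with this thread's filer and volume names (7-Mode CLI; exact behavior may differ depending on the relationship state):

```
# On filer_a, release the old pre-reversal relationship
# (filer_a was the source of it, filer_b the destination):
filer_a> snapmirror release volume_cifs filer_b:volume_cifs

# If that fails because volume_cifs is now read-only on filer_a,
# the fallback is: break the current mirror, release, then resync.
filer_a> snapmirror break volume_cifs
filer_a> snapmirror release volume_cifs filer_b:volume_cifs
filer_a> snapmirror resync -S filer_b:volume_cifs filer_a:volume_cifs
```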
Hi aborzenkov
These are the outputs of the commands:
filer_a> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
filer_b:volume_cifs filer_a:volume_cifs Snapmirrored 00:24:18 Idle
filer_a> snap list volume_cifs
Volume volume_cifs
working...
%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Aug 09 09:13 filer_a(12345)_volume_cifs.4625
0% ( 0%) 0% ( 0%) Aug 09 08:13 filer_a(12345)_volume_cifs.4624
0% ( 0%) 0% ( 0%) Aug 09 00:00 nightly.0
0% ( 0%) 0% ( 0%) Aug 08 18:00 hourly.0
0% ( 0%) 0% ( 0%) Aug 08 14:00 hourly.1
0% ( 0%) 0% ( 0%) Aug 08 10:01 hourly.2
0% ( 0%) 0% ( 0%) Aug 08 00:01 nightly.1
0% ( 0%) 0% ( 0%) Aug 07 00:01 nightly.2
0% ( 0%) 0% ( 0%) Aug 06 00:01 nightly.3
0% ( 0%) 0% ( 0%) Aug 05 00:01 nightly.4
0% ( 0%) 0% ( 0%) Aug 04 00:00 nightly.5
0% ( 0%) 0% ( 0%) Aug 03 00:01 nightly.6
58% (58%) 57% (57%) Jan 28 07:21 filer_b(67893)_volume_cifs.4739 (snapmirror)
58% ( 0%) 57% ( 0%) Jan 28 07:13 filer_b(67893)_volume_cifs.4738
filer_b> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
filer_a:volume_cifs filer_b:volume_cifs Broken-off 4657:10:01 Idle
filer_b:volume_cifs filer_a:volume_cifs Source 00:18:49 Idle
filer_b> snap list volume_cifs
Volume volume_cifs
working...
%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Aug 09 09:13 filer_a(12345)_volume_cifs.4625 (snapmirror)
0% ( 0%) 0% ( 0%) Aug 09 00:00 nightly.0
0% ( 0%) 0% ( 0%) Aug 08 18:00 hourly.0
0% ( 0%) 0% ( 0%) Aug 08 14:00 hourly.1
0% ( 0%) 0% ( 0%) Aug 08 10:01 hourly.2
0% ( 0%) 0% ( 0%) Aug 08 00:01 nightly.1
0% ( 0%) 0% ( 0%) Aug 07 00:01 nightly.2
0% ( 0%) 0% ( 0%) Aug 06 00:01 nightly.3
0% ( 0%) 0% ( 0%) Aug 05 00:01 nightly.4
0% ( 0%) 0% ( 0%) Aug 04 00:00 nightly.5
0% ( 0%) 0% ( 0%) Aug 03 00:01 nightly.6
58% (58%) 57% (57%) Jan 28 07:21 filer_b(67893)_volume_cifs.4739 (snapmirror)
58% ( 0%) 57% ( 0%) Jan 28 07:13 filer_b(67893)_volume_cifs.4738
Hi aborzenkov
I ran that command and it removed the snapmirror flag.
Just so you know, there was no need to quiesce/break the volume; I ran the command without doing that and it worked without error.
Then I deleted the January snapshots and updated the snapmirror.
The volume recovered all the space freed by the users' clean-up.
Thanks!!!
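For anyone finding this later, the sequence described above, as I read it (snapshot names taken from the listings earlier in the thread):

```
# 1. On filer_a, release the old relationship to clear the (snapmirror) flag:
filer_a> snapmirror release volume_cifs filer_b:volume_cifs

# 2. On filer_b (the current, writable source), delete the January snapshots:
filer_b> snap delete volume_cifs filer_b(67893)_volume_cifs.4739
filer_b> snap delete volume_cifs filer_b(67893)_volume_cifs.4738

# 3. On filer_a (the current destination), update the mirror so the
#    deletions propagate and the space is freed on both sides:
filer_a> snapmirror update volume_cifs
```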
In my experience, a snapmirror release completely removes all snapmirror snapshots from the source, not just old or unused ones, thereby breaking the sync between the volumes. A break and resync from the destination is then required to re-establish the snapmirror relationship.
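If a release does take out a snapshot the active mirror still needs, the recovery path would look roughly like this (run on the destination; note that resync discards any changes made on the destination since the last common snapshot):

```
# On the destination filer:
filer_a> snapmirror break volume_cifs      # make the destination writable
filer_a> snapmirror resync -S filer_b:volume_cifs filer_a:volume_cifs
```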