ONTAP Discussions
Hi,
Environment:
- ONTAP 8.1.4P1 7-Mode
- Snapvaults
Issue: A qtree that no longer exists is still showing its Contents state as Transitioning. This shows up in the output of snap list -q on the SnapVault destination:
NA01(0151762019)_sv_backup03_sv_silver11-dst.0 (Jan 16 16:23)
sv_silver03 Replica Jan 16 01:00 NA01:/vol/silver03
sv_silver01 Replica Jan 16 01:00 NA01:/vol/silver01
sv_silver05 Replica Jan 16 01:00 NA01:/vol/silver05
sv_silver07 Replica Jan 16 01:00 NA01:/vol/silver07
sv_silver09 Replica Jan 16 01:00 NA01:/vol/silver09
sv_silver11 Transitioning - -
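For reference, that listing is the per-qtree snapshot view on the destination; the command was along these lines, with the volume name inferred from the snapshot name:
NA01> snap list -q sv_backup03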
sv_silver11 no longer exists, nor does it show in the snapvault snap schedule.
How do I get rid of this Transitioning state? It has passed its retention schedule.
Can someone please shed some light on this?
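In case it helps, the checks were roughly these (destination volume assumed to be sv_backup03); sv_silver11 shows up in neither:
NA01> snapvault status
NA01> snapvault snap sched sv_backup03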
"Contents Indicates whether the contents of the destination volume or qtree in the active file system are up-to-date replicas or in transition. The field applies only to the destination side.
Thanks
Jeff
Solved! See the solution below.
OK, I've been through this before, though I don't know exactly what causes it. So, just so I'm sure about sv_silver11:
Show me the snap list from the volume, please.
Then I will make a final recommendation.
Hi,
sv_silver11 was a qtree SnapVault of volume silver11, so a snap list of silver11 isn't possible: both the qtree and the volume no longer exist.
Hope that helps.
Jeff
OK, so where are you pulling that from? sv_backup03?
If that's the case, when you try to delete that snapshot it says it's busy, right?
If so, one of the only ways I was able to get past a transitioning snapshot like this was a takeover and giveback of the cluster, or a reboot if it's standalone. Once that's done, it releases the *lock* on that snapshot and you should be able to delete it.
Give that a shot.
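On an HA pair the sequence is roughly this (a sketch; assuming a standard cf setup, run from the partner node, and check cf status first):
NA01-partner> cf takeover
(wait for the takeover to complete)
NA01-partner> cf giveback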
Hi,
Right. The history is that volume silver11 was deleted, and its SnapVault destination sv_silver11 lived on sv_backup03.
Although you have answered my question indirectly: sv_silver11 is part of the live SnapVault destination sv_backup03, and other volumes share that base snapshot, so of course I cannot delete the base.
I think I am snookered, as there have already been takeovers and givebacks, so any locks should have been released, shouldn't they?
Thanks
Jeff
Show me just the snap list of the destination volume.
NA01> snap list sv_backup03
Volume sv_backup03
working...
%/used %/total date name
---------- ---------- ------------ --------
4% ( 4%) 1% ( 1%) Aug 19 08:07 NA01(0151762019)_sv_backup03-base.1 (busy,snapvault)
4% ( 0%) 1% ( 0%) Aug 19 02:02 sv_nightly.0
4% ( 1%) 1% ( 0%) Aug 18 02:05 sv_weekly.0
5% ( 1%) 1% ( 0%) Aug 16 02:02 sv_nightly.1
6% ( 1%) 2% ( 0%) Aug 15 02:07 sv_nightly.2
6% ( 1%) 2% ( 0%) Aug 14 01:58 sv_nightly.3
7% ( 1%) 2% ( 0%) Aug 13 01:51 sv_nightly.4
8% ( 1%) 2% ( 0%) Aug 12 02:01 sv_nightly.5
9% ( 1%) 2% ( 0%) Aug 11 02:11 sv_weekly.1
9% ( 1%) 2% ( 0%) Aug 09 02:33 sv_nightly.6
10% ( 1%) 3% ( 0%) Aug 08 02:42 sv_nightly.7
10% ( 1%) 3% ( 0%) Aug 07 02:16 sv_nightly.8
11% ( 1%) 3% ( 0%) Aug 06 01:59 sv_nightly.9
11% ( 1%) 3% ( 0%) Aug 05 01:47 sv_nightly.10
12% ( 1%) 3% ( 0%) Aug 04 02:06 sv_weekly.2
12% ( 1%) 3% ( 0%) Aug 02 01:57 sv_nightly.11
14% ( 2%) 4% ( 1%) Aug 01 05:16 sv_nightly.12
14% ( 1%) 4% ( 0%) Jul 31 02:49 sv_nightly.13
15% ( 1%) 4% ( 0%) Jul 30 02:10 sv_nightly.14
15% ( 1%) 4% ( 0%) Jul 29 02:58 sv_nightly.15
16% ( 1%) 5% ( 0%) Jul 28 02:37 sv_weekly.3
16% ( 1%) 5% ( 0%) Jul 26 02:23 sv_nightly.16
17% ( 1%) 5% ( 0%) Jul 25 02:13 sv_nightly.17
17% ( 1%) 5% ( 0%) Jul 24 02:25 sv_nightly.18
18% ( 1%) 5% ( 0%) Jul 23 02:10 sv_nightly.19
18% ( 1%) 5% ( 0%) Jul 22 02:08 sv_nightly.20
19% ( 1%) 6% ( 0%) Jul 21 02:22 sv_weekly.4
19% ( 1%) 6% ( 0%) Jul 19 02:18 sv_nightly.21
20% ( 1%) 6% ( 0%) Jul 18 02:23 sv_nightly.22
20% ( 1%) 6% ( 0%) Jul 17 02:20 sv_nightly.23
20% ( 1%) 6% ( 0%) Jul 16 02:18 sv_nightly.24
21% ( 1%) 6% ( 0%) Jul 15 02:09 sv_nightly.25
21% ( 1%) 7% ( 0%) Jul 14 02:29 sv_weekly.5
22% ( 1%) 7% ( 0%) Jul 12 03:12 sv_nightly.26
23% ( 2%) 7% ( 0%) Jul 11 02:36 sv_nightly.27
24% ( 2%) 8% ( 1%) Jul 07 02:56 sv_weekly.6
25% ( 2%) 8% ( 0%) Jun 30 02:47 sv_weekly.7
34% ( 4%) 13% ( 1%) Jan 16 16:23 NA01(0151762019)_sv_backup03_sv_silver11-dst.0
NA01>
Hmm, interesting issue.
OK, so let me ask you this. The "stale" snapshot you are referring to is
NA01(0151762019)_sv_backup03_sv_silver11-dst.0
I assume you tried to delete this snap, right?
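For reference, the delete attempt in 7-Mode would be:
NA01> snap delete sv_backup03 NA01(0151762019)_sv_backup03_sv_silver11-dst.0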
Hi,
I enabled the cifs.show_snapshot option so the ~snapshot directory is visible over CIFS.
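For reference, that is set with:
NA01> options cifs.show_snapshot on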
If I browse to the SnapVault destination, I see \\NA01\sv_backup03\~snapshot\NA01(0151762019)_sv_backup03_sv_silver11-dst.0\sv_silver11\.snapmirror_no_access_to_this_tmp_dir__snapvault_stop_purgatory_6
Maybe that "purgatory" name means something to the developers; I do not know what it means in this scenario, but the snapshot was quite literally in purgatory.
I can now delete it (not busy) and have deleted it.
It was also showing on my Windows 2008 box as taking up 7.9 TB, which would have exceeded the SnapVault destination sv_backup03 and the other SnapVaults (odd; maybe a Windows reporting error, or an artifact of SIS?).
Any reference to *silver11* is now extinct.
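Side note: the directory name hints at a leftover from a snapvault stop. For a relationship that still exists, the destination-side teardown command would be something like this (hypothetical path, since in my case the relationship was already gone):
NA01> snapvault stop /vol/sv_backup03/sv_silver11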
Thanks
Jeff
I assume you used the snap delete command, right? I got confused when you mentioned seeing the snapshot.
Sorry, yes, I used the snap delete command. I just wanted to browse to that folder to see the details. I could not snap delete it previously, as it stated it was busy.
I do not know what transpired to cause this, but it is clean now, and everything reported matches what actually exists.
Nevertheless, all data was intact and all is good.
Thanks
Jeff