
Snapshot Deletion - Ignore Owners


Hello all,


When deleting a snapshot in cDOT we can use the [-ignore-owners] parameter. How can I do the same thing in 7-Mode [NetApp Release 8.2.3P6 7-Mode]?









What error message are you getting when you try to delete it?

Also, you may follow the instructions listed in the document below:

Deleting busy Snapshot copies 


I did follow this document, because my snapshots are in a busy state. The issue is that when I run the command [lun snap usage [-s] vol_name snap_name] I get the LUNs that are locking the deletion, but when I try to destroy those LUNs the system reports that the LUN doesn't exist. That's why I'm asking whether there is a way to ignore the snapshot owner in 7-Mode, or some other way to delete those LUNs.
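For clarity, here is the sequence I am attempting (FilerName, VolName, SnapName, and LunName below are placeholders, not my real objects):

FilerName> lun snap usage -s VolName SnapName
FilerName> lun destroy /vol/VolName/LunName

The lun destroy step is where the system reports that the LUN doesn't exist.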


Hi Mohamed Shehata,


When you compare the lun snap usage output to the lun show output, do you see the LUN you are trying to delete in "lun show"?  The LUN may already be deleted, in which case you only need to delete the snapshot.  Here is some additional information on deleting backing Snapshot copies after a LUN is deleted.


You can also reference KB: How to delete snapshots in the (busy,LUN) state 


It sounds like some additional snapshots might need to be deleted first, before you can delete the original snapshot that is showing (busy,LUN).  Let us know if turning on snapshot_clone_dependency helps clear the busy,LUN state so the snapshot can be deleted.



Team NetApp


Hi darb0505,

I cannot see the LUN when I use the command [lun show], and when I query the specific LUN the output is [lun doesn't exist]. I have deleted all the snapshots related to that volume except for the busy ones, which cannot be deleted (the issue I am trying to solve). I also cannot find the [snapshot_clone_dependency] option on the system: in both advanced and regular mode I used the [options] command to view all of them, but it is not listed.



Hi MohamedShehata,


Are you looking for the snapshot_clone_dependency in the options list for the ONTAP system? 




fas01> options snap
snaplock.autocommit_period   none
snaplock.compliance.write_verify off
snaplock.log.default_retention 6m
snaplock.log.maximum_size    10m
snapmirror.access            legacy
snapmirror.checkip.enable    off
snapmirror.cmode.suspend     off
snapmirror.delayed_acks.enable on
snapmirror.enable            off
snapmirror.log.enable        on
snapmirror.vbn_log_enable    off
snapmirror.volume.local_nwk_bypass.enable on
snapmirror.vsm.volread.smtape_enable on



If you are looking in the options list, you will not see the snapshot_clone_dependency option there.  It is a volume option, set through vol options.  See below:


fas01> vol options
vol options: No volume name supplied.
vol options <vol-name> <option-name> <option-val>
The following commands are available; for more information
type "vol help options <command>"
acdirmax            flexcache_autogrow  nosnap              snaplock_default_period
acdisconnected      flexcache_min_reserve nosnapdir           snaplock_maximum_period
acregmax            fractional_reserve  nvfail              snaplock_minimum_period
acsymmax            fs_size_fixed       raidsize            snapmirrored
actimeo             guarantee           raidtype            snapshot_clone_dependency
convert_ucode       maxdirsize          read_realloc        svo_allow_rman
create_ucode        minra               resyncsnaptime      svo_checksum
disconnected_mode   nbu_archival_snap   root                svo_enable
dlog_hole_reserve   no_atime_update     schedsnapname       svo_reject_errors
extent              no_i2p              snaplock_autocommit_period try_first



What you want to do is run the following command: vol options <vol-name> snapshot_clone_dependency <on | off>


This option is off by default, so you will want to set it to on for the volume in question.  Once it is on, you should be able to go in and delete the snapshot in question.
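As a quick sketch (vol1 and snap1 are placeholder names; substitute your own volume and snapshot):

fas01> vol options vol1 snapshot_clone_dependency on
fas01> snap delete vol1 snap1

You can confirm the setting afterwards with "vol options vol1", which lists the current value of snapshot_clone_dependency for that volume.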


Let me know if you have any issues.



Team NetApp


Hello darb0505,

I found the option set to [on], so I changed it to [off], but I still have the same issue: there is a LUN locking the snapshot deletion, which I can view with [lun snap usage], but I cannot find or destroy that LUN.


I need to add that all the LUNs locking the snapshot deletion have [.rws] in their names. Is there a directory on the storage from which I can manually delete these LUNs?

Check this kb:


If you have already followed it, could you share the 'snap list' and 'vol status -v' output from your filer?  If you cannot share the details, you may try raising a support ticket; an engineer might help you resolve this via a remote session.


I've reviewed the document, but I still can't find a solution. Here are the outputs you asked for:


FilerName> vol status -v VolName
Volume State Status Options
VolName online raid_dp, flex nosnap=off, nosnapdir=off, minra=off,
64-bit no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off,
create_ucode=on, convert_ucode=on,
maxdirsize=73400, schedsnapname=create_time,
fs_size_fixed=off, guarantee=none,
svo_enable=off, svo_checksum=off,
svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=0, extent=off,
try_first=snap_delete, read_realloc=off,
dlog_hole_reserve=off, nbu_archival_snap=off
Volume UUID: 7b8eafca-c6b0-4bbb-a685-9d0673603492
Containing aggregate: 'ag1'

Plex /ag1/plex0: online, normal, active
RAID group /ag1/plex0/rg0: normal, block checksums
RAID group /ag1/plex0/rg1: normal, block checksums
RAID group /ag1/plex0/rg2: normal, block checksums

Snapshot autodelete settings for VolName:
prefix=(not specified)
Volume autosize settings:
Hybrid Cache:



FilerName> snap list VolName
Volume VolName

%/used %/total date name
---------- ---------- ------------ --------
1% ( 1%) 0% ( 0%) Oct 16 01:00 sqlsnap__Server1_10-16-2020_01.00.11
32% (32%) 6% ( 6%) Sep 08 01:00 sqlsnap__Server1_09-08-2020_01.00.09 (busy,LUNs)
57% (45%) 17% (11%) May 25 01:01 sqlsnap__Server1_05-25-2020_01.00.19 (busy,LUNs)
68% (45%) 28% (11%) Apr 23 01:00 sqlsnap__Server1_04-23-2020_01.00.11 (busy,LUNs)
75% (45%) 39% (11%) Mar 20 01:00 sqlsnap__Server1_03-20-2020_01.00.10 (busy,LUNs)
77% (26%) 44% ( 5%) Aug 26 01:01 sqlsnap__Server1_08-26-2019_01.00.20 (busy,LUNs)
77% ( 1%) 44% ( 0%) Aug 25 01:01 sqlsnap__Server1_08-25-2019_01.00.21 (busy,LUNs)
79% (26%) 49% ( 5%) Aug 02 01:00 sqlsnap__Server1_08-02-2019_01.00.20 (busy,LUNs)
80% (22%) 52% ( 4%) Jul 24 01:00 sqlsnap__Server1_07-24-2019_01.00.14 (busy,LUNs)
81% (19%) 55% ( 3%) Jul 12 01:00 sqlsnap__Server1_07-12-2019_01.00.10 (busy,LUNs)
81% (12%) 57% ( 2%) Jun 30 01:00 sqlsnap__Server1_06-30-2019_01.00.12 (busy,LUNs)
82% (14%) 59% ( 2%) May 28 01:00 sqlsnap__Server1_05-28-2019_01.00.14 (busy,LUNs)
82% (14%) 61% ( 2%) Apr 03 01:00 sqlsnap__Server1_04-03-2019_01.00.12 (busy,LUNs)



OK, that output does help.


The snapshot naming 'sqlsnap__*' indicates that these snaps are a by-product of SnapManager for SQL Server (SMSQL). Could you log in to your SMSQL (Windows) host (which will also have SnapDrive installed) and take a look at the cloned LUNs? A clone may not be connected to any host but still exist inside a snapshot. You can try splitting or deleting the clone using SnapDrive first, and then the snapshots can be deleted.


Please see these two KBs:



I was actually able to view the LUNs I was looking for at the vfiler level [vfiler run VfilerName lun show], so I destroyed them, and that solved the problem. Many thanks for your response and support, as usual 🙂
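For anyone who hits the same issue, the sequence that worked for me looked like this (FilerName, VfilerName, VolName, SnapName, and the LUN path are placeholders; the real .rws paths come from the lun snap usage output):

FilerName> vfiler run VfilerName lun show
FilerName> vfiler run VfilerName lun destroy /vol/VolName/LunName.rws
FilerName> snap delete VolName SnapName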


Hi Mohamed Shehata,


Glad to hear that you were able to see and destroy the LUNs, which corrected the issue. Let us know if you need any other assistance.



Team NetApp