Data Backup and Recovery

SnapCenter backups failing due to busy "SIS Clone" snapshot

muzzy543

My Oracle DBAs have reported issues with SnapCenter: following a restore, subsequent backups fail due to an undeletable snapshot owned by "SIS Clone".  This also means no new snapshots can be created (e.g. manual or scheduled SnapMirror snapshots).  I have a case raised but thought I'd post here as well.  I've looked through the mgwd, ems, and sktrace logs but haven't spotted anything.  From their testing, the problem reproduces consistently with the sequence backup -> restore -> backup.

 

SnapCenter Server 4.6 P1 Build 4.6.0.7006

ONTAP 9.7P17


Ontapforrum

As a workaround, could you try logging in to System Manager (ONTAP GUI) or the CLI and:
1) Identify the cloned volume under the SVM.
2) Unmap it (if still mapped).
3) Offline and delete it.
4) Try the backup again.
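If it helps, the steps above look roughly like this at the CLI. The SVM, volume, LUN path, and igroup names below are placeholders for illustration, not taken from this thread:

```
::> lun mapping show -vserver svm1 -volume clone_vol
::> lun unmap -vserver svm1 -path /vol/clone_vol/lun1 -igroup ig1
::> volume offline -vserver svm1 -volume clone_vol
::> volume delete -vserver svm1 -volume clone_vol
```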

 

As you mentioned, you have already logged a ticket with NetApp. They will require logs, so I hope all SnapCenter logs have been uploaded.

muzzy543

Hi, I don't have a clone. The restored volume is just a normal volume, and I can't delete it as it has live data. I'm a bit confused.

muzzy543

The problem is occurring for every database we've tried so I wonder if this is a bug?

Ontapforrum

Ok, I see. The "SIS Clone" term applies to file/LUN clones. Are you able to see any cloned LUNs on the NetApp storage side, under the volume?

muzzy543

To clarify, "SIS Clone" is the owner of the permanently busy snapshot that I am unable to delete.  There are no LUNs.

Ontapforrum

Could you share this output:

::> snapshot show -owners "SIS Clone"

muzzy543

Not much to see!  I've had to redact the SVM and volume names.

 

::> snapshot show -owners "SIS Clone"

svm      volume   volume_10-21-2022_16.03.08.1080_0   507.2MB   1%   4%

Ontapforrum

Ok. Let NetApp investigate, as they will need to look at the SC logs as well.  Let us know.

 

Any chance that backups ran while the restore was in progress?

muzzy543

This has gone to NetApp engineering to resolve.  It is something to do with a "file clone split" and a split load that hasn't completed.  The issue is only seen with volumes on one particular node.  See below.

 

cluster0001::*> file clone split load show
Node                            Max        Current    Token         Allowable
                                Split Load Split Load Reserved Load Split Load
------------------------------- ---------- ---------- ------------- ----------
node0101                        54.84TB    0B         0B            54.84TB
node0102                        54.84TB    0B         0B            54.84TB
node0103                        54.84TB    0B         0B            54.84TB
node0104                        54.84TB    0B         0B            54.84TB
node0105                        109.7TB    0B         0B            109.7TB
node0106                        109.7TB    28.66TB    0B            81.03TB   <<< problem here
node0107                        54.84TB    0B         0B            54.84TB
node0108                        54.84TB    0B         0B            54.84TB
node0109                        109.7TB    0B         0B            109.7TB
node0110                        109.7TB    0B         0B            109.7TB
node0111                        109.7TB    0B         0B            109.7TB
node0112                        109.7TB    0B         0B            109.7TB
12 entries were displayed.
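For anyone who lands here with a similar symptom: if an uncompleted file clone split is suspected, ONTAP also has a split status command that should show any in-flight split operations on a volume. Exact syntax may vary by release, and the vserver and volume names below are placeholders, so treat this as a sketch to verify against your version's command reference:

```
cluster0001::*> volume file clone split status -vserver svm1 -volume vol1
```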
