We have not discovered a pattern here. There are 12 different DBs being backed up, and every other night or so one or two of them will fail (with a "Critical" error) because a snapshot cannot be deleted, with the reason "snapmirror":
# cat -n PDB1d.out.20111102190002
...
1698 ########## Running NetApp Snapshot Delete on Primary anpdfil2 ##########
1699 [Wed Nov 2 19:00:42 2011] WARN: More than 14 NetApp snapshots exist, older snapshots of anpdfil2:OraPdb1Data will be automatically deleted!
1700 [Wed Nov 2 19:00:42 2011] WARN: Deleting NetApp Snapshot PDB1-daily_20111015190000 on anpdfil2:OraPdb1Data
1701 [Wed Nov 2 19:00:42 2011] DEBUG: ZAPI REQUEST
1702 <snapshot-delete>
1703 <volume>OraPdb1Data</volume>
1704 <snapshot>PDB1-daily_20111015190000</snapshot>
1705 </snapshot-delete>
1706
1707 [Wed Nov 2 19:00:42 2011] DEBUG: ZAPI RESULT
1708 <results status="failed" errno="16" reason="snapmirror"></results>
1709
1710 [Wed Nov 2 19:00:42 2011] ZAPI: snapmirror
1711 [Wed Nov 2 19:00:42 2011] ERROR: [scf-00013] NetApp Snapshot Delete of PDB1-daily_20111015190000 on anpdfil2:OraPdb1Data failed! Exiting
We are using SC to kick off the SnapMirror. This cleanup is the last thing SC does, and it does seem to clean up the leftover snapshot eventually on the next run (if I don't clean it up manually first).
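In case it helps, this is roughly how I check by hand whether the snapshot is still held as a SnapMirror base (a sketch using the filer/volume/snapshot names from the log above; adjust as needed):

    #!/bin/sh
    # Rough manual check (7-mode): is the snapshot still locked by SnapMirror?
    FILER=anpdfil2
    VOLUME=OraPdb1Data
    SNAP=PDB1-daily_20111015190000

    # A snapshot held as a SnapMirror base shows "snapmirror" in the status
    # column of "snap list" and cannot be deleted until it is released.
    ssh $FILER "snap list $VOLUME" | grep "$SNAP"

    # Show the SnapMirror relationships (and their base snapshots) for the volume.
    ssh $FILER "snapmirror status -l $VOLUME"

If the snapshot shows up as busy/snapmirror there, the delete failure makes sense; the question is why SC is trying to delete it while the mirror still depends on it.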
Any ideas where to look next?
:-Dan