
null schedule 0@- active?

PZI1234567

I have a FAS3240 receiving OSSV SnapVault backups into a vfiler.  I have one active schedule, defined as follows:

     snapvault snap sched -x sv_1 sv_1_D1 14@mon-sun@0
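For context, my reading of the 7-mode man page is that the general form of this command is:

     snapvault snap sched [-x] <sec_volume> <snap_name> <count>[@<day_list>][@<hour_list>]

so 14@mon-sun@0 should mean: keep 14 copies of sv_1_D1, create one every day (mon-sun) at hour 0, and the -x option tells the secondary to first update its qtrees from the primaries before creating the snapshot.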

That schedule is listed in the status output as:

     snapvault status -s sv_1

     Snapvault is ON.

     Volume    Snapshot    Status    Schedule
     ------    --------    ------    --------
     sv_1      sv_1_D1     Idle      14@mon-sun@0
     sv_1                  Idle      0@-

I have a number of Windows servers backed up to this volume, and at most that one schedule becomes Active; its state is then displayed like this:

     snapvault status -s sv_1

     Snapvault is ON.

     Volume    Snapshot    Status    Schedule
     ------    --------    ------    --------
     sv_1      sv_1_D1     Active    14@mon-sun@0
     sv_1                  Idle      0@-

Today I displayed the status and, to my amazement, the "null" schedule "0@-" is Active and my daily is "Queued". This should not have happened: there is only one schedule that can run at hour 0, and the "null" schedule should stay "Idle".

What is going on?

     snapvault status -s sv_1

     Snapvault is ON.

     Volume    Snapshot    Status    Schedule
     ------    --------    ------    --------
     sv_1                  Active    0@-
     sv_1      sv_1_D1     Queued    14@mon-sun@0

I have always been puzzled by this schedule, with its null-string name and a spec that does nothing.  If you search the entire NetApp KB, Google it, or search the Data Protection PDFs and man pages, you get zero hits. What gives?

All the qtrees in that volume have a Lag of over 2 days, and there are also a few baseline transfers in progress:

snapvault status

Snapvault is ON.

Source                      Destination                            State          Lag        Status
172.168.75.15:e:\           ntap01-282yuma:/vol/sv_1/28815FS01     Uninitialized  -          Transferring (18 GB done)
172.168.254.3:e:\           ntap01-282yuma:/vol/sv_1/ATLFS01-E     Uninitialized  -          Transferring (138 GB done)
172.168.254.3:systemstate   ntap01-282yuma:/vol/sv_1/ATLFS01-SS    Snapvaulted    50:38:33   Idle
172.168.31.3:d:\            ntap01-282yuma:/vol/sv_1/BILFS01-D     Snapvaulted    50:38:33   Idle
172.168.31.3:f:\            ntap01-282yuma:/vol/sv_1/BILFS01-F     Snapvaulted    -          Quiescing
172.168.31.3:g:\            ntap01-282yuma:/vol/sv_1/BILFS01-G     Uninitialized  -          Transferring (192 GB done)
172.168.31.3:i:\            ntap01-282yuma:/vol/sv_1/BILFS01-I     Snapvaulted    630:39:19  Transferring (62 GB done)
172.168.170.45:e:\          ntap01-282yuma:/vol/sv_1/BYRFS01-E     Snapvaulted    50:38:34   Idle
170.122.200.45:d:\          ntap01-282yuma:/vol/sv_1/JANWFS01-D    Uninitialized  -          Transferring (673 GB done)
172.168.205.20:e:\          ntap01-282yuma:/vol/sv_1/MERITFS01-E   Uninitialized  -          Transferring (754719 inodes done)
172.168.205.20:systemstate  ntap01-282yuma:/vol/sv_1/MERITFS01-SS  Snapvaulted    50:38:34   Idle
172.168.49.26:g:\           ntap01-282yuma:/vol/sv_1/TUPFS01-G     Snapvaulted    444:29:19  Transferring (91 GB done)
172.168.49.26:h:\           ntap01-282yuma:/vol/sv_1/TUPFS01-H     Snapvaulted    50:38:34   Idle
172.168.49.26:systemstate   ntap01-282yuma:/vol/sv_1/TUPFS01-SS    Snapvaulted    50:38:34   Idle
172.168.104.62:e:\          ntap01-282yuma:/vol/sv_1/WGYTUPFS01    Snapvaulted    50:38:31   Idle
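For reference, a more detailed view of any one of these slow qtrees should be available from the long-form status, e.g. something like:

     snapvault status -l ntap01-282yuma:/vol/sv_1/BILFS01-I

which, as I understand it, also reports the current transfer progress and the last transfer size and duration.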

1 REPLY

PZI1234567

To follow up on my own post, here is what I learned from NetApp support about the 0@- schedule:

"The 0@- snap sched is an internal snapshot target, used by the snapshot coalescing code. We call it the null target, since the snapshot name is the empty string.  You can ignore the 0@ entry. You can unschedule this snapshot target if you really want to, but it will just reappear the moment SnapVault needs it again."

From my observations I figured out that this schedule becomes "Active" (apparently doing the coalescing) when the currently "Active" daily schedule has been waiting too long for one of the destination qtrees to be updated.  It then moves the currently "Active" daily schedule sv_1_D1 to the "Queued" state while it keeps waiting for all updates to complete. If you decide that you cannot wait any longer and abort the qtree updates that have not finished, the 0@- schedule completes the coalescing and makes the daily schedule sv_1_D1 "Active" again. Unfortunately, the daily snapshot is not created; instead, a new sequence of updates of the remaining qtrees starts again.
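For reference, when I abort one of the stuck qtree updates I do it by destination path, something like:

     snapvault abort ntap01-282yuma:/vol/sv_1/BILFS01-I

and as far as I can tell a plain abort (without -h) leaves a restart checkpoint, so the baseline or update can pick up where it left off on the next attempt.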

It would be useful to know at what point an "Active" schedule decides to give up. If we knew that timeout, we could abort the slow qtree updates before it expires and get the daily snapshot created.

Can anybody comment on how long it takes before the "Active" schedule gives up and makes the "null" schedule 0@- "Active"?
