ONTAP Discussions
I have one particular volume that we don't have a snapshot schedule on. The volume SnapMirrors to another site, so the only snapshot that should be present is the one generated by the SnapMirror job.
However, the volume seems to hold on to snapshots, so eventually it fills and goes offline. I have four other volumes configured in the same way that do not show this behaviour. The only difference is that the daily rate of change on this particular volume is much higher, because it is a backup drive.
I have set snapshot autodelete to delete the oldest first, but it doesn't seem to delete anything. The volume is set to auto grow/shrink.
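For reference, a minimal sketch of how those settings could be verified from the cluster shell, assuming the vserver and volume names that appear later in this thread (adjust to your own names):

volume show -vserver FAVSVM -volume vol_ifs_ora_devtest_N_Backups -fields autosize-mode,max-autosize
volume snapshot autodelete show -vserver FAVSVM -volume vol_ifs_ora_devtest_N_Backups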
It sounds like the rate of change on this volume is causing the snapshot that's created by SnapMirror to fill up the volume. Having autodelete enabled for snapshots will not help here, as that SnapMirror snapshot is locked by the system. Have you thought about changing the SnapMirror schedule on that volume to replicate the data more frequently? The other option would be to turn off snapshot deletion so that the volume only autogrows.
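As a side note, one way to see that a snapshot is locked is to check its owner; a snapshot owned by SnapMirror can't be reclaimed by autodelete. A rough sketch, using the names from later in this thread (the owners field should exist in recent ONTAP releases, but check your version):

volume snapshot show -vserver FAVSVM -volume vol_ifs_ora_devtest_N_Backups -fields owners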
Hi, thanks for replying so quickly.
If they do what you're suggesting, how will the volume be stopped from filling up and going offline? They only want to keep one snapshot.
If you change the SnapMirror replication schedule to replicate more frequently, that will limit the change delta between the last SnapMirror snapshot and the active data.
If you change the volume option to either autogrow first or disable snapshot deletion, that should allow the volume to grow as needed, provided you have the appropriate space in the aggregate.
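A minimal sketch of the autogrow-first side of that, assuming the vserver/volume names from later in this thread and ONTAP 9 syntax (the 10TB cap is just a placeholder):

volume modify -vserver FAVSVM -volume vol_ifs_ora_devtest_N_Backups -space-mgmt-try-first volume_grow
volume autosize -vserver FAVSVM -volume vol_ifs_ora_devtest_N_Backups -mode grow -maximum-size 10TB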
Hi Trubida,
They want to keep one SnapMirror copy on the destination, and when the time comes to do another, the older one should be deleted once the new copy completes, which doesn't seem to be working.
Let's talk about the way SnapMirror replication works.
When you configure SnapMirror and perform the initial replication on a volume, a snapshot is taken. That snapshot then grows as changes occur on the volume. Once you update the SnapMirror relationship, a new snapshot is taken, the data is transferred, and the first snapshot is deleted.
It sounds like the SnapMirror replication isn't completing, because the source volume is filling and going offline. If SnapMirror were able to replicate all of the data, you would only end up with a single snapshot.
Have you tried modifying the schedule so that SnapMirror replication occurs more frequently?
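For illustration, something like the following could be run on the destination cluster to check and then tighten the schedule; the destination path below is a placeholder, and "hourly" is one of the built-in schedules (parameters may vary by ONTAP version):

snapmirror show -destination-path FAVSVM_DR:vol_ifs_ora_devtest_N_Backups_dst -fields schedule,lag-time,status
snapmirror modify -destination-path FAVSVM_DR:vol_ifs_ora_devtest_N_Backups_dst -schedule hourly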
DC01-AFF200::volume efficiency> snap list -fields snapmirror-label,create-time -volume vol_ifs_ora_devtest_N_Backups
vserver volume snapshot create-time snapmirror-label
------- ----------------------------- -------------------------------------- ------------------------ ----------------
FAVSVM vol_ifs_ora_devtest_N_Backups {5eb30570-f9b4-4b42-9cd6-81a3ff9fcbdb} Wed Aug 14 02:42:38 2019 -
SnapDrive (or something telling SnapDrive) is creating it at 02:42. I've saved off the filtered event log and attached it.
I really have no idea how to work out what's making SnapDrive take the snapshot; as far as I can see, there is nothing scheduled to run around that time.
We suspect it's somewhere between Windows VSS, the Data ONTAP VSS provider and SnapDrive, but we can't work it out yet.
Is there a way we can get this snapshot autodeleted? Through SnapCli?
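If you do want autodelete to consider snapshots like that one, a rough sketch of the ONTAP-side settings is below; this only works if nothing still has the snapshot locked, and the exact parameter names may vary slightly by version:

volume snapshot autodelete modify -vserver FAVSVM -volume vol_ifs_ora_devtest_N_Backups -enabled true -trigger volume -delete-order oldest_first -defer-delete none
volume snapshot autodelete show -vserver FAVSVM -volume vol_ifs_ora_devtest_N_Backups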
Also including this output:
DC01-AFF200::volume efficiency> snap list -fields snapmirror-label,create-time -volume vol_ifs_ora_devtest_N_Backups
vserver volume snapshot create-time snapmirror-label
------- ----------------------------- -------------------------------------- ------------------------ ----------------
FAVSVM vol_ifs_ora_devtest_N_Backups {30ae4f42-1037-4139-81ce-0cbd6ec46ae2} Mon Aug 12 11:22:52 2019 -
FAVSVM vol_ifs_ora_devtest_N_Backups {2e6e3bf7-029b-4e58-b039-d5f38d005c16} Tue Aug 13 02:42:31 2019 -
FAVSVM vol_ifs_ora_devtest_N_Backups snapmirror.60e3a5ca-e34b-11e8-b79a-00a098bf1c0d_2161878825.2019-08-13_121540 Tue Aug 13 13:15:40 2019 -
3 entries were displayed.
Snap list output after 16:00:
DC01-AFF200::volume efficiency> snap list -fields snapmirror-label,create-time -volume vol_ifs_ora_devtest_N_Backups
vserver volume snapshot create-time snapmirror-label
------- ----------------------------- -------------------------------------- ------------------------ ----------------
FAVSVM vol_ifs_ora_devtest_N_Backups {30ae4f42-1037-4139-81ce-0cbd6ec46ae2} Mon Aug 12 11:22:52 2019 -
FAVSVM vol_ifs_ora_devtest_N_Backups {2e6e3bf7-029b-4e58-b039-d5f38d005c16} Tue Aug 13 02:42:31 2019 -
FAVSVM vol_ifs_ora_devtest_N_Backups snapmirror.60e3a5ca-e34b-11e8-b79a-00a098bf1c0d_2161878825.2019-08-13_160000 Tue Aug 13 16:00:00 2019 -
3 entries were displayed.
It's the two snapshots I have made bold (the GUID-named ones) that are causing the issue, as the client doesn't know where these snapshots are being created from.
Can you look at the event logs, in diag mode, and see what's creating the backups?
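As a rough sketch of that kind of check, the commands below look at the event log around the 02:42 creation time seen earlier; the -time range syntax and available fields can vary by ONTAP version:

set -privilege diag
event log show -time "08/14/2019 02:30:00".."08/14/2019 03:00:00"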