ONTAP Discussions

Help with snap reserve

PHANIDHAR6039

Hi,

I have a dual-head FAS2050 filer. One of the volumes got full and I am not sure where to start.

The filer (tap01b) is SnapMirrored onto nas01a. The volume in question was 100% full, so I deleted one of the snapshots, which released 5 GB, as below:

tap01b> df -h /vol/backup/

Filesystem               total       used      avail capacity  Mounted on

/vol/backup/              162GB      156GB     5373MB      97%  /vol/backup/

snap reserve              18GB       89GB        0GB     498%  /vol/backup/..

tap01b> snap list backup
Volume backup
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
15% (15%)    8% ( 8%)  Mar 31 12:00  hourly.0      
19% ( 5%)   11% ( 3%)  Mar 31 00:00  nightly.1     
20% ( 2%)   12% ( 1%)  Mar 30 12:00  hourly.1      
24% ( 5%)   15% ( 3%)  Mar 30 00:00  nightly.2     
28% ( 7%)   18% ( 4%)  Mar 29 00:00  nightly.3     
32% ( 7%)   22% ( 3%)  Mar 28 00:01  nightly.4     
35% ( 7%)   26% ( 4%)  Mar 27 00:00  nightly.5     
51% (34%)   50% (24%)  Mar 12 20:50 nas01a(0135024078)_backup.19109 (snapmirror)

As you can see, the snap reserve has exceeded its allocated size of 18 GB and is consuming disk space. I had a look at the volume options, which are as below:

tap01b> vol options /vol/backup/

nosnap=off, nosnapdir=on, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on,
convert_ucode=off, maxdirsize=18350, schedsnapname=ordinal,
fs_size_fixed=off, guarantee=volume, svo_enable=off, svo_checksum=off,
svo_allow_rman=off, svo_reject_errors=off, no_i2p=off,
fractional_reserve=100, extent=off, try_first=volume_grow

So I am not sure how the reserve crossed its 18 GB limit, and from the snap reclaimable command I can see that the SnapMirror snapshot is taking around 42 GB. Any suggestions on how to keep the reserve from consuming the entire disk space? I can also see that fractional_reserve is set to 100%, which I believe allows the reserve to use the complete disk space if required. Any suggestions on how to overcome this would be great.
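(For reference, the 42 GB figure came from a snap reclaimable run like the sketch below; the snapshot name is taken from the snap list output above, and the output line is approximate, with the Kbytes value computed from the stated 42 GB:)

tap01b> snap reclaimable backup nas01a(0135024078)_backup.19109
Processing (Press Ctrl-C to exit) ...
snap reclaimable: Approximately 44040192 Kbytes would be freed.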

Thanks,

P


billshaffer

You can't keep snapshot space from going into the "live" filesystem.  Snap reserve is only a portion of that live filesystem that you set aside specifically for snapshots.

You can turn off regular snapshots on the source volume - that will get rid of the hourly and nightly snaps, which will save some space, but that may not be an option in your environment.  You could change the snap schedule to keep fewer snaps - again, might not be an option.
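For example, a minimal sketch of trimming the schedule on the source volume from this thread (the schedule counts shown are illustrative, not your actual settings - check the current schedule first):

tap01b> snap sched backup
Volume backup: 0 6 2@12
tap01b> snap sched backup 0 2 1@12

Or, to disable scheduled snapshots entirely:

tap01b> vol options backup nosnap on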

You can increase the frequency of the snapmirror update.  The snapmirror snapshot tracks the changes since the last update; when you do an update, that snapshot should get replaced by a much smaller one.
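When nas01a is reachable, a manual update would be run from the destination like this, and the update schedule lives in /etc/snapmirror.conf on the destination (the every-15-minutes schedule below is just an example):

nas01a> snapmirror update -S tap01b:backup nas01a:backup

Example /etc/snapmirror.conf line:

tap01b:backup nas01a:backup - 0,15,30,45 * * *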

If none of these is an option, you'll have to size the volume appropriately, taking snapshot space into consideration.
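If you go that route, check aggregate free space first and then grow the volume; the 20g increment below is an assumed value, not a recommendation:

tap01b> df -Ah
tap01b> vol size backup +20g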

Fractional reserve only comes into play if you have luns in the volume - do you?
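A quick way to check is lun show on the source filer - if it prints nothing, there are no LUNs and fractional_reserve won't matter here:

tap01b> lun show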

Bill

PHANIDHAR6039

Hi Bill,

Thanks for the reply. Here is the complete issue:

We have filer tap01b, which is mirrored onto nas01a. Due to some circumstances, nas01a shut itself down; one of the onsite people is looking into it, and it is currently down.

So on tap01b I have this:

tap01b> df -h backup

Filesystem               total       used      avail capacity  Mounted on

/vol/backup/              162GB      154GB     7504MB      95%  /vol/backup/

snap reserve              18GB       93GB        0GB     519%  /vol/backup/..

Snap reserve is set at 10% for this volume, and the snapshots at the moment are:

tap01b> snap list backup
Volume backup
working....

  %/used       %/total  date          name
----------  ----------  ------------  --------
  6% ( 6%)    3% ( 3%)  Apr 02 00:00  nightly.0     
14% ( 9%)    7% ( 5%)  Apr 01 12:00  hourly.0      
26% (16%)   16% ( 8%)  Mar 31 12:00  hourly.1      
29% ( 6%)   18% ( 3%)  Mar 31 00:00  nightly.2     
33% ( 8%)   22% ( 4%)  Mar 30 00:00  nightly.3     
37% ( 8%)   26% ( 4%)  Mar 29 00:00  nightly.4     
54% (37%)   52% (26%)  Mar 12 20:50 nas01a(0135024078)_backup.19109 (snapmirror)

We have deleted a couple of nightly snapshots, which released more than 7 GB. But I can see the SnapMirror snapshot (nas01a_backup) growing in size every day, since its mirror partner (nas01a) is not available to apply updates onto.

So it keeps eating the space here even after we delete snapshots.

Would breaking the mirror with nas01a stop this snapshot from growing, given that nas01a is still down at the moment?

Any suggestions will be great.

Thanks,

P

billshaffer

Yes, if you break and release the snapmirror relationship, that snapshot will go away - but you will have to reinitialize the snapmirror when the target system comes back online.  Since the volume is so small, I would recommend this; the reinitialization should be pretty quick.

The only problem I see is that snapmirror commands are usually run from the target.  I don't think you'll be able to break the relationship from the source, and I don't think you'll be able to release it at the source (which gets rid of the snapshot) without the relationship being broken.  I also don't think you can just delete the snap while it's in an active relationship.
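For reference, the normal sequence when the destination is reachable would look something like this (hypothetical here, since nas01a is down):

nas01a> snapmirror quiesce backup

nas01a> snapmirror break backup

tap01b> snapmirror release backup nas01a:backup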

I'll spend a little time today trying to find something - if you have support, now might be the time to call and get the "official" method...

Bill

billshaffer

The snapmirror snapshot isn't "busy", so it's possible you can just release the relationship - try "snapmirror release tap01b:backup <dest_volname>" (in my experience this command will complain about no snapshots to remove even though it removes snapshots...).  If that doesn't work, try doing a snap delete.  Neither of these should do any harm - they'll just error out if they don't work.
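If the release doesn't work, the fallback would be deleting the soft-locked snapshot directly on the source, using the name from the snap list output above:

tap01b> snap delete backup nas01a(0135024078)_backup.19109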

Bill

PHANIDHAR6039

Thanks Bill for the help.

I ran the commands below and they worked perfectly:

tap01b> snapmirror release backup nas01a:backup

tap01b> snapmirror break backup

That released 48 GB, and we have broken the snapmirror as well.

We will re-establish the snapmirror once nas01a is back up, running the command below from the destination to resync them:

nas01a> snapmirror resync -S tap01b:backup -w nas01a:backup

Hope I haven't missed anything here.

Once again thanks for your support.

Thanks,

P

billshaffer

You won't be able to resync, because you've removed the source snapshot.  You'll have to reinitialize (snapmirror initialize -S...) - but like I said, the volume is so small that it shouldn't be too painful.
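Run from the destination once nas01a is back up (the destination volume has to be restricted before an initialize), with the volumes from this thread that would be something like:

nas01a> vol restrict backup

nas01a> snapmirror initialize -S tap01b:backup nas01a:backup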

Bill
