Fractional reserve space for snapvaulted destination volume is full

RAMACHANDRA_EA

There is only one 50G LUN under a qtree. The source volume has fractional reserve set to 65% and guarantee set to volume. This qtree is SnapVaulted to a destination filer, where the destination volume has 100% fractional reserve and guarantee set to volume.

The SnapVault relationship has been stuck in the Quiescing state for many days, alerting that there is no space left. When I check the destination volume it has free space, but the fractional reserve space is FULL. I am not sure why the fractional reserve space is getting full even though the volume has free space in it.

How can I resolve this issue?

SnapVault status:

Current Transfer Error: replication destination could not set permissions on a file or directory: No space left on device

1. Is fractional reserve required for SnapVault destination volumes?

2. If so, how does fractional reserve work on SnapVault destination volumes?

3. Why does a 50G LUN need a 220G source volume? How can I reduce this?
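
If I understand fractional reserve correctly, the source volume math seems to work out like this (see the df output below), so most of the space appears to be reservation rather than data:

   50GB   space-reserved LUN
 + 32GB   fractional reserve (65% of the 50G LUN; the "reserved" column in df -h -r)
 + 39GB   snapshot blocks (snapshot reserve is 0, so they count against the volume)
 = 121GB  of the 220GB volume shown as used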

========================================================================================================
desfiler> df -h desvol
Filesystem               total       used      avail capacity  Mounted on
/vol/desvol/      350GB      302GB       47GB      87%  /vol/desvol/
/vol/desvol/.snapshot        0GB      253GB        0GB     ---%  /vol/desvol/.snapshot
desfiler> vol status -v desvol
         Volume State      Status            Options
desvol online     raid_dp, flex     nosnap=off, nosnapdir=off,
                                             minra=off, no_atime_update=off,
                                             nvfail=off,
                                             ignore_inconsistent=off,
                                             snapmirrored=off,
                                             create_ucode=on,
                                             convert_ucode=on,
                                             maxdirsize=41861,
                                             schedsnapname=ordinal,
                                             fs_size_fixed=off,
                                             guarantee=volume, svo_enable=off,
                                             svo_checksum=off,
                                             svo_allow_rman=off,
                                             svo_reject_errors=off,
                                             no_i2p=off,
                                             fractional_reserve=100,
                                             extent=off,
                                             try_first=volume_grow
                Containing aggregate: 'aggr2'

                Plex /aggr2/plex0: online, normal, active
                    RAID group /aggr2/plex0/rg0: normal

desfiler> df -h -r desvol
Filesystem               total       used      avail   reserved  Mounted on
/vol/desvol/      350GB      303GB       46GB        0GB  /vol/desvol/
/vol/desvol/.snapshot        0GB      253GB        0GB        0GB  /vol/desvol/.snapshot
desfiler>
=======================================================================================================
srcfiler> df -h srcvol
Filesystem               total       used      avail capacity  Mounted on
/vol/srcvol/      220GB      121GB       98GB      55%  /vol/srcvol/
/vol/srcvol/.snapshot        0MB       39GB        0MB     ---%  /vol/srcvol/.snapshot
srcfiler> vol status -v srcvol
         Volume State      Status            Options
srcvol online     raid_dp, flex     nosnap=off, nosnapdir=off,
                                             minra=off, no_atime_update=off,
                                             nvfail=off,
                                             ignore_inconsistent=off,
                                             snapmirrored=off,
                                             create_ucode=on,
                                             convert_ucode=on,
                                             maxdirsize=41861,
                                             schedsnapname=ordinal,
                                             fs_size_fixed=off,
                                             guarantee=volume, svo_enable=off,
                                             svo_checksum=off,
                                             svo_allow_rman=off,
                                             svo_reject_errors=off,
                                             no_i2p=off,
                                             fractional_reserve=65,
                                             extent=off,
                                             try_first=volume_grow
                Containing aggregate: 'aggr1'

                Plex /aggr1/plex0: online, normal, active
                    RAID group /aggr1/plex0/rg0: normal
                    RAID group /aggr1/plex0/rg1: normal

srcfiler>

srcfiler> df -h -r srcvol
Filesystem               total       used      avail   reserved  Mounted on
/vol/srcvol/      220GB      121GB       98GB       32GB  /vol/srcvol/
/vol/srcvol/.snapshot        0MB       39GB        0MB        0MB  /vol/srcvol/.snapshot
srcfiler>

srcfiler> lun show -v /vol/srcvol/srcqtree/srcvol
        /vol/srcvol/srcqtree/srcvol   50.0g (53694627840)   (r/w, online, mapped)
                Serial#: XXXXXXXXXXX
                Share: none
                Space Reservation: enabled
                Multiprotocol Type: windows
                Maps: windows
srcfiler>
============================================================================================

3 REPLIES

nigelg1965

Hi

Probably a bit late for you, but this is the number one Google hit for this error message, so hopefully it will help others.

I had this message and was stumped for a while. In my case it was caused by volume SnapMirroring a LUN that was over 80% full to a destination volume that was the same size as the source but had a different snapshot reserve of 20%.
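
If you suspect the same mismatch, comparing the snapshot reserve on both sides is a quick check. Something like this should show it (7-mode commands; the volume names are just the ones from this thread):

srcfiler> snap reserve srcvol
desfiler> snap reserve desvol

Then either lower the destination reserve (snap reserve desvol 0) or grow the destination volume (vol size desvol +50g) so the data plus reserve fits.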

daberegg20

I ran into the same problem and error; the fix for me was to disable fractional reserve on the SnapMirror destination volume.
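
For anyone searching later, in 7-mode that would be something like this, with desvol being the destination volume from this thread:

desfiler> vol options desvol fractional_reserve 0

Then confirm fractional_reserve=0 in the vol status -v desvol output before retrying the transfer.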

jan_vasil

Are you sure the aggregate containing your /vol/desvol/ is not full?
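
For example, with aggr2 being the containing aggregate from the vol status output above:

desfiler> df -A -h aggr2
desfiler> aggr show_space -h aggr2

These show how full the aggregate is and what is consuming the space in it.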
