ONTAP Discussions

Snapmirror destination volume allocated/used space questions

f_duranti
6,204 Views

Hi, I'm seeing some (in my opinion) strange behaviour in the allocated space of snapmirrored volumes on secondary storage.

I have some volumes replicated from a primary to a secondary storage system, and I was expecting the allocated space on each snapmirrored volume (with guarantee=none) to be about the same as the used space on its source.

The source volumes have guarantee=file; I changed one of them to guarantee=none (to have the same setting on source and destination) and ran a SnapMirror update, but nothing changed.

This is the situation:

Source:

Volume          Allocated    Used      Guarantee
DPI_Data        210GB        209GB     none
DPI_Exe         52GB         51GB      file
DPI_Log         1017MB       968MB     file
DPI_Temp        65MB         4248KB    file

Destination:

Volume          Allocated    Used      Guarantee
DPI_Data_BK     277GB        209GB     none
DPI_Exe_BK      119GB        51GB      none
DPI_Log_BK      69GB         912MB     none
DPI_Temp_BK     68GB         4740KB    none

As you can see, the first two volumes have 210 GB and 52 GB allocated on the source but 277 GB and 119 GB allocated on the destination. That seems a bit strange...

Is there anything I can check?

The Data ONTAP version is 7.3.6P2.

11 REPLIES

aborzenkov
6,075 Views

For a start - is it VSM or QSM?

scottgelb
6,075 Views

What is the output of "vol size volname" on the source and the target? It sounds like the volume sizes don't match... bigger is OK on the target, but fs_size_fixed right-sizes the file system anyway (if it's VSM).

f_duranti
6,075 Views

The relations are VSM.

The volumes were created with OnCommand and provisioned as secondaries (so their size is the size of the aggregate where they live, but with guarantee=none and fs_size_fixed they should allocate only as much as the source).

Here is the data for one of the volumes on the source and the destination:

Source

vol size DPI_Exe

vol size: Flexible volume 'DPI_Exe' has size 100g.

vol status -b DPI_Exe

Volume              Block Size (bytes)  Vol Size (blocks)   FS Size (blocks)
------              ------------------  ------------------  ----------------
DPI_Exe             4096                26214400            26214400

vol options DPI_Exe

nosnap=on, nosnapdir=on, minra=off, no_atime_update=on, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on,
convert_ucode=on, maxdirsize=31457, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=file, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off

Destination:

vol size DPI_Exe_BK

Warning: Volume 'DPI_Exe_BK' has fs_size_fixed option set.  The file system
size may differ from the volume size.
See 'vol status -b' for more detail.
vol size: Flexible volume 'DPI_Exe_BK' has size 12898827840k.

vol status -b DPI_Exe_BK

Volume              Block Size (bytes)  Vol Size (blocks)   FS Size (blocks)
------              ------------------  ------------------  ----------------
DPI_Exe_BK          4096                3224706960          26214400

vol options DPI_Exe_BK

nosnap=on, nosnapdir=on, minra=off, no_atime_update=on, nvfail=off,
ignore_inconsistent=off, snapmirrored=on, create_ucode=on,
convert_ucode=on, maxdirsize=31457, schedsnapname=ordinal,
fs_size_fixed=on, compression=off, guarantee=none, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off
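
To put the vol status -b numbers in context (block size is 4096 bytes): the file system on the destination is right-sized to the source, while the container volume still has the huge size it was provisioned with:

FS Size:  26214400 blocks   x 4 KB = 100 GB (same as the source volume)
Vol Size: 3224706960 blocks x 4 KB = 12898827840 KB, i.e. ~12 TB (the size reported by vol size above)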

aborzenkov
6,075 Views

So far my only idea is that the primary had that much data at some point and the secondary has not yet caught up. How often is SnapMirror run?
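
If you want to check the schedule and lag from the command line, something like this on the destination filer should show it (7-mode commands, reusing the volume name from the example above):

snapmirror status -l DPI_Exe_BK
rdfile /etc/snapmirror.conf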

f_duranti
6,075 Views

SnapMirror runs once a day and the data on those volumes is quite stable; as you can see, the used space is almost the same on the source and the secondary.

On the Exe volume, the space allocated on the secondary is also greater than the volume size on the primary (119 GB vs 100 GB).

RENAUD_TOUILLET
6,075 Views

Hello Francesco,

Did you find a reason? I have the same behavior. All my destination volumes are ~74GB bigger than their sources in allocated space. When you have hundreds of volumes it begins to eat a lot of space!

I'm also using VSM managed by DFM 5.

Thanks.

jakob_bena
6,070 Views

Maybe this can help you find the misconfiguration.

Open a CLI session to both the source and the destination filer, then try to update / restart your SnapMirror.

In your terminal session you will see what is going wrong with the SnapMirror.
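
For example, something along these lines on the destination filer (7-mode commands, reusing the volume name posted earlier in the thread):

snapmirror update DPI_Exe_BK
snapmirror status -l DPI_Exe_BK

Any transfer or size-related messages should also show up in /etc/messages on the destination.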

Post this output, and then we can help you.

regards

jakob

RENAUD_TOUILLET
6,070 Views

Hello Jakob,

Thanks for the answer. I don't have any issue with the replication itself; it works perfectly. The only question is why SM (or maybe DFM, who knows with this ugly tool) creates a volume with ~70GB more allocated space than the source volume...

Source:

Volume              Allocated        Used             Guarantee
cifs_source         380492464KB      377732840KB      none

Destination:

Volume              Allocated        Used             Guarantee
cifs_destination    454097184KB      377379144KB      none

The used size is almost the same, BUT the allocated size is ~70GB more than on the source. And it's the same behavior for all volumes.
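
For reference, the difference in allocated space works out to 454097184 KB - 380492464 KB = 73604720 KB, i.e. about 73.6 GB (or ~70 GiB), which matches the gap described above.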

Any idea?

Thanks

aborzenkov
6,070 Views

As the guarantee is set to “none”, the allocated size does not really matter from a space-consumption point of view. It could be a safety measure in case your source volume grows (just guessing).
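
One way to double-check that the extra allocation is not actually eating aggregate space would be to look at the aggregate itself (7-mode commands; the aggregate name here is just a placeholder):

df -A -h
aggr show_space -h your_aggr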

LTDCLSERGAO
4,806 Views

Hello,

Could you give the result of "df -h cifs_source" and "df -h cifs_destination"?

frank_iro
4,806 Views

I had a similar issue some time back. Someone suggested that the space allocated for the destination volume may be equal to the volume auto-grow (autosize) maximum setting on the source volume. That would make sense: since the destination cannot be resized while fs_size_fixed is on, it needs to be at least as large as the maximum size the source volume could grow to.
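
If you want to verify that theory, you could compare the source's autosize maximum with the destination's volume size, roughly like this (7-mode commands, reusing the volume names from this thread):

On the source filer:
vol autosize cifs_source

On the destination filer:
vol size cifs_destination
vol status -b cifs_destination

If the destination's volume size matches the autosize maximum reported on the source, that would support this explanation.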
