Snapmirror destination volume allocated/used space questions

2012-04-11 08:57 AM
Hi, I'm seeing some (in my opinion) strange behaviour in the allocated space of snapmirrored volumes on secondary storage.
I have some volumes replicated from a primary to a secondary storage system, and I was expecting the allocated space on the snapmirrored volumes (with guarantee=none) to be about the same as the used space on the source.
The source volumes have guarantee=file; I changed one to guarantee=none (to have the same setting on source and destination) and ran a SnapMirror update, but nothing changed.
This is the situation:
Source:
Volume | Allocated | Used | Guarantee |
DPI_Data | 210GB | 209GB | none |
DPI_Exe | 52GB | 51GB | file |
DPI_Log | 1017MB | 968MB | file |
DPI_Temp | 65MB | 4248KB | file |
Destination:
Volume | Allocated | Used | Guarantee |
DPI_Data_BK | 277GB | 209GB | none |
DPI_Exe_BK | 119GB | 51GB | none |
DPI_Log_BK | 69GB | 912MB | none |
DPI_Temp_BK | 68GB | 4740KB | none |
As you can see, the first two volumes have 210/52 GB allocated on the source but 277/119 GB allocated on the destination. It seems a bit strange...
Is there anything I can check?
The Data ONTAP version is 7.3.6P2.
11 Replies
For a start - is it VSM or QSM?
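(In case it helps: on 7-mode you can usually tell from the "snapmirror status" output on the destination. VSM relationships list plain volume names, while QSM relationships list /vol/volname/qtree paths. A rough sketch, with hypothetical filer names and the qtree line purely as an illustration:

dstfiler> snapmirror status
Source                   Destination              State          Lag        Status
srcfiler:DPI_Exe         dstfiler:DPI_Exe_BK      Snapmirrored   01:23:45   Idle
srcfiler:/vol/src/qt1    dstfiler:/vol/dst/qt1    Snapmirrored   01:23:45   Idle

The first line would be VSM, the second QSM.)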
What is the output of "vol size volname" on source and target? It sounds like the volume sizes don't match... bigger is OK on the target, but fs_size_fixed right-sizes the file system anyway (if it's VSM).
The relationships are VSM.
The volumes were created with OnCommand and provisioned as secondary (so their size is the size of the aggregate they live in, but with guarantee=none and fs_size_fixed they should allocate only as much as the source).
This is the data for one of the volumes on the source and destination:
Source:
vol size DPI_Exe
vol size: Flexible volume 'DPI_Exe' has size 100g.
vol status -b DPI_Exe
Volume Block Size (bytes) Vol Size (blocks) FS Size (blocks)
------ ------------------ ------------------ ----------------
DPI_Exe 4096 26214400 26214400
vol options DPI_Exe
nosnap=on, nosnapdir=on, minra=off, no_atime_update=on, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on,
convert_ucode=on, maxdirsize=31457, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=file, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off
Destination:
vol size DPI_Exe_BK
Warning: Volume 'DPI_Exe_BK' has fs_size_fixed option set. The file system
size may differ from the volume size.
See 'vol status -b' for more detail.
vol size: Flexible volume 'DPI_Exe_BK' has size 12898827840k.
vol status -b DPI_Exe_BK
Volume Block Size (bytes) Vol Size (blocks) FS Size (blocks)
------ ------------------ ------------------ ----------------
DPI_Exe_BK 4096 3224706960 26214400
vol options DPI_Exe_BK
nosnap=on, nosnapdir=on, minra=off, no_atime_update=on, nvfail=off,
ignore_inconsistent=off, snapmirrored=on, create_ucode=on,
convert_ucode=on, maxdirsize=31457, schedsnapname=ordinal,
fs_size_fixed=on, compression=off, guarantee=none, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off
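For what it's worth, converting those block counts (4 KB blocks) makes the difference clearer:

FS Size:  26214400 blocks   x 4 KB = 104857600 KB   = 100 GB  (right-sized to match the source)
Vol Size: 3224706960 blocks x 4 KB = 12898827840 KB ≈ 12 TB   (the aggregate-sized container from provisioning)

So the file system inside the destination volume matches the source, but the container volume itself was left at the size of the aggregate.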
So far my only idea is that the primary had this much data at some point and the secondary has not yet caught up. How often is SnapMirror run?
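(A couple of 7-mode commands that should show the schedule and lag on the destination, with the volume name taken from the earlier output:

dstfiler> snapmirror status -l DPI_Exe_BK
dstfiler> rdfile /etc/snapmirror.conf

The -l output includes the lag and the last transfer size/duration, and snapmirror.conf holds the minute/hour/day schedule for each relationship.)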
SnapMirror runs once a day and the data on those volumes is quite stable; as you can see, the used space is almost the same on the source and the secondary.
On the Exe volume, the allocated space on the secondary is also greater than the volume size on the primary (119 GB vs. 100 GB).
Hello Francesco,
Did you find a reason? I have the same behaviour: all my destination volumes have ~74 GB more allocated space than the source. When you have hundreds of volumes it starts to eat a lot of space!
I'm also using VSM managed by DFM 5.
Thanks.
Maybe this can help you find the misconfiguration.
Open a CLI session to the source and destination filers, then try to update/restart your SnapMirror relationship.
In the terminal session you can see what the problem with your SnapMirror is.
Post that output and then we can help you.
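A rough sketch of what I mean, run on the destination filer (volume name taken from the earlier posts):

dstfiler> snapmirror update DPI_Exe_BK
dstfiler> snapmirror status -l DPI_Exe_BK

Any transfer errors should also show up in the SnapMirror log (/etc/log/snapmirror) or in /etc/messages.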
regards
jakob
Hello Jakob,
Thanks for the answer. I don't have any issue with the replication itself; it works perfectly. The only question is why SM (or maybe DFM, who knows with this ugly tool) creates a volume with ~70 GB more allocated space than the source volume...
Source:
Volume | Allocated | Used | Guarantee |
cifs_source | 380492464KB | 377732840KB | none |
Destination:
Volume | Allocated | Used | Guarantee |
cifs_destination | 454097184KB | 377379144KB | none |
The used size is almost the same, BUT the allocated space is ~70 GB more than on the source. And it's the same behaviour for all volumes.
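For reference, the gap works out to: 454097184 KB - 380492464 KB = 73604720 KB ≈ 70 GB per volume.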
Any idea?
Thanks
As the guarantee is set to "none", the allocated size does not really matter from a space consumption PoV. It could be a safety measure in case your source volume grows (just guessing).
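(If you want to double-check that it really costs nothing, you could compare the aggregate-level usage on the destination, e.g. with a placeholder aggregate name:

dstfiler> df -A -g aggr1

and see whether the aggregate's used space tracks the volumes' used data rather than their allocated figures.)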
Hello,
Could you give the result of "df -h cifs_source" and "df -h cifs_destination"?
I had a similar issue some time back. Someone suggested that the space allocated for the destination volume may be equal to the volume autogrow maximum-size setting on the source volume. This would make sense, as the destination cannot be resized (fs_size_fixed is on), so you'd need to make sure it's as large as the maximum size the source volume could grow to.
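(If that theory is right, it should be easy to confirm by checking the autosize settings on the source volume and comparing the reported maximum with the destination's allocated size; a sketch, using the volume name from this thread:

srcfiler> vol autosize cifs_source

If the maximum size it reports is close to the destination's allocated figure, that would explain the gap.)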
