SnapVault destination volume occupying more space than the source volume

2013-07-04
02:43 AM
9,570 Views
Hi All,
Something strange happened: I snapvaulted a 2.27 TB volume containing 14 qtrees. The initial transfer failed partway through with the error NO SPACE LEFT ON THE DEVICE, even though I had created a 2.40 TB volume on the destination filer. Because the transfer failed several times, I had to keep adding space, and by the time the initial transfer finished the secondary volume had grown to around 3.76 TB. Can anybody tell me why this happened? Is this a bug, or is it due to the version difference?
Ontap Versions
Primary Filer Version :- 8.1.1 7-Mode
Secondary Filer Version :- 8.0.1RC3 7-Mode.
Thanks,
Nayab
Solved! See The Solution
1 ACCEPTED SOLUTION
migration has accepted the solution
Hi Nayab ,
The SnapVault destination volume has fractional_reserve set to 100. This reserve is not needed; remove it and then start vaulting your primary qtrees to this destination. Hope this resolves your issue.
thanks
Ashwin
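For reference, a minimal 7-Mode sketch of the fix Ashwin describes, run on the secondary filer (volume name taken from the thread; verify the option change before restarting the vault):

```
bsswvltp1> vol options sv_eccp1 fractional_reserve 0   # drop the 100% reserve
bsswvltp1> vol options sv_eccp1                        # confirm fractional_reserve=0
bsswvltp1> df -h sv_eccp1                              # check how much space was reclaimed
```

Note that fractional_reserve only holds space for space-reserved files when snapshots exist; setting it to 0 on a pure SnapVault destination is a common practice, but review your own space-management policy first.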
11 REPLIES
Hi Nayab,
Isn't your source volume deduplicated, by any chance?
SnapVault, being logical replication, doesn't preserve deduplication: after the baseline is transferred you can run dedupe on the destination, and *then* you should have roughly the same space occupied at both ends.
Regards,
Radek
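If the source had turned out to be deduplicated, a hedged 7-Mode sketch of Radek's suggestion, run against the destination volume (name assumed from the thread):

```
bsswvltp1> sis on /vol/sv_eccp1        # enable deduplication on the destination volume
bsswvltp1> sis start -s /vol/sv_eccp1  # scan existing blocks and deduplicate them
bsswvltp1> sis status /vol/sv_eccp1    # monitor progress of the scan
bsswvltp1> df -S /vol/sv_eccp1         # report space saved by deduplication
```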
Hi Radek,
I checked the source; no deduplication is configured on any volume. I ran the command:
zsswnetp5> sis status
No status entry found.
What should I do now?
Thanks,
Nayab
Single file cloning? Compression?
What's the output of df -S?
netp5> df -S
Filesystem used total-saved %total-saved deduplicated %deduplicated compressed %compressed
/vol/eccp1/ 2215643520 0 0% 0 0% 0 0%
/vol/eccp1_archlog/ 130340572 0 0% 0 0% 0 0%
This is what I see for all of my volumes.
Thanks,
Nayab
The above are the two volumes that I snapvaulted today.
Can anybody help me?
Hi Nayab,
Can you please share the output of vol status and vol options for both the source and the SnapVault volume?
Hi Ashwani,
Please find the output of the above commands below. Also note that SnapVault is not currently active; due to the space constraints I have unscheduled everything.
SOURCE
vol status eccp1
Volume State Status Options
ssweccp1 online raid_dp, flex nosnap=on, create_ucode=on, convert_ucode=on,
32-bit fractional_reserve=0,
snapshot_clone_dependency=on
Volume UUID: 0b68168c-7ef2-11e0-a4d2-00a0980f2390
Containing aggregate: 'fcaggr0'
vol options eccp1
nosnap=on, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on,
convert_ucode=on, maxdirsize=167690, schedsnapname=ordinal,
fs_size_fixed=off, guarantee=volume, svo_enable=off, svo_checksum=off,
svo_allow_rman=off, svo_reject_errors=off, no_i2p=off,
fractional_reserve=0, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=on, dlog_hole_reserve=off,
nbu_archival_snap=off
DESTINATION
bsswvltp1> vol status sv_eccp1
Volume State Status Options
sv_ssweccp1 online raid_dp, flex
Volume UUID: 05e89bf6-e51f-11e2-9d91-00a09827b8c2
Containing aggregate: 'ataggr0'
vol options sv_eccp1
nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=off,
convert_ucode=off, maxdirsize=41861, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=volume, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off, nbu_archival_snap=off
Thanks,
Nayab
You mean the fractional reserve on the destination volume, right? If I disable the fractional reserve, will I be able to reclaim free space on the volume?
I was able to reclaim the space. Thanks a lot!
