2013-07-04 02:43 AM
It is strange. I have snapvaulted my volume, which is 2.27TB and contains 14 qtrees. I don't understand: after I started the initial transfer, it failed in the middle with the error NO SPACE LEFT ON THE DEVICE, even though I had created a 2.40TB volume on the destination filer. Because it failed several times in the middle, I had to keep adding space, and by the time the initial transfer finished the SECONDARY VOLUME had grown to around 3.76TB. Can anybody tell me why this happened? Is this a bug, or is it due to the version difference?
Primary Filer Version :- 8.1.1 7-Mode
Secondary Filer Version :- 8.0.1RC3 7-Mode.
2013-07-04 06:14 AM
Isn't your source volume deduped by any chance?
SnapVault, being logical replication, doesn't preserve deduplication. After the baseline is transferred you can run dedupe at the destination, and *then* you should have roughly the same space occupied at both ends.
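If the source had turned out to be deduped, deduplication could be enabled and run against the existing data on the destination volume, something along these lines (the volume path is just an example here):

sv_filer> sis on /vol/sv_eccp1
sv_filer> sis start -s /vol/sv_eccp1
sv_filer> sis status /vol/sv_eccp1

The -s flag makes sis scan the blocks already in the volume rather than only new writes, which is what you want after a baseline transfer.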
2013-07-04 06:36 AM
I have checked the source; no deduplication is configured on any volume, as I verified with this command:
zsswnetp5> sis status
No status entry found.
Now what do I have to do?
2013-07-04 06:54 AM
netp5> df -S
Filesystem used total-saved %total-saved deduplicated %deduplicated compressed %compressed
/vol/eccp1/ 2215643520 0 0% 0 0% 0 0%
/vol/eccp1_archlog/ 130340572 0 0% 0 0% 0 0%
This is what i am able to see for all of my volumes
2013-07-09 12:28 AM
Please find the output of the above commands below. Also please note that SnapVault is not active; due to space constraints I have unscheduled everything.
vol status eccp1
Volume State Status Options
ssweccp1 online raid_dp, flex nosnap=on, create_ucode=on, convert_ucode=on,
Volume UUID: 0b68168c-7ef2-11e0-a4d2-00a0980f2390
Containing aggregate: 'fcaggr0'
vol options eccp1
nosnap=on, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on,
convert_ucode=on, maxdirsize=167690, schedsnapname=ordinal,
fs_size_fixed=off, guarantee=volume, svo_enable=off, svo_checksum=off,
svo_allow_rman=off, svo_reject_errors=off, no_i2p=off,
fractional_reserve=0, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=on, dlog_hole_reserve=off,
bsswvltp1> vol status sv_eccp1
Volume State Status Options
sv_ssweccp1 online raid_dp, flex
Volume UUID: 05e89bf6-e51f-11e2-9d91-00a09827b8c2
Containing aggregate: 'ataggr0'
vol options sv_eccp1
nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=off,
convert_ucode=off, maxdirsize=41861, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=volume, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off, nbu_archival_snap=off
2013-07-09 12:32 AM
Hi Nayab,
See, the snapvault volume has fractional_reserve set to 100. There is no need for this reserve on a SnapVault destination; remove it and then start vaulting your primary qtrees to this destination. Hope this will resolve your issue.
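With fractional_reserve=100, the volume holds back extra space for overwrites of space-reserved data once snapshots exist, which is one way a 2.27TB source can balloon a destination well past its provisioned size. A possible fix on the secondary, using the volume name from the outputs above, would be:

bsswvltp1> vol options sv_eccp1 fractional_reserve 0
bsswvltp1> vol options sv_eccp1

Check that the second command now shows fractional_reserve=0 before restarting the vault transfers.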