I currently have two ESX 3.5 hosts connected to a 2050 running 184.108.40.206 through iSCSI. I recently had to create a brand new LUN and migrate all of my VMs to it because the VMFS file system became corrupted. On my old LUN, dedup was saving me 65% of my space, but on the new one I am only saving 5%. It is a 1 TB volume with the space guarantee set to volume, fractional space reservation set to 0%, and snapshot reserve set to 0%. On the LUN, space reservation is turned off. I have dedup set to run between 3 and 6 am every day, and it now seems to take only 6 min to run each night. I can't figure out why this would have changed. Any help would be appreciated.
Yes, you should of course see 65% savings on the new LUN just like you did on the old LUN. Your LUN settings seem correct. Question: did you run the "sis on" command on the volume before you did the migration, or run "sis start -s" on the volume after the migration? You'll have to have done one or the other to get the full dedupe savings. "sis on" only dedupes data written after it is enabled, while "sis start -s" scans the data already on the volume.
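For reference, a sketch of the 7-mode filer commands involved (the volume name /vol/vol_vm1 is a placeholder, substitute your own):

```
# Enable dedupe on the volume (only covers data written from now on)
sis on /vol/vol_vm1

# Scan the existing data as well, so blocks written before dedupe
# was enabled are included in the savings
sis start -s /vol/vol_vm1

# Check dedupe status and space savings
sis status /vol/vol_vm1
df -s /vol/vol_vm1
```

If the migration writes landed on the volume before either command was run, only "sis start -s" will pick them up.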
I'm interested in hearing the official answer on this too... in early A-SIS presentations I remember seeing something about fixed 4k blocks, but I could be wrong... then I also heard misalignment didn't hurt deduplication, but I haven't heard confirmation from an engineering resource.
I just think any statement that categorically states it's not good to set it to none will not be right 100% of the time. It does work for us; in fact, I love it! :-0
Yes, yes - point taken.
What I was trying to stress is the fact that 100% fractional reserve kicks in when the volume guarantee is set to none, and there is no way to dodge this. Setting FR to 0% (or some small number when SnapManager products are in place) is now a part of official NetApp best practice, so you see where I am coming from...
Having said that, I fully appreciate 100% FR may still be a good thing for some people in some environments. And I also admit FR in itself is still a very hairy topic! (see my post about FR & reallocation: http://communities.netapp.com/thread/4431?tstart=0)
NB this is all irrelevant if there are no LUNs in the volume in question.
This is where I am coming from, using purely common sense:
Scenario A: three identical VMs, but no identical blocks within each VM, all properly aligned, so NTFS blocks are mapped identically to WAFL blocks.
Scenario B: same VMs, but vm1 is properly aligned, vm2 has a 1k offset, and vm3 has a 2k offset, so NTFS blocks from each VM are mapped in a different way to WAFL blocks.
Scenario A should give 3:1 de-dupe savings, whilst scenario B will give no savings, as NTFS blocks will be "chopped" from a different starting point for each VM (unless, purely by chance, some aligned blocks get identical 'twins' from misaligned ones).
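The two scenarios above can be sketched with a toy model: lay three identical "VM images" into a store at different byte offsets, chop the store into fixed 4 KiB blocks (as a fixed-block dedupe engine would), and count the unique blocks. This is only an illustration of the chopping argument, not how WAFL is actually implemented.

```python
import os

BLOCK = 4096  # fixed dedupe block size

def blocks_of(image: bytes, offset: int) -> set:
    """Return the set of 4 KiB blocks an image produces when it
    starts `offset` bytes into the block grid."""
    buf = bytes(offset) + image           # leading pad shifts the image
    if len(buf) % BLOCK:                  # pad the tail to a whole block
        buf += bytes(BLOCK - len(buf) % BLOCK)
    return {buf[i:i + BLOCK] for i in range(0, len(buf), BLOCK)}

vm = os.urandom(32 * 1024)                # one 32 KiB VM image (8 blocks)

# Scenario A: three aligned copies -> all blocks collapse to one set
aligned = blocks_of(vm, 0) | blocks_of(vm, 0) | blocks_of(vm, 0)

# Scenario B: copies at 0 / 1k / 2k offsets -> blocks chop differently
misaligned = blocks_of(vm, 0) | blocks_of(vm, 1024) | blocks_of(vm, 2048)

print(len(aligned))      # 8  -> 24 logical blocks stored as 8 (3:1)
print(len(misaligned))   # 26 -> effectively no sharing between copies
```

The aligned copies dedupe 24 logical blocks down to 8, while the shifted copies share nothing, which is exactly the intuition behind scenario B.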
Interesting. I just started reading about alignment and how it affects performance. Can someone point me to an easy tutorial on how to check alignment and how to fix it if it is off? I had someone tell me that when I create the LUN and choose vmware as the type, that will take care of the WAFL-to-VMFS alignment.
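Not a substitute for the real tooling (NetApp ships mbrscan/mbralign for checking and fixing VMDK alignment, and on Windows you can read the partition starting offset from msinfo32), but the arithmetic behind the check is simple: the partition's starting offset in bytes must be a multiple of the 4 KiB WAFL block. A quick sketch, assuming 512-byte sectors:

```python
SECTOR = 512       # bytes per disk sector (assumption: 512-byte sectors)
WAFL_BLOCK = 4096  # WAFL block size in bytes

def is_aligned(start_lba: int) -> bool:
    """True if a partition starting at this LBA sits on a 4 KiB boundary."""
    return (start_lba * SECTOR) % WAFL_BLOCK == 0

# Classic culprit: older Windows/MBR tools start the first partition
# at LBA 63 (31.5 KiB in), which never lands on a 4 KiB boundary.
print(is_aligned(63))    # False -> misaligned
print(is_aligned(64))    # True  -> aligned (32 KiB)
print(is_aligned(2048))  # True  -> aligned (1 MiB, modern default)
```

Note the LUN type (vmware) aligns the LUN to VMFS, but the guest partition table inside each VMDK can still be misaligned on its own, which is the case the LBA check above catches.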