We are using FCP LUNs and NFS datastores with VMware. We have enabled thin provisioning and deduplication on those volumes (both NFS and FC).
For the past week, deduplication has been taking a very long time to complete and is extending into business hours. Sometimes it even overlaps with the next day's dedupe schedule. We have observed that it takes more time on the NFS volumes than on the others, and that the amount of data scanned is larger than what we see from df -g.
How can we identify why the dedupe process is taking so long?
Why is the scanned data reported as more than the data currently existing on the volume?
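For anyone wanting to check this on their own system: assuming Data ONTAP 7-Mode, and with /vol/nfs_vol1 as a placeholder volume name, the dedupe status, schedule, and savings can be inspected roughly like this (a sketch, not a full procedure):

```
# Show detailed dedupe status, including progress and the size
# of the changelog processed by the last operation
sis status -l /vol/nfs_vol1

# Show the configured dedupe schedule for the volume
sis config /vol/nfs_vol1

# Show space savings from dedupe (used vs. saved columns)
df -s /vol/nfs_vol1
```

Comparing the scanned size in "sis status -l" against "df -s" output is one way to see how much change-logged data each run is actually processing.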
Re: Deduplication processing more data than the data existing on the volume

If your volume is quite full (>90%), you can check your fragmentation ratio by running "reallocate measure". We had a customer whose volume had filled to 96% over a few months, and his fragmentation ratio was around 25 (1 is optimal).
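To illustrate the suggestion above (again assuming 7-Mode CLI and the placeholder volume /vol/nfs_vol1), a measure-only reallocation check might look like:

```
# Start a one-time layout measurement (no data is moved)
reallocate measure /vol/nfs_vol1

# Check the result; the reported optimization value is the
# fragmentation ratio, where 1 is optimal
reallocate status /vol/nfs_vol1
```

If the ratio comes back high, freeing space on the volume and running a full reallocation during a maintenance window is the usual next step.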