Deduplication processing more data than exists on the volume
2010-10-12 04:34 AM
Hi,
We are using FCP LUNs and NFS datastores with VMware, and we enabled thin provisioning and deduplication on those volumes (both NFS and FC).
For the past week, deduplication has been taking a very long time to complete and is extending into business hours. Sometimes it even overlaps with the next day's dedupe schedule. We observed that it takes longer on the NFS volumes than on the other volumes, and that the amount of data scanned is more than what df -g reports.
How can we identify why the dedupe process is taking so long?
Why is the scanned data shown as more than the data currently existing on the volume?
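For reference, this is how we have been checking the dedupe operations so far (a minimal sketch, assuming 7-Mode Data ONTAP; /vol/nfs_vol1 is a placeholder for one of our volume names):

    sis status -l /vol/nfs_vol1    (detailed status: progress and last operation size/duration)
    sis config /vol/nfs_vol1       (the dedupe schedule configured on the volume)
    df -s /vol/nfs_vol1            (space saved by deduplication)
    df -g /vol/nfs_vol1            (used and available space in GB)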
2 REPLIES
If your volume is quite full (>90%), you can check its fragmentation ratio by running "reallocate measure". We had a customer whose volume had filled to 96% over a few months, and it showed a fragmentation ratio of around 25 (1 is optimal). A rough sketch of the check is below.
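On 7-Mode this looks roughly like the following (/vol/nfs_vol1 is just a placeholder volume name):

    reallocate measure /vol/nfs_vol1     (starts a measure-only scan of the volume)
    reallocate status -v /vol/nfs_vol1   (reports the optimization value once the scan completes)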
-Michael
Hi Michael,
Thank you very much for your response.
All the NFS volumes are below 75% usage.
