ONTAP Discussions

What is the right way to check if Dedup worked?

heightsnj

I have a lot of volumes that show success in the status reported by the following "volume efficiency" command, but if I look further, quite a few have not run for many days, and some only ran for a minute or so, which tells me something is not right...

> vol efficiency show -vserver vserver-name -volume * -fields last-op-begin,last-op-end,last-op-state

 

1. What is the right or best way to check that dedup runs fine regularly, the way it is supposed to?

2. Also, is there a way I can list volumes whose "last-op-end" falls on a particular day, or some days ago?
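One possible approach, assuming your ONTAP release supports comparison operators on date fields in CLI queries (the exact timestamp format may vary by release, so treat this as a sketch):

```
> vol efficiency show -vserver vserver-name -volume * -fields last-op-begin,last-op-end,last-op-state -last-op-end <"Sat Jan 01 00:00:00 2022"
```

This should restrict the output to volumes whose last efficiency operation ended before the given time; `>` works the same way for "on or after".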

 

Thanks in advance for your input.


paul_stejskal

Do you monitor the space savings? Also, if you see the "SIS log file is full" EMS message, that is a clue that you may have a high change rate.

heightsnj

Case 1: the total assigned volume size is only 1.15TB, but the sis-space-saved shown below is 4.11TB; obviously it is an accumulated saved size since day 1. Is there any way to tell, out of the 1.15TB, how much is being saved by dedup?

vol show -vserver vserver1 -volume app1 -fields sis-space-saved, sis-space-saved-percent, dedupe-space-saved, dedupe-space-saved-percent
vserver  volume sis-space-saved sis-space-saved-percent dedupe-space-saved dedupe-space-saved-percent
-------- ------ --------------- ----------------------- ------------------ --------------------------
vserver1 app1   4.11TB          84%                     4.11TB             84%
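As far as I understand it (worth verifying against your release's documentation), sis-space-saved-percent is computed against the logical size of the data, i.e. physical space used plus space saved, not against the provisioned volume size. Working backwards from the numbers above:

```
savings-percent = saved / (used + saved)
4.11TB / 0.84    ≈ 4.89TB logical data
4.89TB - 4.11TB  ≈ 0.78TB physical space used
```

So roughly 0.78TB of physical space would be holding about 4.89TB of logical data, which is consistent with an 84% saving; if that reading is right, the 4.11TB reflects current savings rather than a running total since day 1.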

 

Case 2: There are a number of volumes where dedup has not run for hundreds of hours, but the state still shows "success". I have to run "vol efficiency show -volume volume-name -fields progress" on each volume to find out which ones they are. How can we troubleshoot volumes like these? Is there a better way to list those that have not run in the last 48 hours?
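One thing worth checking on volumes that never seem to run (field names as in ONTAP 9's `volume efficiency show`; verify on your release) is whether a schedule or policy is actually attached:

```
> volume efficiency show -vserver vserver-name -volume * -fields schedule, policy, state
```

A volume with no schedule and no policy will only dedupe when started manually, which would explain a stale last-op-end sitting alongside a "success" state from some long-ago run.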

 

Case 3:
We have some volumes holding backup images; there should be a relatively large amount of duplicate data, but we don't see as much savings as we expected. How can we check whether anything is wrong?
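A hedged first step (field names assumed from ONTAP 9's `volume efficiency show`; check your release) is to confirm the relevant features are actually enabled on those volumes before digging deeper:

```
> volume efficiency show -vserver vserver-name -volume backup_vol -fields state, policy, compression, inline-compression
```

Also note that backup images which are compressed or encrypted by the backup application before landing on the volume typically dedupe poorly, since identical source data no longer produces identical blocks on disk.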

 

Case 4: Would you recommend running dedup, compression, or compaction on VMware datastores (AFF or FAS)?


Case 5: From time to time, should we run dedup by rescanning old data?
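If you do want to reprocess existing data, `volume efficiency start` has a `-scan-old-data` option in ONTAP 9 (sketch only; vserver and volume names here are placeholders):

```
> volume efficiency start -vserver vserver1 -volume app1 -scan-old-data true
```

This scans blocks written before efficiency was enabled (or before a policy change) rather than only new writes; it can be I/O-intensive, so scheduling it off-peak is a reasonable choice.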

 

A lot of questions here. Thanks!
