
OpsMgr 3.8 and "over deduplicated" alerts

seno

Hi,

I interpret these OpsMgr 3.8 "over deduplicated" alerts to mean that, were A-SIS (deduplication) disabled, the data in the volume could potentially grow beyond the current size of the volume.

Would this be correct?

7 REPLIES

jasonczerak

Bump. I'm looking for an answer to this as well.

seno

Hi,

I had to speak with a NetApp VM engineer about this, but I did get an answer. He said it just means you are getting phenomenal space savings. It is no cause for alarm, and NetApp is currently working on removing this alarm from DFM. His suggestion is to bump up the threshold until you stop getting alarms, and wait for the patch.

madden

Here's a description I found elsewhere:

Volume over deduplication can be explained with this example:

If a volume of size 100 GB stores 150 GB of logical data (say 80 GB used + 70 GB saved space), then the volume is said to be 150% over deduplicated.

It is calculated as follows:

Over Deduplication = 100 * (logical data in the volume) / (total volume size)
                   = 100 * (80 + 70) / 100
                   = 150%

Here, "logical data" means the amount of data the volume would hold after undeduplication.
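
A minimal sketch of that calculation in Python (the function and parameter names are mine, purely for illustration, not anything from DFM):

def over_dedup_pct(used_gb, saved_gb, volume_size_gb):
    # Logical data = what the volume would hold after undeduplication.
    logical_gb = used_gb + saved_gb
    return 100.0 * logical_gb / volume_size_gb

print(over_dedup_pct(used_gb=80, saved_gb=70, volume_size_gb=100))  # 150.0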

Tracking over deduplication is useful when you need to monitor how much deduplication is happening in a volume. For example, in a SnapVault relationship the data in the volume gets undeduplicated when it is sent on the wire, so events are generated when the over-deduplication percentage of a volume crosses the configured thresholds.


By default, volNearlyOverDeduplicatedThreshold and volOverDeduplicatedThreshold are set to 140% and 150%, respectively.

To avoid triggering these events, set the over-deduplication thresholds to higher values:

# dfm options set volNearlyOverDeduplicatedThreshold=<higher value>
# dfm options set volOverDeduplicatedThreshold=<higher value>
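
For example, to raise both thresholds (the values below are illustrative; pick ones suited to the dedupe ratios you expect in your environment):

# dfm options set volNearlyOverDeduplicatedThreshold=250
# dfm options set volOverDeduplicatedThreshold=300

You can confirm the current settings afterwards with dfm options list.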

MUTHUSONA

Hi madden,

I am almost there with your answer. Just a few queries:

1) What is the max a 100 GB volume can hold (both physical and logical)?

"If a volume of size 100 GB stores 150 GB of logical data (say 80 GB used + 70 GB saved space), then the volume is said to be 150% over deduplicated"

2) From the above, can I have 70 GB of new physical data (replacing the 70 GB of saved space)? Wouldn't that make it a 150 GB volume?

3) How do we get 70 GB of saved space (shouldn't it be 20 GB in a 100 GB volume)? Can you please explain with a calculation?

Muthu


madden

Answers:

1) The max a 100 GB volume can hold would theoretically be around 25,600 GB of logical data. This is because a block can be shared at most 256 times, so 100 GB x 256 = 25,600 GB.

2) The volume has 80 GB physically used, so you could add 20 GB of physical data to it. If the 20 GB you added were 100% duplicates of existing data, then after the dedupe job finishes you would again have 20 GB free.

3) The 70 GB saved is duplicate data. So if you were to undo deduplication (the sis undo command), you would need 150 GB of physical space to hold the data in the volume.

Hope that helps.

Note: Some space is required for dedupe metadata (maybe 5%, off the top of my head), so you need to plan for some free space plus metadata-consumed space. All the calculations above should therefore be reduced slightly, but for clarity I didn't include that.
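
For what it's worth, here is the arithmetic from all three answers in one small Python sketch (variable names are mine; the metadata overhead from the note above is ignored):

VOLUME_GB = 100
USED_GB = 80      # physical space consumed
SAVED_GB = 70     # space reclaimed by block sharing
MAX_REFS = 256    # max times a block can be shared (8.0 and earlier)

print(VOLUME_GB * MAX_REFS)   # 1) 25600 GB theoretical logical max
print(VOLUME_GB - USED_GB)    # 2) 20 GB of new physical data still fits
print(USED_GB + SAVED_GB)     # 3) 150 GB physical needed after sis undo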

seno

The dedupe metadata is stored at the aggregate level now, correct?  No longer in the volume being deduped.

madden

The info I gave was for Data ONTAP 8.0 and earlier. In 8.1 the max reference count goes up to 32K (not 256), and the dedupe metadata requirements also change slightly. Check TR-3958 for more.
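
Assuming "32K" here means 32,768 references (my reading, not confirmed above), the theoretical ceiling from answer 1 scales the same way:

VOLUME_GB = 100
print(VOLUME_GB * 256)     # 25,600 GB ceiling through 8.0
print(VOLUME_GB * 32768)   # 3,276,800 GB ceiling from 8.1 onward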
