
De-duplication on non-aligned VMDKs

hmarko

Hi.

Will performance get worse if I run de-duplication on volumes containing datastores whose VMDKs are not aligned?

The environment is a FAS6070 running Data ONTAP 7.3.2 over FCP, with an 8 TB volume holding 16 LUNs and around 400-500 VMs, most of them not aligned, on ESX 3.5 servers.
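For reference, this is roughly how I've been checking guest alignment (a quick sketch; the start sector comes from fdisk -lu inside the guest, and the sample values below are just examples):

# Check whether a guest partition's start offset lands on a 4 KiB
# WAFL block boundary. Start sectors come from e.g. `fdisk -lu`.
SECTOR_SIZE = 512   # bytes per sector on the virtual disk
WAFL_BLOCK = 4096   # WAFL writes in 4 KiB blocks

def is_aligned(start_sector: int) -> bool:
    # Aligned when the partition's byte offset is a multiple of 4 KiB.
    return (start_sector * SECTOR_SIZE) % WAFL_BLOCK == 0

print(is_aligned(63))   # False - classic MS-DOS start sector (32,256 bytes)
print(is_aligned(64))   # True  - lands on a 4 KiB boundary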

Thanks !

5 REPLIES

radek_kubka

Hi,

Two things:

- Misaligned VMDKs perform badly anyway, and de-duplication may add some performance penalty on top of that (not related to misalignment)

- Misalignment will quite likely cause de-duplication savings to be smaller than expected; see this discussion:

http://communities.netapp.com/message/14675#14675
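As a toy illustration of why (using md5 as a stand-in for the dedup fingerprint, and made-up data): the same data shifted by a non-4-KiB offset shares no identical 4 KiB blocks with the unshifted copy, so block-level dedup finds nothing to share:

import hashlib, os

data = os.urandom(1 << 20)    # 1 MiB of sample "VM" data
shift = 63 * 512              # classic misaligned start offset (32,256 bytes)

def block_hashes(buf: bytes, block: int = 4096) -> set:
    # Fingerprint every block on a fixed 4 KiB grid, as block-level dedup does.
    return {hashlib.md5(buf[i:i + block]).digest()
            for i in range(0, len(buf) - block + 1, block)}

aligned = block_hashes(data)
shifted = block_hashes(bytes(shift) + data)   # identical data, shifted on disk

print(len(aligned & shifted))   # 0 - no shared blocks despite identical data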

Interestingly enough, if you add PAM cards into the mix, these two issues become connected: a low de-dupe ratio means the de-dupe-aware PAM cards can serve fewer reads from memory rather than from disk.

Regards,
Radek

hmarko

Hi.

Will the dedup-aware caching introduced in 7.3 have a positive effect on performance in this case?

Can you estimate the performance degradation from using both in a non-aligned environment?

Thanks

radek_kubka

I reckon de-dupe-aware caching is 100% applicable here, hence a higher de-dupe ratio may help with performance.
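As a grossly simplified toy model of why the ratio matters (assuming the cache can hold every physical block and each logical block is read once):

def cache_hit_ratio(logical_blocks: int, dedup_ratio: float) -> float:
    # dedup_ratio = logical blocks per physical block; each physical
    # block misses the cache once, then serves all its logical reads.
    physical_blocks = logical_blocks / dedup_ratio
    return 1 - physical_blocks / logical_blocks

print(cache_hit_ratio(1000, 1.0))   # 0.0 - no dedup, every read is a cold miss
print(cache_hit_ratio(1000, 2.0))   # 0.5 - a 2:1 ratio halves the cold misses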

But again: misalignment impacts performance in the first place; on top of that, the de-dupe ratio will quite likely be low; and de-dupe in itself isn't performance-neutral (have a look at this: http://communities.netapp.com/thread/4351?tstart=0).

I don't think anyone will hazard a guess and give you hard numbers, but personally I'd steer clear of de-dupe before your VMDKs get properly aligned.

Regards,

Radek

hmarko

The reason I need the savings is to be able to do the alignment.

Currently no space is available to complete the alignment, so we thought of using de-dupe to free up that space.

I'm not looking for performance to get better with dedup, just to validate that it will not get worse.

radek_kubka

I understand that this is a chicken-and-egg dilemma.

There will be some performance impact, but you could try staged de-duping: de-dupe only certain volumes, do the alignment with the reclaimed space, then de-dupe some more volumes, and so on.
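A rough sketch of that loop, assuming the standard sis commands driven over ssh (the filer and volume names are placeholders):

import subprocess

FILER = "filer1"                       # placeholder controller name
VOLUMES = ["vm_vol1", "vm_vol2"]       # placeholder datastore volumes

def filer_cmd(cmd: str) -> None:
    # Run a single command on the filer via ssh; raise if it fails.
    subprocess.run(["ssh", FILER, cmd], check=True)

for vol in VOLUMES:
    filer_cmd(f"sis on /vol/{vol}")        # enable dedup on the volume
    filer_cmd(f"sis start -s /vol/{vol}")  # scan existing data, not just new writes
    # Wait for `sis status /vol/<vol>` to report Idle, check the savings
    # with `df -s`, then align the VMDKs in this datastore before moving
    # on to the next volume.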

This is what TR-3505 says about performance:

Write Performance to a Deduplicated Volume

The impact of deduplication on the write performance of a system is a function of the hardware platform that is being used, as well as the amount of load that is placed on the system.
If the load on a system is low—that is, for systems in which the CPU utilization is around 50% or lower—there is a negligible difference in performance when writing data to a deduplicated volume, and there is no noticeable impact on other applications running on the system.

On heavily used systems, however, where the system is nearly saturated with the amount of load on it, the impact on write performance can be expected to be around 15% for most NetApp systems. The performance impact is more noticeable on higher-end systems than on lower-end systems. On the FAS6080 system, this performance impact can be as much as 35%. The higher degradation is usually experienced in association with random writes.

Read Performance from a Deduplicated Volume
When data is read from a deduplication-enabled volume, the impact on the read performance varies depending on the difference between the deduplicated block layout compared to the original block layout. There is minimal impact on random reads.
Because deduplication alters the data layout on the disk, it can affect the performance of sequential read applications such as dump source, qtree SnapMirror or SnapVault source, SnapVault restore, and other sequential read-heavy applications. This impact is more noticeable in Data ONTAP releases earlier than Data ONTAP 7.2.6 and Data ONTAP 7.3.1 with data sets that contain blocks with repeating patterns (such as applications that preinitialize data blocks to a value of zero). Data ONTAP 7.2.6 and Data ONTAP 7.3.1 have specific optimizations, referred to as intelligent cache, that improve the performance of these workloads to be close to the performance of nondeduplicated data sets. This is useful in many scenarios, and especially in virtualized environments. In addition, the Performance Acceleration Modules (PAM and PAM II) are also deduplication aware, and they use intelligent caching.
