
Deduplication Performance Impact?

What kind of performance impact does deduplication have? (both during the scheduled deduplication process and during "business hours")

Re: Deduplication Performance Impact?

Andrew,

There are 3 key factors that affect the performance impact of dedupe:

1) The NetApp FAS or V-Series model

2) The amount of duplicate data in the volume

3) Other processes the system is servicing during the dedupe process

If we look at a typical scenario (impossible, I know, but bear with me) - let's say we have a FAS3070, a 1TB volume with 5% duplicate data, and the system is fairly quiet. This would be a typical setting for running dedupe overnight on a regular basis. I would expect this system to complete dedupe in less than an hour and have no impact on workloads (since there aren't any running).

On the other hand, if we have a FAS2050, 90% duplicate data, and the system is running at peak load - the dedupe process will take many hours and you will likely see some performance degradation resulting from dedupe.

The problem is that there are too many variables for us to give an exact number.  Instead, we recommend two things:

1) If your application or system is extremely performance-sensitive, don't run dedupe

2) If you are concerned that dedupe will create an excessive performance penalty, run a POC first

Also, remember that you can easily turn off dedupe, and/or "undo" dedupe if you don't like the results you get.

Hope that helps,

Larry

Re: Deduplication Performance Impact?

Thanks -- very helpful.

Would it be safe to say that performance on a deduplicated volume should be a non-issue? (i.e. production usage of a volume with deduplicated data)

(I've got some experience here but am betting you'll be able to provide a more comprehensive answer.)

Re: Deduplication Performance Impact?

Andrew,

In general, the answer is yes - a volume that has been deduped should not show any appreciable read performance degradation. Since WAFL is a random-layout filesystem, deduplication merely re-randomizes the data blocks. Also remember that NetApp dedupe does not use containers or lookup tables to rehydrate data; we just redirect the existing block pointer metadata. Having said that, I have seen a few cases where read performance degraded, but this is unusual and not predictable - it all depends on the block layout pattern and the pattern of read requests. And as I mentioned earlier, you can always undo dedupe if you don't like the results.
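To make the block-pointer point a bit more concrete, here's a minimal Python sketch of the general idea (purely illustrative - the names and structures below are simplified stand-ins, not WAFL's actual on-disk format): dedupe just repoints a duplicate file block at the physical block that already holds the same data, so a read afterwards is the same single pointer lookup it was before, with no container or lookup-table rehydration step.

```python
# Toy model of pointer-redirect dedupe (illustrative only; names and structures are stand-ins).

# Physical blocks on disk, keyed by a made-up physical block number.
physical_blocks = {100: b"AAAA", 101: b"AAAA", 102: b"BBBB"}

# Per-file block pointers: (file, file block number) -> physical block number.
pointers = {("f1", 0): 100, ("f2", 0): 101, ("f2", 1): 102}

def read_block(file, fbn):
    """A read follows the block pointer the same way whether or not the volume is deduped."""
    return physical_blocks[pointers[(file, fbn)]]

def dedupe(keep, dup):
    """Repoint the duplicate at the kept block and free the copy - metadata only, no data movement."""
    assert physical_blocks[pointers[keep]] == physical_blocks[pointers[dup]]
    freed = pointers[dup]
    pointers[dup] = pointers[keep]
    del physical_blocks[freed]

print(read_block("f2", 0))        # b'AAAA' before dedupe
dedupe(("f1", 0), ("f2", 0))
print(read_block("f2", 0))        # b'AAAA' after dedupe - same lookup, same data
```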

Another point worth mentioning is using dedupe together with the Performance Acceleration Module (PAM). PAM is dedupe-aware, so you can actually improve read performance after dedupe with this combination. We've done some tests (which I believe have been published) that show dramatic improvement in VDI "boot storm" response times as a result of dedupe and PAM.

What has your experience been?

Larry

Re: Deduplication Performance Impact?

Hi Larry,

I've got these numbers stuck in my mind: 0% performance degradation for writes & 7% for reads (de-duped volume vs. the original one).

Where did they come from? I heard this from one of the NetApp US folks during their visit to the UK about 2 (?) years ago (might that have been you by any chance? ;-)

So the question is: are these numbers (the one for reads in particular) anywhere close to today's A-SIS reality?

Regards,

Radek

Re: Deduplication Performance Impact?

Hey Radek - I think your numbers are actually backwards. You will see a small increase in CPU on writes, but you shouldn't see an increase on reads in most instances. The reason for the increase on writes is that when a block is written it is checked ("fingerprinted") to see if an identical block has already been written, and if so it becomes eligible for de-dupe on the next pass.

Check question 15 in this de-dupe FAQ; it's a very good read:

http://communities.netapp.com/docs/DOC-1701

As for your theory vs. reality question: I have numerous customers running de-dupe in many different forms (NFS shares for VMware, LUNs for VMware, CIFS, etc.). On the whole, they couldn't be happier with it. You do want to watch a filer that is already being hit hard for some other reason, because the additional CPU overhead from dedupe might put it over the edge. But, on the flip side, it is also very easy to turn off the fingerprint analysis if you suspect it is contributing to a greater problem.
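To illustrate where that write-side overhead comes from, here's a rough Python sketch (a simplification - the SHA-256 fingerprint and the catalog shown here are stand-ins, not what Data ONTAP actually uses): the only extra work at write time is computing the fingerprint and recording it, while the comparison and block sharing are deferred to the scheduled post-process pass.

```python
import hashlib

fingerprint_catalog = []   # appended to at block-write time, consumed later by the dedupe pass

def write_block(disk, pvbn, data):
    """Write the block as usual, plus one small extra step: record its fingerprint."""
    disk[pvbn] = data
    fingerprint_catalog.append((hashlib.sha256(data).hexdigest(), pvbn))
    # No lookup or comparison against existing blocks happens here; that is deferred
    # to the scheduled pass, which is why the write-time cost stays small.

disk = {}
write_block(disk, 200, b"hello world")
write_block(disk, 201, b"hello world")   # a duplicate, still written in full for now
print(len(fingerprint_catalog))          # 2 fingerprints queued for the next dedupe pass
```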

Aaron

Re: Deduplication Performance Impact?

Hi Aaron,

So we have a proper discussion (at last)!

Let me actually question what Antoni wrote in his document, as my understanding of A-SIS is that there should be no write performance penalty. The reason for this is that A-SIS is post-process de-duplication, so we are writing blocks which will be processed at a scheduled time & are not processed while they are being written to a volume.

The read penalty is definitely a hairier topic & I would really appreciate it if Larry came back to us and shed some additional light on it.

Regards,

Radek

Re: Deduplication Performance Impact?

Hi Radek-

Let's break down what's happening during the pre- and post-deduplication stages; this should help explain the performance impact.

Remember that NetApp deduplication on FAS and V-Series systems involves 2 steps: 1) enable dedupe on a volume (sis on), then at some point 2) dedupe the data in that volume (sis start).

When you 'sis on' a volume, the behavior of that volume changes. Every time it notices a block write request coming in, the sis process makes a call to Data ONTAP to get a copy of the fingerprint for that block so that it can store this fingerprint in its catalog file. This request interrupts the write stream and results in a 7% performance penalty for all writes into any volume with sis enabled. We know it's 7% because we measured it in our labs, and lab machines don't lie - however, every customer I've spoken to says they can't tell the difference, so I guess we humans aren't quite so precise.

Now, at some point you'll want to dedupe the volume using the 'sis start' command. As sis goes through the process of comparing fingerprints, validating data, and dedupe'ing blocks that pass the validation phase - in the end all we are really doing is adjusting some inode metadata to say "hey, remember that data that used to be here? Well, it's over there now." Nothing about the basic data structure of the WAFL file system has changed, except that you are traversing a different path in the file structure to get to your desired data block. Like going to the grocery store: you can take Elm Street or Oak Street, and depending on traffic either way might get you there faster.
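Here's a rough Python sketch of what that pass does at a conceptual level, continuing the toy model from the write side (purely illustrative - the grouping, byte-for-byte validation, and pointer update below show the general technique, not the real ONTAP internals): group blocks whose fingerprints match, verify the data really is identical so a fingerprint collision can never merge different blocks, and then change only the pointer metadata.

```python
from collections import defaultdict

def dedupe_pass(disk, pointers, catalog):
    """Post-process pass: fingerprint matches are only candidates until the data is validated."""
    by_fp = defaultdict(list)
    for fp, pvbn in catalog:
        by_fp[fp].append(pvbn)
    for pvbns in by_fp.values():
        keeper = pvbns[0]
        for dup in pvbns[1:]:
            if dup in disk and disk[dup] == disk[keeper]:     # validation: byte-for-byte compare
                for key, pvbn in list(pointers.items()):
                    if pvbn == dup:
                        pointers[key] = keeper                # adjust metadata only
                del disk[dup]                                 # release the duplicate block
    catalog.clear()

# Two identical blocks written earlier, now deduped by the scheduled pass.
disk = {200: b"hello world", 201: b"hello world"}
pointers = {("f1", 0): 200, ("f2", 0): 201}
catalog = [("fp-1", 200), ("fp-1", 201)]       # fingerprints recorded at write time
dedupe_pass(disk, pointers, catalog)
print(pointers)   # both files now point at block 200; the data itself never moved
```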

That's why NetApp dedupe *usually* has no perceivable impact on read performance - all we've done is redirect some block pointers. Accessing your data might go a little faster, a little slower, or more likely not change at all - it all depends on the pattern of the file system data structure and the pattern of requests coming from the application.

Larry

Re: Deduplication Performance Impact?

Hi Larry,

Thanks a million for your reply!

Although you proved me wrong ;-) I really appreciate you refreshing my memory & attributing this magic 7% correctly.

I wasn't aware (or I didn't remember) that fingerprints are collected upfront, whilst writes are coming in. Does it mean that (at least in theory) these new blocks will be processed faster during the actual de-dupe pass vs. the first run on a 'fresh' volume holding data without any de-dupe history?

Regards,
Radek

Re: Deduplication Performance Impact?

Precisely -- that's why you have to do a "sis start -s" after you enable dedup on a volume with existing data (so all those fingerprints can get generated in the first pass -- the dedup then happens in a second pass and all later passes).
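A short sketch of why that first "-s" pass exists, in the same toy Python terms as the earlier posts (again just an illustration, not ONTAP code): blocks written before dedupe was enabled never went through the write-time fingerprinting step, so the first pass has to walk the existing data and build the fingerprint catalog before any later pass can start sharing duplicates.

```python
import hashlib

def initial_scan(disk, catalog):
    """Fingerprint blocks that already existed before dedupe was enabled (the idea behind 'sis start -s')."""
    for pvbn, data in disk.items():
        catalog.append((hashlib.sha256(data).hexdigest(), pvbn))

disk = {300: b"old data", 301: b"old data"}   # written before dedupe was turned on
catalog = []
initial_scan(disk, catalog)                    # pass one: build the fingerprint catalog
# A later pass compares these entries and shares the duplicates, as sketched earlier.
```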

As for my experience -- it's basically the same as Aaron's: multiple (very happy) customers using it with no perceived performance impact.

Very good stuff overall... I especially LOVE it in VMware environments (after a robust NFS implementation, dedup is probably what I miss the most when working with VMware on other arrays).