RCU utility with de-duplication ON increases cloning time ??

amit_abs_99

Hi All,

I am using the RCU utility to clone VMs on an ESX server through the VMware vSphere client. I noticed that after turning de-duplication ON on the NetApp datastore, the cloning time roughly tripled (from about 1.5 minutes to 3-5 minutes). Is this a known issue, or am I doing something wrong? Please let me know if anyone has views on this.

Thanks

Amit

8 REPLIES

costea

When RCU hits the maximum shared-blocks limit, it reverts to using an NDMP copy to create a duplicate of the source VMDK file.  The limit is 255 in Data ONTAP 7.3 and is reached more quickly if you have deduplication enabled on the volume.
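
To make that fallback concrete, here is a minimal Python sketch of the decision RCU effectively makes for each source VMDK. The names (MAX_SHARED_REFS, block_ref_counts, file_flexclone, ndmpcopy) are placeholders for illustration only, not the actual RCU or Data ONTAP interfaces.

    # Hypothetical sketch of the clone-path choice described above (not a real API).
    MAX_SHARED_REFS = 255  # per-block share limit in Data ONTAP 7.3

    def clone_vmdk(block_ref_counts, file_flexclone, ndmpcopy):
        # block_ref_counts: current share count for each block of the source VMDK.
        # file_flexclone / ndmpcopy: callables standing in for the two clone paths.
        if any(refs >= MAX_SHARED_REFS for refs in block_ref_counts):
            # Some block cannot accept another reference, so fall back to a
            # physical NDMP copy of the whole VMDK (slower, data is copied).
            return ndmpcopy()
        # Otherwise the clone is just a new set of block references (fast).
        return file_flexclone()

    # Example: one block is already shared 255 times, so the copy path is taken.
    clone_vmdk([3, 255, 17], lambda: "flexclone", lambda: "ndmpcopy")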

amit_abs_99

Thanks for the reply. So is there any way to make the cloning go faster if we are running 7.3 with de-dupe on? Is there a way to increase the 255 limit?

Thx

Amit

costea

Unfortunately, the limit cannot be increased.  If your goal is speed, you should not enable dedupe.

amit_abs_99

Thanks for your help

dejanliuit

I would like to hear some more details regarding the limitations.

  • You mention a limit of 255 on ONTAP 7.3. Does that mean that it would be higher with ONTAP 8.x? Is a 64-bit aggregate a requirement then?
  • Would a workaround be to have several golden images to clone from?

forgette

"You mention limit of 255 on Ontap7.3. Does that mean that it would be higher with Ontap 8.x?"
While we can't discuss futures in this forum, I can tell you the '255 limit' goes way up in the near future.  The better news is that it won't be an issue for file-level FlexClone operations anymore.  If you'd like specific information, please request a meeting with your NetApp SE.


"Is a 64bit aggregate a requirement then?"
No, this is unrelated.

"Would a workaround be to have several golden images to clone from?"
This won't help.  Here are some recipes based on the goal and use case.  Note that this becomes a single recipe for all use cases in the near future (*see above).

  • If fast deploy times are the goal and you tend to create one clone at a time, disable dedup on the volume (see the command sketch after this list).  You'll still have good savings and shouldn't drop back to copy anymore.

  • If fast deploy times are the goal and you tend to create more than one VM at a time, leave dedup enabled on the volume.  This will give you offloaded VMDK cloning/copying (saving network and ESX server resources) and provide ongoing block deduplication.  The reason I suggest this is that the first VMDK will use an offloaded copy (NDMP), but subsequent VMDKs will use file-level FlexClone.

  • If your goal is to get the most space savings, leave dedup enabled on the volume.  This will give you offloaded VMDK cloning/copying (saving network and ESX server resources) and provide ongoing block deduplication.
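
For the first recipe, dedup is toggled per volume with the 7-mode sis command.  A rough sketch follows, assuming a datastore volume named /vol/vm_datastore (the volume name is only an example; check the syntax against your Data ONTAP release):

    sis status /vol/vm_datastore    # show whether dedup (A-SIS) is enabled
    sis off /vol/vm_datastore       # recipe 1: disable dedup for the fastest single clones
    sis on /vol/vm_datastore        # recipes 2 and 3: leave/enable dedup on the volume
    sis start -s /vol/vm_datastore  # optionally scan existing data after enabling

Disabling dedup does not undo the sharing that is already on the volume, so the existing savings stay in place.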

brendanheading

How does the "max shared blocks" limit get reached, and why does this make a clone impossible?

I hope it's not as simple as having more than 255 similar blocks between two LUNs, otherwise 1MB worth of zeroed blocks inside a LUN would make a clone impossible? This would basically imply that file clones don't play well with deduplication.

forgette

As I mentioned in another branch of this thread:  While we can't discuss futures in this forum, I can tell you the '255 limit' goes way up in the near future.  The even better news is that it won't be an issue for file-level FlexClone anymore either.  If you'd like specific information, please request a meeting with your NetApp SE.

  • "How does the "max shared blocks" limit get reached ?"

The 'max shared' limit is per block.  The deduplication and file-level FlexClone technologies use references to blocks from the same or different files/LUNs in order to eliminate block duplication.

  • "I hope it's not as simple as having more than 255 similar blocks between two LUNs, otherwise 1MB worth of zeroed blocks inside a LUN would make a clone impossible?"

The 'max shared' limit is per block.  Once a block has been referenced 255 times, it must be copied before something else can reference it.  This does not make cloning impossible, but it can make it slower.  We (the developers of RCU) found that we could duplicate a file which has some blocks at "max shared" more quickly using ndmpcopy than by waiting for file-level FlexClone to duplicate a bunch of small block ranges.  * Please see my note above about how this changes in the near future.
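
As a rough illustration of why dedup reaches that ceiling sooner (the counts below are invented for the example, not taken from this thread):

    # Illustrative only: counting references against the 255-share ceiling.
    MAX_SHARED_REFS = 255

    refs_from_clones = 200  # e.g. 200 VMs cloned from one golden image share a block
    refs_from_dedup = 80    # e.g. dedup folds in matching blocks from other files

    total = refs_from_clones + refs_from_dedup
    if total > MAX_SHARED_REFS:
        # The block hit the ceiling after 255 references; the remaining 25
        # requests are satisfied by physically copying the block (or, in
        # RCU's case, by falling back to an NDMP copy of the file).
        print(total - MAX_SHARED_REFS, "references could not be shared")

Without dedup, only the clone references count against the ceiling, which is why the same source VMDK can be cloned more times before RCU has to drop back to copying.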

  • "This would basically imply that file clones don't play well with deduplication."

On the contrary, these technologies complement each other very well.  The file-level FlexClone starts the volume off with very few duplicate blocks, and batch deduplication keeps it that way.
