I am using the RCU utility to clone VMs on an ESX server via the VMware vSphere client. I noticed that after turning deduplication ON on the NetApp datastore, the cloning time increased roughly threefold (from 1.5 minutes to 3-5 minutes). Is this a known issue, or am I doing something wrong? Please let me know if anyone has views on this.
When RCU hits the max shared blocks limit, it falls back to using an NDMP copy to duplicate the source VMDK file. The limit is 255 references per block in 7.3, and it is reached more quickly when deduplication is enabled on the volume.
Thanks for the reply. So, if we are running 7.3 with de-dupe on, is there any way to make the cloning go faster? Is there a way to increase the 255 limit?
I would like to hear some more details regarding the limitations.
"You mention a limit of 255 on ONTAP 7.3. Does that mean it would be higher with ONTAP 8.x?"
While we can't discuss futures in this forum, I can tell you the '255 limit' goes way up in the near future. The better news is that it won't be an issue for file-level FlexClone operations anymore. If you'd like specific information, please request a meeting with your NetApp SE.
"Is a 64-bit aggregate a requirement then?"
No, this is unrelated.
"Would a workaround be to have several golden images to clone from?"
This won't help. Here are some recipes based on the goal and use case. Note that this becomes a single recipe for all use cases in the near future (see above).
You'll still have good savings and shouldn't drop back to copy anymore.
How does the "max shared blocks" limit get reached, and why does it make a clone impossible?
I hope it's not as simple as having more than 255 similar blocks between two LUNs; otherwise, 1 MB worth of zeroed blocks inside a LUN would make a clone impossible. That would basically imply that file clones don't play well with deduplication.
As I mentioned in a branch of this thread: while we can't discuss futures in this forum, I can tell you the '255 limit' goes way up in the near future. The even better news is that it won't be an issue for file-level FlexClone anymore either. If you'd like specific information, please request a meeting with your NetApp SE.
The 'max shared' is a per-block limit. The deduplication and file-level FlexClone technologies use references to blocks from the same or different files/LUNs in order to eliminate block duplication. Once a block has been referenced 255 times, it must be copied before anything else can reference it. This does not make cloning impossible, but it can make it slower: we (the developers of RCU) found that we could duplicate a file with some blocks at "max shared" more quickly using ndmpcopy than by waiting for the file-level FlexClone to duplicate a bunch of small block ranges. Please see my note above about how this changes in the near future.
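The per-block behavior described above can be sketched roughly as follows. This is an illustrative model only, not NetApp code; the `MAX_SHARED` constant and the `clone_blocks` function are hypothetical names, with 255 taken from the limit stated for ONTAP 7.3:

```python
# Illustrative sketch (not NetApp code): models how a per-block
# "max shared" reference limit forces physical copies during a
# file-level clone. MAX_SHARED = 255 per the ONTAP 7.3 limit.

MAX_SHARED = 255

def clone_blocks(ref_counts):
    """Given the current reference count of each source block, return
    how many blocks can be cloned by adding a reference and how many
    must be physically copied because they already hit the limit."""
    shared = copied = 0
    for refs in ref_counts:
        if refs < MAX_SHARED:
            shared += 1   # cheap: just add another reference
        else:
            copied += 1   # expensive: block data must be duplicated
    return shared, copied

# A deduplicated volume pushes blocks toward the limit quickly: e.g.
# a run of identical (zeroed) blocks dedupes to one block whose
# reference count climbs with every subsequent clone.
counts = [3, 255, 120, 255, 7]
print(clone_blocks(counts))  # → (3, 2)
```

This also shows why enabling deduplication can slow cloning: dedupe concentrates references onto fewer physical blocks, so more blocks sit at the limit when a clone is requested.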
On the contrary, these technologies complement each other very well: file-level FlexClone starts the volume off with very few duplicated blocks, and batch deduplication keeps it that way.