2010-07-29 07:37 AM
I have been experimenting with the RCU, first 3.0 and now 3.1.
Environment is vSphere 4.0 Update 2; templates are on NFS.
With 3.0 I got mixed results. When I create a single rapid clone from a template, it sometimes creates the clone in a matter of 10 seconds; at other times it takes much longer. When the NetApp "create rapid clone" task gets to 15%, a second task starts: NetApp "initial copy of image". When this completes, the clone doesn't appear to be a file clone as expected, because the volume's free space is reduced by the size of the template it was cloned from.
I uninstalled RCU 3.0 and installed VSC 2.0 this morning, as I thought that VSC 1.0 and RCU 3.0 might not be compatible with vSphere 4 Update 2. Now that I have 3.1 installed, the behavior is consistent: a single clone, or the first clone when provisioning multiples, uses space equal to the template size; additional clones appear to be file clones, as they do not use additional space.
Is this how it's supposed to work?
If it is, it doesn't seem very useful for provisioning single machines, which is the usual scenario with servers.
2010-07-29 08:56 AM
parkerdrace wrote:Is this how it's supposed to work?
Yes, that is the expected behavior of RCU 3.0 and the provisioning and cloning feature in VSC 2.0. The challenge has to do with the maximum number of times a block can be shared in Data ONTAP 7.3.x and 8.0.x. The first bit of good news is that this limit gets much larger in later versions, so this becomes much less of an issue. There are also other improvements coming with regard to file-level FlexClone.
parkerdrace wrote:If it is, it doesn't seem very useful for provisioning single machines, which is the usual scenario with servers?
So, what benefits are you getting until then? First, as you've experienced, if none of the blocks is at the maximum number of times it can be shared, you end up with a very fast clone operation which consumes almost no additional capacity. Second, if you do hit the maximum, RCU "falls back" to a controller-based copy. This is not only faster than the native clone operation, but it completely offloads the operation to the controller, reducing the I/O impact on the ESX hosts as well as the network. You can think of this as having VAAI copy offload available well in advance (it has been part of RCU since 2.0). Although the controller copy does consume capacity, that capacity can be reclaimed by running dedupe on the volume (which you can kick off from within the VSC provisioning and cloning right-click menu).
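The fallback described above can be sketched roughly like this. This is a minimal illustration, not RCU's actual implementation: the `MAX_SHARED` value, the function name, and the idea of inspecting per-block reference counts directly are all assumptions for the sake of the example.

```python
# Illustrative sketch of the clone-path decision described above.
# MAX_SHARED and all names here are assumptions, not RCU internals.

MAX_SHARED = 255  # hypothetical per-block sharing limit (Data ONTAP 7.3.x era)

def clone_path(block_ref_counts):
    """Return which clone path would be taken for a file whose blocks
    currently have the given sharing/reference counts."""
    if all(refs < MAX_SHARED for refs in block_ref_counts):
        # Every block can be shared at least once more: fast file-level
        # clone, consuming almost no additional capacity.
        return "file clone"
    # At least one block is already at the sharing maximum: fall back
    # to a controller-based copy, offloaded from the ESX host.
    return "controller copy"
```

The point of the model is simply that a single block at the limit is enough to force the full (though still offloaded) copy.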
Also, please take a look at the following discussion for more info.
2010-07-29 09:11 AM
I'd say your answer is correct, but I am no longer seeing this behavior after doing a full vSphere clone of the template and specifying a thick disk format, as I stated in a reply.
2010-07-30 07:36 PM
Once dedupe runs on the volume, your thick vmdk will reference many blocks that are now at the maximum reference count (because the free space in the vmdk file is all zeros). When you try to clone this VM (after dedupe), you'll notice RCU creates an (offloaded) copy. Unfortunately, because it is thick, the operation takes longer and consumes more capacity (because all the zeros are copied). My suggestion is to use thin vmdk files on NFS (this is the default anyway).
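To see why the thick disk triggers the fallback after dedupe, consider a toy model: every zero-filled block in a thick vmdk dedupes down to one shared zero block, so that block's reference count climbs with the disk's free space. The block counts, the sharing limit, and all names below are assumptions for illustration only.

```python
# Toy model of dedupe on a thick vs. thin vmdk. Sizes and the sharing
# limit are assumptions for illustration, not Data ONTAP internals.

MAX_SHARED = 255    # hypothetical per-block sharing limit
ZERO = b"\x00"      # stands in for one zero-filled block

def zero_block_refs(blocks):
    """How many logical blocks collapse onto the single deduped zero block."""
    return sum(1 for b in blocks if b == ZERO)

# A thick vmdk with mostly free (zeroed) space: 256 data blocks, 2304 zeros.
thick = [b"data"] * 256 + [ZERO] * 2304
# A thin vmdk only allocates the blocks that actually hold data.
thin = [b"data"] * 256

print(zero_block_refs(thick) > MAX_SHARED)  # → True: zero block is over the limit
print(zero_block_refs(thin) > MAX_SHARED)   # → False: no zeros were ever allocated
```

So after dedupe, the thick disk is guaranteed to contain a block at the sharing maximum, forcing the controller copy, while the thin disk avoids the problem by never allocating the zeros in the first place.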
2010-07-31 04:35 AM
What I did was clone the existing template and specify that the file was thick. It didn't appear to actually convert the disk to thick: the resulting cloned vmdk was only slightly larger than the original (about 100 KB). It didn't create a fully zeroed thick disk.
The only difference I'm seeing is that a single clone of the new template is now a file clone, not a full copy as it was before I cloned it to "thick" format. I tested after dedupe was run, and it made no difference. When it does the "initial copy" step, a file called 0000hd000-flat.vmdk is created that equals the thick size of the template; this happens in a matter of seconds. Before, it would create a file using the template name, which took several minutes.
This is different from what you confirmed to be the default normal behavior, and preferable in my opinion.