ONTAP Discussions

Volume clone split very slow

nelsonmartins

Hi,

 

    Last week, on 03/11, I started a volume clone split so that afterwards I could move the split clone to another aggregate. I started the split, recorded the job id, and one week later it still hasn't finished.

 

-Why is it taking so long? Isn't this basically a normal copy of blocks? Why does it take so much time?

-Can I move the clone volume to another node (not in the same HA pair), in this case node 4 of a 4-node cluster, without finishing this split operation first?

 

Details:

STGFAS04::> volume clone split start -vserver san_svm04 -flexclone ESBL_CL2_vol
Warning: Are you sure you want to split clone volume ESBL_CL2_vol in Vserver san_svm04 ?
{y|n}: y
[Job 5351] Job is queued: Split ESBL_CL2_vol.
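
For reference, progress can be tracked either by the recorded job id or with the split status command (both are standard commands; the exact columns shown may vary by ONTAP release):

STGFAS04::> job show -id 5351
STGFAS04::> volume clone split show -vserver san_svm04 -flexclone ESBL_CL2_vol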

 

One hour later:

STGFAS04::> volume clone split show
                                Inodes              Blocks
                        --------------------- ---------------------
Vserver   FlexClone      Processed      Total    Scanned    Updated % Complete
--------- ------------- ---------- ---------- ---------- ---------- ----------
san_svm04 ESBL_CL2_vol          55      65562    1532838    1531276          0

 

One week later:

STGFAS04::> volume clone split show
                                Inodes              Blocks
                        --------------------- ---------------------
Vserver   FlexClone      Processed      Total    Scanned    Updated % Complete
--------- ------------- ---------- ---------- ---------- ---------- ----------
san_svm04 ESBL_CL2_vol         440      65562 1395338437 1217762917          0

 

It will never end......

 

This is a FAS8060 running the latest release, 8.3.1 GA! How come?

How can I make this go faster?

 

Thanks in advance,

Nelson Martins

 


2 REPLIES

JamesIlderton

What version of ONTAP are you running? Does the volume contain a LUN? You may be running into this bug; check the link for a workaround:

http://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=834509
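
If you want a quick way to check whether the clone contains a LUN (one of the conditions mentioned in that bug), something like this should work; the vserver and volume names are taken from the output in your post:

STGFAS04::> lun show -vserver san_svm04 -volume ESBL_CL2_vol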

nelsonmartins

Hi James,

 

  Thanks for the reply. I'm running the latest version, 8.3.1.

There's a nice workaround for this:

The problem with clone splitting is that it processes one inode at a time, serially, so it can take a long time when the volume has a high inode count.

 

There is a cool workaround that has the same effect (a writable copy of the volume). It consists of the following:

 

Instead of cloning the volume and then splitting it, do the following (a full sketch of the sequence is shown after the example below):

  1. Create the clone of the volume.
  2. Then perform a "volume move" of the FlexClone. This creates a full copy of all the data from the FlexClone on a different aggregate, and it is much faster than splitting the clone!
  3. Finally, move the volume back to the desired aggregate.

For example:

    ClusterTest::*> vol move start -vserver test_2 -volume vol_win_dp_clone -destination-aggregate aggrFC_node3

      (volume move start)

    Warning: Volume will no longer be a clone volume after the move and any associated space efficiency savings will be lost. Do you want to proceed? {y|n}: y
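
Putting it together, a rough sketch of the whole sequence looks like this. The vserver, volume and aggregate names (including the parent volume "vol_win_dp" and the final aggregate "aggr_final") are just placeholders from a test setup, so adjust them to your environment; the last move is only needed if the intermediate aggregate is not where the volume should end up.

Step 1 - create the clone:
ClusterTest::*> volume clone create -vserver test_2 -flexclone vol_win_dp_clone -parent-volume vol_win_dp

Step 2 - move the FlexClone to a different aggregate (this is what detaches it from the parent):
ClusterTest::*> volume move start -vserver test_2 -volume vol_win_dp_clone -destination-aggregate aggrFC_node3

Watch the move until it completes:
ClusterTest::*> volume move show -vserver test_2 -volume vol_win_dp_clone

Step 3 - move it back to the aggregate where it should live:
ClusterTest::*> volume move start -vserver test_2 -volume vol_win_dp_clone -destination-aggregate aggr_final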