Data Backup and Recovery

passing the vol clone name to a script

hmarko

Hi.

Is it possible to pass the name of the vol clone to a script as part of a cloneVol workflow? (SC 4.1P2)

[Thu Aug  7 11:03:19 2014] INFO: STORAGE-02037: Creating clone [cl_config_vol1_20140807110304] of volume [vol1] based on Snapshot copy [snap-daily_recent] finished successfully.

Thanks !


5 REPLIES

sivar

Yes, you can.

Except for the volume name, you should be able to pass the other variables to a script.

Please refer to

https://communities.netapp.com/thread/34086

cl_config_vol1_%SNAP_TIME
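For example, a minimal sketch of picking these variables up inside a script (assuming SC exports CONFIG_NAME, SNAP_TIME and VOLUMES to the script environment, with VOLUMES formatted as svm1:vol1,vol2;svm2:vol3, and that clones follow the cl_<config>_<volume>_<snap time> pattern above):

#!/usr/bin/perl
# Sketch only: rebuild the clone names from the variables SnapCreator is
# assumed to pass in the environment (CONFIG_NAME, SNAP_TIME, VOLUMES).
foreach my $volpersvm (split /;/, $ENV{VOLUMES}) {
    my ($svm, $vollist) = split /:/, $volpersvm;
    foreach my $vol (split /,/, $vollist) {
        my $clone = 'cl_' . $ENV{CONFIG_NAME} . '_' . $vol . '_' . $ENV{SNAP_TIME};
        print "clone of $vol on $svm: $clone\n";
    }
}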

Thanks,
Siva Ramanathan

hmarko

Thanks Siva.

This was very helpful.

Now I have another challenge.

My customer's environment is cDOT; clones are created and should be mounted over NFS using predefined fstab entries.

I created a mount cmd script to achieve this; it:

- unmounts the created volume clone from the namespace

- mounts it at the path expected by the NFS client

- creates the export policy and configures it on the volume (for some reason the SC built-in functionality doesn't work)

This works great, but I still have an issue with a requirement to create clones based on the cloned volume (using a new SC config), which makes its naming very awkward and complex.

Is there any way I can configure the clone name differently? I tried a vol rename, but then it created an issue when running the umount workflow (the volume is no longer identified by SC).

BTW, this is the mount script I created:

#!/usr/bin/perl
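# Mount script for the SnapCreator clone workflow on cDOT: remounts each
# volume clone at the junction path expected by the NFS clients and attaches
# an export policy. Expects the SnapCreator environment variables VOLUMES,
# SNAP_TIME, CONFIG_NAME, APPSUFFIX and NFSHOSTS to be set.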

my $volumes = $ENV{VOLUMES};

if (!$volumes) {

          exit_with_error("Error: environment variable VOLUMES was not found, this script must run as part of snapcreator job");

}

my $appsuffix = $ENV{APPSUFFIX};

if (!$appsuffix) {

          exit_with_error("Error: environment variable APPSUFFIX (custom) was not found, this script must run as part of snapcreator job");

}
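# NFSHOSTS (custom variable) is a colon-separated list of NFS client addresses used for the export-policy rules.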

if (!$ENV{NFSHOSTS}) {

          exit_with_error("Error: environment variable NFSHOSTS (custom) was not found, this script must run as part of snapcreator job");

}

my @nfshosts = split(/\:/,$ENV{NFSHOSTS});

if (!$ENV{SNAP_TIME}) {

          exit_with_error("Error: environment variable SNAP_TIME was not found, this script must run as part of snapcreator job");

}

if (!$ENV{CONFIG_NAME}) {

          exit_with_error("Error: environment variable CONFIG_NAME was not found, this script must run as part of snapcreator job");

}
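# VOLUMES is formatted as svm1:vol1,vol2;svm2:vol3,...
# First pass: verify that no target junction path is already in use and record which export policies already exist.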

@volspersvm = split(/\;/,$volumes);

foreach $volpersvm (@volspersvm) {

          ($svm, $vollist)= split (/\:/,$volpersvm);

          @vols = split (/\,/,$vollist);

          foreach $vol (@vols) {

                    my $clone ='cl_'.$ENV{CONFIG_NAME}.'_'.$vol.'_'.$ENV{SNAP_TIME};

                    my $exportpolicy = 'cl_'.$vol.$appsuffix;

                    my $junctionpath = '/vol/'.$exportpolicy;

                    $cmd = 'volume show -fields junction-path -junction-path  '.$junctionpath;

                    @out = run_ssh_cmd ($svm,$cmd,1);

                    if (grep {/(\S+)\s+$junctionpath/} @out) {

                              exit_with_error("Error: volume junction path $junctionpath is already used by another volume",1);

                    }

                    $cmd = 'vserver export-policy show -policyname  '.$exportpolicy;

                    @out = run_ssh_cmd ($svm,$cmd,1);

                    $exportpolicy{$svm}{$exportpolicy}=1 if (grep {/^Policy Name:\s+$exportpolicy/} @out);

          }

}
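# Second pass: remount each clone at the expected junction path, create the export policy if needed, add a rule for every NFS host, and attach the policy to the clone.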

foreach $volpersvm (@volspersvm) {

          ($svm, $vollist)= split (/\:/,$volpersvm);

          @vols = split (/\,/,$vollist);

          foreach $vol (@vols) {

                    my $clone ='cl_'.$ENV{CONFIG_NAME}.'_'.$vol.'_'.$ENV{SNAP_TIME};

                    my $exportpolicy = 'cl_'.$vol.$appsuffix;

                    my $junctionpath = '/vol/'.$exportpolicy;

 

                    $cmd = 'volume unmount '.$clone;

                    run_ssh_cmd ($svm,$cmd);

                    $cmd = 'volume mount '.$clone.' '.$junctionpath;

                    run_ssh_cmd ($svm,$cmd);

                    if (not exists $exportpolicy{$svm}{$exportpolicy}) {

                              $cmd = 'vserver export-policy create '.$exportpolicy;

                              run_ssh_cmd ($svm,$cmd);

                    }

 

                    foreach $host (@nfshosts) {

                              $cmd = 'vserver export-policy rule create -policyname '.$exportpolicy.' -clientmatch '.$host.' -rorule sys -rwrule sys -superuser sys';

                              run_ssh_cmd ($svm,$cmd);

                    }

                    $cmd = 'volume modify '.$clone.' -policy '.$exportpolicy;

                    run_ssh_cmd ($svm,$cmd);

          }

}
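# Print the error message, optionally clean up the clones created by this run (unmount, offline and delete them), and exit with a non-zero status.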

sub exit_with_error {

          my $msg = $_[0];

          my $forceclonedelete = $_[1];

 

          print "$msg\n";

          if ($forceclonedelete) {

                    print "#################################### deleting clones created due to error ##############################\n";

                    foreach $volpersvm (@volspersvm) {

                              ($svm, $vollist)= split (/\:/,$volpersvm);

                              @vols = split (/\,/,$vollist);

                              foreach $vol (@vols) {

                                        my $clone ='cl_'.$ENV{CONFIG_NAME}.'_'.$vol.'_'.$ENV{SNAP_TIME};

                                        $cmd = 'volume unmount '.$clone.';volume offline '.$clone;

                                        run_ssh_cmd ($svm,$cmd);

                                        $cmd = 'set -confirmations off;volume delete '.$clone;

                                        run_ssh_cmd ($svm,$cmd);

                              }

                    }

          }

          exit 1;

}
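# Run an ONTAP CLI command on the given SVM over SSH as vsadmin and return its output; a true third argument suppresses the console logging.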

sub run_ssh_cmd {

          $sshhost = $_[0];

          $cmd = $_[1];

          $notprint = $_[2];

 

          $base = '/usr/bin/ssh vsadmin@'.$sshhost.' ';

          print "\nrunning command: $cmd on host $sshhost\n" if !$notprint;

          my @out = `$base "$cmd"`;    # quote the command so semicolon-separated ONTAP commands are not split by the local shell

          print "command output:\n @out" if grep {/[A-Za-z]/} @out and !$notprint;

          return @out;

}

sivar

Thank you for sharing the great script!

I understand your pain with long volume names when clones of clones are involved.

SCF is hard-coded to use these long names.

So, I don't have an easy answer for you.

Let me reach out to the product development team and see if a workaround can be provided in the meantime.

hmarko

Hi.

I worked out a solution for this, but I hope it will be addressed in future releases.

The workflow looks like this (a sketch of the rename and comment handling follows after the steps):

mount

- SC creates the clone

- I rename the clone to the name I want it to have and add the export policy; I also store the original clone name in the volume comment field (as part of MOUNT_CMD01)

umount

- I rename the volume back to the original clone name, based on the name stored in the comment, and remove the export policy (as part of UMOUNT_CMD01)

- SC destroys the clone
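Roughly, in the same style as the Perl mount script above, the two rename steps could look like the sketch below (run_ssh_cmd and the vsadmin SSH access are borrowed from that script, the SVM/volume names are placeholders, and this is a sketch rather than the script actually used):

# Sketch only: placeholders, not the customer script.
# Assumes the run_ssh_cmd() helper from the Perl mount script above.
my ($svm, $vol, $appsuffix) = ('svm1', 'vol1', '_app');   # placeholder values

# MOUNT_CMD01 side: store the SC clone name in the volume comment field,
# then rename the clone to the short, stable name the fstab entries expect.
my $clone   = 'cl_' . $ENV{CONFIG_NAME} . '_' . $vol . '_' . $ENV{SNAP_TIME};
my $newname = 'cl_' . $vol . $appsuffix;
run_ssh_cmd($svm, 'volume modify -volume ' . $clone . ' -comment ' . $clone);
run_ssh_cmd($svm, 'volume rename -volume ' . $clone . ' -newname ' . $newname);

# UMOUNT_CMD01 side: read the original clone name back from the comment and
# rename the volume so SC can identify and destroy the clone again.
my @show = run_ssh_cmd($svm, 'volume show -volume ' . $newname . ' -fields comment', 1);
my ($orig) = map { /\Q$newname\E\s+(cl_\S+)\s*$/ ? $1 : () } @show;
run_ssh_cmd($svm, 'volume rename -volume ' . $newname . ' -newname ' . $orig) if $orig;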

I had to change the script to PowerShell due to a requirement from the customer; you can find it here:

https://www.dropbox.com/s/uq4pfjupdwokti4/SnapCreator.zip

sivar

This is a very creative use of the volume comment field!

Thank you again for sharing the PowerShell script.

We are so happy to have you in the community, and we appreciate your feedback and input here.

I will surely pass this along to our TME and PM folks so they are aware of this use case and can simplify the clone-of-clone workflow.

Have a Great Day.
