2009-05-06 11:53 AM
I am attempting to back up a nightly SMVI snapshot to tape for longer-term retention using NetBackup 22.214.171.124 and NDMP v4. Obviously, if I had a choice, I would recommend that the customer use SnapMirror and SnapVault, but they have no off-site NetApp storage systems, so they want to back up one SMVI snapshot per day and send those tapes off-site for a certain period of time, just in case. Here are some of the specifics:
The main problem I am having is that SMVI does not use a static name for snapshots. Even if I call the snapshot NBU-Datastore-Backup in SMVI, SMVI actually names the snapshot smvi_NBU-Datastore-Backup_<date><time>_<GUID>. So even if I only take one snapshot per day, overwriting the previous day's snapshot, the name is different each time because the date/time and GUID are appended to the snapshot name.
This makes backing up the snapshot via NDMP tricky, because backup policies expect a fixed path and NDMP does not support wildcard characters. So I can't say backup /vol/vol1/.snapshot/smvi_NBU-Datastore-Backup_*, for instance. This is what is causing me BIG headaches. Without being able to use wildcards, how can I do this easily?
Here are a few possibilities I thought of:
Any other suggestions? I can't believe it is this difficult to backup an SMVI snapshot to tape. I must be missing something.
2009-05-06 12:13 PM
You're not missing anything and this has been discussed before. I mentioned it some time back but I cannot find my original post.
I believe the ability to have a "predictable" snapshot name is a feature penned for version 2.0 of SMVI.
2009-05-06 01:01 PM
Well, I can get the names of the snapshots from smvi backup list. The problem is I need a static name to use with NetBackup. I thought about renaming the snapshot before running the backup, running the backup, then renaming it back to the original name using the NetBackup ndmp_start_notify and ndmp_end_notify scripts. The problem is, what happens if the backup or script aborts for whatever reason? Then the snapshot may not get renamed back to its original name, and that will probably screw up SMVI.
Then I thought, why not clone the volume using the latest SMVI snapshot? That could work. In other words, when the backup kicks off, the ndmp_start_notify script gets called; the script figures out the latest SMVI snapshot and clones the volume from that snapshot. Then NetBackup backs up the clone, which is always named the same thing for each volume. Once the backup completes, the ndmp_end_notify script is called and the clone is removed. If something aborts anywhere, the clone remains, but the next time the backup starts, the ndmp_start_notify script takes care of removing the stale clone first.
Comments??? Again, the problem is that NetBackup needs a static name to back up. And since NDMP doesn't support wildcards, I either have to change the snapshot name, FlexClone the volume with a name known by NetBackup, or update the NetBackup policy every time with the name of the latest snapshot.
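Since SMVI embeds a sortable date/time stamp in the snapshot name, the "figure out the latest SMVI snapshot" step can be done without wildcards. A minimal sketch (the helper name is hypothetical, and the ssh invocation is an assumption about how a notify script would reach the filer):

```shell
# Hypothetical helper for an ndmp_start_notify-style script: read
# `snap list` output on stdin and print the newest smvi_* snapshot name.
# This works because the embedded date/time stamp makes the names sort
# chronologically as plain text.
latest_smvi_snapshot() {
    grep -o 'smvi_[^ ]*' | sort | tail -n 1
}

# In the real script the input would come from the filer, e.g.:
#   ssh <filer> snap list <volume> | latest_smvi_snapshot
```

The clone can then always be created with the same name, so the NetBackup file list never has to change.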
2009-05-06 01:13 PM
A FlexClone of the volume (from the snapshot) sounds like a good idea and might be safer than trying to play nice with SMVI. Let us know how this plays out. We don't currently use SMVI (VIBE works fine for now) but may (have to) one day in the future.
2009-05-07 04:29 PM
Actually, I am doing exactly the same thing and decided not to interfere with SMVI at all. I figured that if a backup fails, the storage admin has to clean up the clone and snapshot, so I found it easier to change the NBU backup selection list with the latest snapshot name.
Currently I am backing up all snapshots and observing slow performance. If I back up volumes, it backs up data at 70-90 MB/s, but when NBU backs up a snapshot, network throughput is about 20-40 MB/s on average. I am using remote NDMP, so the data travels back to the media server and then goes to disk storage.
I would be grateful if you would let me know what backup performance you are getting.
2009-05-07 05:05 PM
I will be performing some tests next week with the customer and will let you know the speeds I get. Here is what I am proposing, so let me know how close this is to what you are doing:
Is this the way you are doing it, or are you doing something different? For instance, let's say the latest SMVI snapshot is 'smvi_backup_Datastore-Daily_200905072330_<GUID>'. The ndmp_start_notify.cmd file ssh's over to the filer and runs 'vol clone create Datastore1_Backup -b Datastore1 smvi_backup_Datastore-Daily_200905072330_<GUID>', NetBackup backs up /vol/Datastore1_Backup since that is what is listed in the policy's file list, and finally, once the backup is complete, the ndmp_end_notify.cmd script deletes Datastore1_Backup via ssh.
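The sequence above could be sketched as a dry run that only builds and echoes the commands the notify scripts would execute (the filer name is a placeholder, and "GUID" stands in for the real GUID suffix):

```shell
# Dry-run sketch of the proposed ndmp_start_notify.cmd / ndmp_end_notify.cmd
# logic. FILER is a placeholder; LATEST would come from parsing `snap list`.
FILER="netapp1"
VOL="Datastore1"
CLONE="${VOL}_Backup"
LATEST="smvi_backup_Datastore-Daily_200905072330_GUID"

# ndmp_start_notify: clone the volume from the newest SMVI snapshot so the
# NetBackup policy can always back up /vol/${CLONE}
start_cmd="ssh $FILER vol clone create $CLONE -b $VOL $LATEST"

# ndmp_end_notify: take the clone offline, then destroy it
end_cmd="ssh $FILER vol offline $CLONE; ssh $FILER vol destroy $CLONE -f"

echo "$start_cmd"
echo "$end_cmd"
```

One caveat: while the clone exists it pins its backing snapshot, so a stale clone from an aborted run should be destroyed at the start of the next run before creating the new one.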
What do you think??? Do you have a better way???
2009-05-07 05:38 PM
I think the FlexClone option is a good way to go. You can use 'snap list <volname>' if you want to hardcode a solution and simply grab the most recent snapshot of the volume (presuming you aren't doing scheduled snapshots in combination with the SMVI snapshots). With that snapshot, make a FlexClone with a consistent name for NDMP backups, and when you're done, destroy the FlexClone. That way you don't rename the SMVI snapshots and you can use consistent naming conventions in your NetBackup path.
Alternately, you can use SV-SMVI (in the NTN Communities section) to get the most recent SMVI backup information (snapshot name and path) and perform the FlexClone operation in your own script. Use the -list and either -path or -oneline and then take the snapshot name and volume name to make your FlexClone. If you know the volume name and don't want to parse the -path or -oneline output, just take the volume name you already know and use -path. Either way a FlexClone lets you make a consistent path if it has to be the same each time.
If you simply want to take the path and back it up, -list -path is a great way to do it so you don't need to make a FlexClone each time, but if you do that, echo that to a file and simply point your NDMP script to get the path from that file. Then you don't need FlexClone. Or, tweak your ndmp_start_notify.cmd script directly to use the output for the full path.
2009-05-07 05:46 PM
The way I did it was a bit different. I was thinking of creating a FlexClone, but it becomes more problematic if something happens to the backup script or server.
The snapshot backing the clone will stay there, SMVI cannot clean up its own snapshots, and in the worst-case scenario all the VMs will get frozen due to lack of space on the volume (because of the growing snapshot).
My approach was to add the volume names to the backup selection. When the scheduled script starts, it gets the list of snapshots for each volume, selects the latest respective snapshot based on the date, modifies the backup selection, and initiates a manual backup. For example:
For the /vol/vmwprod1 volume, the script will list the snapshots:
%/used %/total date name
---------- ---------- ------------ --------
2% ( 2%) 1% ( 1%) May 07 00:20 smvi_backup_DailyBackup_rnas7_prod1_20090507002200_cc237bd4-1f87-4283-8838-62188c5de55c_rnas7_prod1
4% ( 2%) 1% ( 1%) May 06 00:20 smvi_backup_DailyBackup_rnas7_prod1_20090506002200_a66e4e08-48b5-4fd0-9e76-25de25233a3a_rnas7_prod1
7% ( 3%) 2% ( 1%) May 05 00:20 smvi_backup_DailyBackup_rnas7_prod1_20090505002200_75778281-1128-4e56-8c49-2ad496c109ae_rnas7_prod1
10% ( 3%) 3% ( 1%) May 04 00:20 smvi_backup_DailyBackup_rnas7_prod1_20090504002200_6825c5a3-f176-466a-ad56-edf06190f0e4_rnas7_prod1
11% ( 2%) 3% ( 0%) May 03 00:20 smvi_backup_DailyBackup_rnas7_prod1_20090503002200_002c45f0-8a8c-481f-a150-41e9618ad1e4_rnas7_prod1
13% ( 2%) 4% ( 0%) May 02 00:20 smvi_backup_DailyBackup_rnas7_prod1_20090502002200_a9f43022-1db4-47da-9265-88c5137cde3c_rnas7_prod1
59% (57%) 36% (33%) May 01 00:20 smvi_backup_DailyBackup_rnas7_prod1_20090501002200_f7e4e550-7206-4262-8dc1-733a54d89e95_rnas7_prod1
and based on the date (2009-05-07) will select smvi_backup_DailyBackup_rnas7_prod1_20090507002200_cc237bd4-1f87-4283-8838-62188c5de55c_rnas7_prod1.
The script then does the following:
bpplinclude POLICY -modify IncludePath NewTargetPath
(you can have many volumes in the backup selection and then initiate a backup)
bpbackup -i -p POLICY -s SCHEDULE
If the snapshot is not there or there is any other problem, the script will send an email to the backup admin.
It won't change anything on the filer; it just uses ssh to read the snap list.
So far it is working, and except for the network throughput, everything works.
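The selection-and-policy-update step described above might look something like this (SNAPLIST stands in for the snap list output read over ssh, the names are abbreviated, and the policy/schedule names and the old include path are placeholders; the bpplinclude/bpbackup forms follow the ones quoted earlier):

```shell
# Sketch: pick the newest smvi snapshot for a volume and build the NDMP
# path to swap into the NetBackup policy's backup selection.
VOL="vmwprod1"
SNAPLIST='May 07 00:20 smvi_backup_DailyBackup_rnas7_prod1_20090507002200_cc237bd4
May 06 00:20 smvi_backup_DailyBackup_rnas7_prod1_20090506002200_a66e4e08'

# Names sort chronologically as text because of the embedded timestamp
latest=$(printf '%s\n' "$SNAPLIST" | grep -o 'smvi_[^ ]*' | sort | tail -n 1)
newpath="/vol/$VOL/.snapshot/$latest"
echo "$newpath"

# The script would then update the policy and kick off a manual backup:
#   bpplinclude POLICY -modify OLD_INCLUDE_PATH "$newpath"
#   bpbackup -i -p POLICY -s SCHEDULE
```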
I hope it helps and please let me know if you need more details.