I am trying to understand how I can back up an SMHV backup/snapshot to tape.
We would like to configure a backup job on our backup server to always back up the volume snapshot named (for example) "SMHV_Backup_00". However, I tried running the following PowerShell command just to see what it would do:
New-Backup -Server ClusterServer1 -Dataset Set06 -PolicyId Set06 -BackupName BackupThisUpToTape
It created an SMHV snapshot, which I see in the SMHV console as: "Backup Name: BackupThisUpToTape_02-03-2010_16.39.50"
And the name of the volume snapshot is completely randomized. I cannot just schedule any volume snapshot to tape, since I specifically need the one SMHV used to properly back up the VMs using VSS.
How can I do this? Or is there a better way that I am missing to back up VMs using SMHV to tape?
Also, does this process hold true for other products, like SM for SQL/Exchange?
The snapshot naming convention in SMHV is datasetName_hostname_timestamp and datasetName_hostname_timestamp_backup
(each SMHV backup creates two snapshots, including the autorecovery one).
You can add a post script in the SMHV dataset policy to rename the snapshots mentioned above to a name of your choice,
and then copy that to tape.
For the post script, SMHV supports passing up to three arguments. The following are the predefined variables SMHV expects the user to pass; an administrator can pass one or all of these values in the post-arguments edit box of the Policy Management wizard:
1. $VMSnapshot
2. $SnapInfoName
3. $SnapInfoSnapshot
During the post-policy execution phase, SMHV replaces the $VMSnapshot variable with the snapshot name, $SnapInfoName with the timestamp of the backup, and $SnapInfoSnapshot with the SnapInfo snapshot name. You can access these variables from your script and take the necessary actions.
Can you provide an example? I am lost as to what I need to put in the post script box for "Path:" and "Arguments:". Do I need to put this in a batch file of sorts?
Attached is a screenshot of the backup wizard -> backup options page. For the post script section I added a test script, "TransferbackuptoTape.bat", and added the following arguments for the post script:
parameter0 $VMSnapshot parameter1 $SnapInfoName parameter2 $SnapInfoSnapshot
As described above, during the post-policy execution phase SMHV replaces $VMSnapshot with the snapshot name, $SnapInfoName with the backup timestamp, and $SnapInfoSnapshot with the SnapInfo snapshot name.
The following is the content of the "TransferbackuptoTape.bat" file, showing how to access the variables inside the batch file:
echo %1 %2 %3 %4 > postscriptvariables.txt
During post-policy execution these variables are written to the postscriptvariables.txt file.
The content of this file is shown below. In this example the snapshot name consists of
the SMHV dataset name "31vm08", the Hyper-V parent host name "SMHV-HOST-31", and, as the last section, the timestamp.
31vm08_SMHV-HOST-31_02-03-2010_23.23.07 02-03-2010_23.23.07 smhv_snapinfo_smhv-host-31_02-03-2010_23.23.07
Of these, the first value is the snapshot name, the second is the backup timestamp, and the last one is the SnapInfo snapshot name.
You can now use these variables to write a custom script of your choice to copy the snapshots to tape.
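As a concrete illustration, here is a minimal sketch of the name handling such a post script needs to do. It is written in POSIX shell for readability (the scripts in this thread are Windows batch/VBScript, so treat the syntax as illustrative), and the snapshot name is hardcoded where SMHV would pass it as the first argument:

```shell
# SMHV would pass this as the first post-script argument ($VMSnapshot);
# it is hardcoded here purely for illustration.
snap="31vm08_SMHV-HOST-31_02-03-2010_23.23.07"

# The dataset/volume name is everything before the first underscore.
vol="${snap%%_*}"

# The second snapshot SMHV creates carries a "_backup" suffix.
backup_snap="${snap}_backup"

echo "$vol"          # the volume to work against
echo "$backup_snap"  # the snapshot to mount/copy to tape
```

From these two derived names the script can locate both the volume and the mountable "_backup" snapshot without hardcoding either.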
Please let us know if you have any more questions.
Could be that I am just thick-headed . . .
Question #1: Your batch file included four parameters ("echo %1 %2 %3 %4") when you only mentioned three ($VMSnapshot, $SnapInfoName, $SnapInfoSnapshot)?
Question #2: Regarding the output generated by your batch file:
Now, you tell me that from this information I can parse the snapshot name. I am not looking to do SnapMirror to Tape or similar. I am looking to rename the snapshot to be called, for example, "SMHV_Snap01", so that the path will always exist on my backup server as the snapshot I want to back up. How do I rename the snapshot from "31vm08_SMHV-HOST-31_02-03-2010_23.23.07" to "Snap01"? Is there a PowerShell command like Snap-Rename?
For question 1, you can omit the last parameter, %4; it was just there to demonstrate parameters to the batch file.
For question 2, we don't have a PowerShell cmdlet to rename a snapshot, but you can use the SnapDrive CLI tool, sdcli, instead.
The only catch is that the tool requires the mount point for the rename operation:
sdcli snap rename -d m:\ -o 31vm08_SMHV-HOST-31_02-03-2010_23.23.07 -n Snap01
Would it be possible for you to hardcode the mount point?
Another option is to use the sdcli hyperv list command to get the list of VMs on the host; it will also list the mount points of the VHDs belonging to the VMs.
You can parse the output to get the mount point for the VMs of your choice.
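The thread does not show the actual output layout of sdcli hyperv list, so the format in this sketch is a made-up placeholder; only the parsing pattern is the point (again POSIX shell for brevity, and the "VM Name:"/"Mount Point:" labels are assumptions):

```shell
# Hypothetical "sdcli hyperv list" output -- the real layout may differ,
# so adjust the patterns after inspecting actual output on your host.
output='VM Name: vm-sql01
Mount Point: M:\
VM Name: vm-web02
Mount Point: N:\'

# Extract the mount point listed directly after a given VM name.
get_mount_point() {
  printf '%s\n' "$output" | awk -v vm="$1" '
    $0 == "VM Name: " vm { found = 1; next }
    found { sub(/^Mount Point: /, ""); print; exit }'
}

mp=$(get_mount_point vm-sql01)
echo "$mp"
```

The extracted mount point could then feed the -d argument of sdcli snap rename.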
We discovered one issue with the rename approach.
SMHV records the snapshot name in the metadata (SnapInfo) of the backup.
So if the snapshot is renamed on the storage system volume after the backup is taken,
the restore operation for that backup will fail.
This is because SMHV checks that the snapshot name in the backup metadata exists on the filer,
and if a snapshot with that name is not found, the restore operation fails.
One option is to rename it back to the original SMHV-specific name while copying back from tape to the volume.
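One way to keep that rename reversible is to record the original SMHV name alongside the friendly name at backup time. A sketch of that bookkeeping, assuming a simple two-column snapmap.txt convention of my own invention (POSIX shell; the sdcli call is shown only as a comment since it cannot run here):

```shell
# At backup time, record "friendly-name original-name" pairs
# (the snapmap.txt file and its format are an assumed convention).
printf '%s\n' "SMHV_Snap01 31vm08_SMHV-HOST-31_02-03-2010_23.23.07" > snapmap.txt

# At restore time, look up the original SMHV name for the friendly one.
friendly="SMHV_Snap01"
original=$(awk -v f="$friendly" '$1 == f { print $2 }' snapmap.txt)
echo "$original"

# The actual rename back would then be something like (not executed here):
# sdcli snap rename -d m:\ -o SMHV_Snap01 -n "$original"
```

With one mapping line appended per backup, the lookup works for a snapshot from a few days ago just as well as one from a year ago.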
Well now - that's interesting.
I have to ask: how do customers back up SQL & Exchange snapshots to tape? Does SM for SQL/Exchange work in the same way?
Do people just never back up to tape?
SMHV takes two snapshots on the storage system as part of a backup. The second snapshot's name is the first snapshot's name with the suffix "_backup". To access the VHD files in a backup by mounting the snapshot, you should always use the snapshot with the "_backup" suffix.
If I am backing these snapshots up to tape, do I just need to back up the "_backup" volume snapshot? Do I need to back up the SnapInfo volume as well?
Also - you spoke about having to change the name back to the original one when transferring it back to disk. How would I know what the original name was? Assume, for example, that I were restoring two different snapshots: one from a few days ago and another from a year ago.
You will need to back up the "_backup" volume snapshot plus the base snapshot (the one without the _backup suffix). Both of these snapshots are required to restore the VM itself.
SMHV also takes a snapshot of the SnapInfo LUN after the backup. This snapshot captures the backup metadata, and SMHV cannot restore a VM from a backup if the corresponding metadata is not available. So yes, it is good to back this up to tape as well.
This is going to sound a bit rough, but it seems almost impossible for someone to properly back up an SMHV backup to tape for offline storage. It sounds like I need to move these backups to SnapVault or the like.
Is there not a ready package of some sort with the entire script/batch from start to finish, with proper instructions? Is anything going to change in the next version of SMHV?
The bulk of your questions were around renaming the SMHV backup for tape and restoring it to disk later.
Could you copy the snapshots to tape as they are, meaning the _backup snapshot, the base snapshot, and the SnapInfo snapshot, avoiding the rename?
As shown earlier, your post script has ways to figure out the names of these snapshots using the variables.
I think this way you can get the snapshots to tape.
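Putting the naming rules from this thread together, the three snapshots to copy for one backup can be derived directly from the post-script values. A small sketch using the example names from earlier in the thread:

```shell
# Values SMHV would substitute for $VMSnapshot and $SnapInfoSnapshot
# (hardcoded here with the example names from this thread).
snap="31vm08_SMHV-HOST-31_02-03-2010_23.23.07"
snapinfo="smhv_snapinfo_smhv-host-31_02-03-2010_23.23.07"

# Base snapshot, its "_backup" sibling, and the SnapInfo snapshot --
# all three need to go to tape for a restorable backup.
to_tape="$snap ${snap}_backup $snapinfo"

for s in $to_tape; do
  echo "$s"
done
```

No rename is needed if the tape job simply targets these three names as-is.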
I was told by someone who specializes in backups that it is not particularly good to always back up an entire volume via NDMP, since you have to restore the entire LUN to restore an individual VHD file; you cannot just select the individual VHD. That can be problematic, since you may have a 1TB LUN with CSV VM VHDs and need to restore only an 8GB VM VHD.
So, we decided to use the tips you mentioned, in conjunction with SnapDrive, to do the following:
REM Output the snapshot name to a text file
echo %1 >> C:\SMHV\SnapshotNames.txt
REM Call a script to connect the snapshot to the backup server
' Declare Variables
Dim oFSO, oShell, ForReading, sSnapshotNames, oTextFile, sSnap, sVol
' Set values
Set oFSO = CreateObject("Scripting.FileSystemObject")
Set oShell = CreateObject("WScript.Shell")
ForReading = 1
sSnapshotNames = "C:\SMHV\SnapshotNames.txt"
' Open the snapshot name file to get a list of the snapshots
Set oTextFile = oFSO.OpenTextFile(sSnapshotNames, ForReading)
' Get the last snapshot name in the file
Do Until oTextFile.AtEndOfStream
sSnap = RTrim(oTextFile.ReadLine)
' Find the server name to map to the Mount Point
sVol = Left(sSnap,InStr(sSnap,"_")-1)
' Disconnect the previous snapshot of that LUN from the backup server
oShell.Run "sdcli disk disconnect -m BackupServer -d C:\Snapshots\" & sVol & " -f",0,true
' Connect the newest snapshot of that LUN to the backup server
oShell.Run "sdcli disk connect -m BackupServer -d C:\Snapshots\" & sVol & _
" -dtype dedicated -p [FILERNAME]:/vol/" & sVol & "/.snapshot/" & sSnap & _
"_backup/Q_" & sVol & "/" & sVol & ".LUN -I ", 0, True
' (the machine name and initiator arguments to -I were cut off in the original post)
Loop
' Garbage collection
Set oTextFile = Nothing
Set oFSO = Nothing
Set oShell = Nothing
That looks like a good solution if you're able to keep all of the VMs on a CSV running on a particular Hyper-V server in the cluster.
I believe you may have issues in the future, however, if VMs on a single CSV begin to shuffle between Hyper-V cluster nodes (due to performance management, outages, etc.), as SMHV takes separate snapshots for each Hyper-V server that is running VMs in the dataset, even if they're contained on the same CSV.
EG: (this is a greatly simplified view of my customer's environment)
* 4-node Hyper-V cluster.
* 1 x 4TB CSV LUN.
* 12 x VMs stored on the same CSV LUN.
* Each Hyper-V server in the cluster has 3 x VMs each from the CSV running on it.
* 1 x Dataset within SMHV to backup all VMs on this CSV once per day.
My customer only has a requirement to snapshot his VMs once per day.
Since SMHV takes two snapshots per Hyper-V server per dataset (the base and then the "xxxxx_backup" snap), I will end up with a total of 8 snapshots for each daily SMHV backup. The problem with this situation is that only 3 VMs are consistent in any one "xxxxxxxx_backup" snapshot; the other 9 VMs would merely be crash-consistent. Those 9 VMs would be consistent within the other three "xxxxxx_backup" snapshots.
That being the case, you would need to greatly complicate your post script to determine which VMs were quiesced as part of each snapshot, back up only those VMs from your backup server, then mount the snapshot for the next Hyper-V server in the dataset and repeat the process.
I would be greatly interested if anyone has a solution to this conundrum, as I'm trying to find a way to SnapVault this environment for long-term retention. Currently the easiest option for long-term management appears to be installing OSSV on all of the VMs and have them each individually replicate to the DR site and manage with Protection Manager.
I am no guru, so excuse me if any part of the following comment (or all of it) sounds stupid.
First of all, my inclination is that if you have a cluster with many hosts, you want to give each host ownership of a CSV, thereby forcing yourself to create separate CSVs for each host. I thought this was a best practice, but I don't remember where I got that from, if it is indeed fact. Even if it's not a best practice, it would immediately solve your problem. Are you locked into a single CSV? Aside from this issue, having smaller chunks of data lets you back up and restore more easily without impacting a lot of things. We have had a few CSV disks go down and had to restore them from scratch or from a snapshot, so having fewer eggs in one basket helped.
Assuming you are locked into a single CSV:
Can't you select the VMs you want to back up without selecting all the VMs on the volume, or does selecting a single VM automatically (in practice) force you to back them all up? I know that in the console you are not forced to automatically select all the VMs in a single CSV, the way you are in SMSQL, for example.
Even if you are forced, you could create three jobs (for example), which will create six snapshots, but you would know (in your head) which snapshots contain which good copy of each VM. Then you can manually select the good VM backup from each mounted CSV snapshot.
Thanks for your response.
The whole purpose of CSVs is to allow multiple Hyper-V servers to store & access VMs on the same LUN simultaneously (similar to a VMFS datastore in ESX), so following a guideline of only allowing a single Hyper-V server to store VMs on a specific CSV kinda defeats the purpose.
There is no mention in TR-3805 (SMHV best practices) of any best practices in regard to use of CSVs together with SMHV, but you may be thinking of the traditional shared storage LUNs which were previously used in Hyper-V clusters - the best practice for these types of LUNs is to only have one VM per LUN.
My customer currently has 180 VMs in their Hyper-V environment & growing, so going down this road would make the storage a nightmare to manage as well as negate all of the dedup benefits which we're currently achieving.
The example I gave was greatly simplified - the real environment contains:
- 8 x Hyper-V servers in a failover cluster.
- 24 x 2TB CSV LUNs located in separate qtrees within 12 x FlexVols.
(The 2 x 2TB LUNs per vol is a workaround for a UCS bug in their FCoE firmware - UCS wouldn't recognise any LUN over 2TB over FCoE with Hyper-V)
(The 4TB per FlexVol allows us to maximise the dedup on their FAS3140)
- 180+ x VMs which are grouped on the storage according to their data type for maximum dedup benefits.
The SMHV datasets are broken down to include all VMs residing on the 2 x LUNs in a particular FlexVol. (eg all VMs on CSV LUNs 1 & 2 which reside in FlexVol # 1 are configured in dataset # 1 etc)
If I were to create separate SMHV datasets based on which Hyper-V server the VMs contained in a CSV were running on...
Hmmmm... the more I think about it the more SMHV doesn't appear ready for enterprise environments (but then that could be said for Hyper-V in general!).
Again, excuse the spew of stupidity that may come out of my mouth, but I assume the following to be true.
A CSV can have only a single cluster owner at a time. That means that with 4 servers in a config, one server will ultimately own that disk. When you back up the CSV (take a snapshot), that disk will be "rotated" amongst the servers for each VM running on a particular server. As discussed in TR-3702, page 40:
Because all backups will coordinate and occur on the CSV owner node, NetApp recommends that administrators put some additional thought into how VMs and CSVs are provisioned among all the Hyper-V servers. Some environments may consider provisioning a greater number of smaller CSVs, or a smaller number of larger CSVs, in their Hyper-V cluster. While this is the administrator's choice, customers who consider the latter should only do so when the performance of the underlying disk from the NetApp storage system can support a larger number of VMs. Regardless of the choice in sizing and provisioning CSVs within the Hyper-V cluster, it is NetApp's recommendation to balance the VMs across the deployed CSVs as best as possible. By organizing the VMs deployed to the CSVs, you balance the backup operations that occur at similar times across all CSVs.
That tells me that having a few CSVs ain't so bad.
Regarding your point about the benefit of CSVs: I agree and disagree. CSVs do let you host many VMs running on different servers on the same storage, but in my opinion that isn't the main added value. Even before R2, you could have had a single disk on each server running VMs. The benefit now is that multiple servers can all access the same disks, so if you need to fail over, you can migrate a VM to another host in a jiffy without having to export/import or the like. Another benefit is that you no longer need a volume per server; officially you can use a single disk.
I also know, from our experience and from the TechNet forums, that CSV corruption happens not so rarely, so there is something to be said for separating VMs across a few CSV disks.
To be honest, we lost interest in the dynamic migration capabilities of SCVMM, since it uses a computation which is very premature and silly, at least in our opinion. When we take a server down, at least when there is no failure, we manually Live Migrate our VMs over to other systems. And even if a server goes down hard (which has happened to us), the worst case is that instead of a Live Migration we get a Quick Migration. Our servers have enough resources and aren't so overloaded that we would really benefit from dynamic live migration, but that is us.
But I do agree with you about SMHV, the need to create multiple datasets, and the annoyance of not having VMs sorted, amongst the other strange things the software does. We created the script described above to overcome what we consider a major shortcoming in the product, which was funny in our eyes since the feature basically already exists in all of the other SM products.
Did you ever think to check the SMHV logs to see if they record which snapshot is "consistent" or similar? Maybe with a text-parsing script you could tell that way.
Sorry I couldn't help. 😞