ONTAP Discussions

SnapProtect questions

peluso
43,062 Views

Hi all,

You may have seen the latest announcement for our new product, SnapProtect.  Please ask your questions - we have technical expertise ready to provide answers and point to useful information.  I will post a few questions here as well to break the ice and open up the conversation.  For more information about the product, start with our product pages. http://www.netapp.com/us/products/protection-software/snapprotect.html

Best,

Terri

Thanks so much!
Terri Peluso
Senior Community Program Manager
177 REPLIES

GLIDIC_ANTHONY
4,866 Views

Hello,

OK, I will check that.

I accept that the backup copy only copies the used space; my issue is how long it takes to copy that space. That's why I think the backup copy is reading the entire provisioned volume and not just the used space.

--

Sent from my Nokia N9

On 11/04/12 at 15:14, bwood wrote:

If this is NetApp SnapProtect then I think you should be calling NetApp for support. However, I just tried this in my lab and the backup only copied the used data (not the entire provisioned amount).

anunci1111
5,205 Views

SnapProtect is positioned to be our primary tool for backing up solutions running on VMware, Citrix, and enterprise-wide backup opportunities.  This includes vCloud Director, its database(s), and all underlying tenants.  Cataloguing, plus cascade and fan-out Vault & Mirror relationships, all in a single GUI.  This is it.  If your shop is new to NetApp, or looking to beef up your current backup ways and means, you can't pass up looking at SnapProtect.  Oh, and it will do all the old traditional physical stuff as well.  Oh, and V-Series is supported in front of 3rd-party arrays.  I'm not kidding, guys.  This is the all-in-one you've been asking for.

parrizas
4,865 Views

Hello,

Is it possible to get an evaluation license for NetApp SnapProtect?

Thanks,

Alfonso

hallc
4,865 Views

Hi Alfonso,

If you are an NTAP partner, the process for obtaining an NFR license depends on your geo.  NFR licenses are available to partners for evaluation.

If you are a potential SnapProtect customer, please contact your local NTAP representative to arrange a Proof of Concept evaluation.

Feel free to contact me directly for assistance.

Regards,

Chris

chris.hall@netapp.com

Backup/Recovery Marketing Manager, SnapProtect

cgeck0000
4,864 Views

Reading through the NetApp documentation I see that SnapProtect can protect virtual machines that span multiple datastores, but I do not see anything stating whether SnapProtect can protect virtual machines that span multiple volumes or LUNs.  Is this possible if the datastores are on multiple volumes, LUNs, or even different filers?

bwood
4,864 Views

Yes it can.

cgeck0000
4,864 Views

OK, that is what I was thinking.  I am having an issue indexing VMs that have multiple VMDKs on different datastores, each on a separate NFS-presented volume.

The VMware snapshot takes place, followed by the NetApp snapshot on the volumes, and the VMware snapshot is then removed properly. But when SnapProtect tries to mount the VM to index it and stream it to a remote location, the following error occurs:

3828 13d4 06/05 11:29:23 487 vsbkp::run() - SendArchiveFileInfo Failed for VM

When we place all the VMDKs into a single datastore the entire process works correctly. Is there a specific option I am missing somewhere?  Thank you for any direction you can provide.

bwood
4,958 Views

Which service pack are you on?  You will need SP5 for this.

cgeck0000
4,958 Views

Yes, we are on the latest SP, as this is a new implementation. That is why I am wondering if I missed something in all the documentation, a "note" or something like that.

bwood
4,958 Views

Windows VMs?   VMware?   Should work. Linux VMs are a bit different.

anders_hansen
4,918 Views

We have a scenario where our VMware VMs are on an NFS datastore.

SQL: Is it possible to do a single-database restore with SnapProtect if the DB volume is a VMDK on an NFS datastore?

SharePoint: Is it possible to do a single-item restore with SnapProtect if the volume is a VMDK on an NFS datastore?

Exchange: Is it possible to do a single-database restore with SnapProtect if the DB volume is a VMDK on an NFS datastore? And is it possible to do a single-mailbox restore in that same configuration?

Thank you

bwood
4,958 Views

You will not be able to use the actual database agents for these since they are on VMDKs.   You would only be able to protect the VMs using the Virtual Server agent.  The Virtual Server agent has the ability to quiesce the SQL and Exchange databases as part of the VM backups. 

Your restore options would be:

- restore entire VM

- restore VMDK

- restore flat database files / directories

nick_walford
4,917 Views

I have reasonable knowledge of both CommVault and NetApp technologies, so I thought I would really "click" with this product. However, the more I work with it the less I like it, and it just doesn't seem to scale up.

I think it comes down to a fundamental difference: CommVault's view is that it is protecting VMs, whereas NetApp Snapshots are volume-based entities.

Why does this matter?

Well, if I have a VM that spans multiple datastores, then SnapProtect will discover and create Snapshots for all the datastores. So far so good.

However, if I then use SnapProtect to drive the SnapMirror replication, it creates a unique SnapMirror destination volume for each subclient. It doesn't recognise that there may already be a SnapMirror destination volume of the same name, and it will create vol_1, vol_2, vol_3, etc. copies. This means that even with a relatively small number of VMs it's very easy to end up with multiple duplicate SnapMirror destination volumes, all containing copies of the same data, and the destination aggregate is quickly exhausted!

I find that the combination of the NetApp VSC plugin & SnapMirror just works so much better that I'm prepared to sacrifice the catalogue advantages CommVault offers. Is it just me?

bwood
4,829 Views

It is true that it will create a mirror for each subclient (as mentioned in your example).  This is why the layout is important.  If two groups of VMs need different backup schedules, retention, etc., it would be best to have them in separate datastores.  That way the storage policies would effectively only mirror the data associated with each subclient.

l_mitchell
4,827 Views

Hi all, hope you can help with my understanding here; not that I'm in the middle of deploying it or anything, having never seen the product before!

If I want hourly snapshots kept for 48 hrs, for example, I have to create a separate storage policy / backup set; OK, I get that. It would be nice if I could do something in the extended retention rules for hourlies and dailies and not just weeklies / monthlies / yearlies, but it's a CommVault product, out of your control.

Anyway, so then if I want two weeks' worth of dailies kept, I create another storage policy / backup set and associate the same volumes to the subclient for backup. Correct me if I'm wrong here! Still all on the same local volume.

OK, so now ideally I want hourly SnapMirrors then a daily SnapVault. If I use my daily storage policy / backup set for an hourly SnapMirror it doesn't work, as CommVault sees the snaps as unique and doesn't recognise that there are other snapshots on the volume. So I have to do a daily snapshot followed by a daily SnapMirror followed by a daily SnapVault.

And if I use the hourly snap storage policy it'll create a separate SnapMirror, so that's out of the question.

Some additional info: I have a CommServe backing up NetApp CIFS shares, and a proxy server set up for VMware, using datastore affinities for the backup selection.

OK, so the stage is set...!

The main question: am I doing this right? Does this sound like the right way to go about things?

Can I use snapmirror.conf from the NetApp side to update the SnapMirror relationship that's understood by the SnapProtect daily storage policy on an hourly basis, without breaking anything inside SnapProtect? That way I will get my hourly snaps created on the primary across to DR. I'm not bothered about SnapProtect knowing about those on the DR SnapMirror.

If I protect multiple volumes (not qtrees) with a SnapMirror aux copy to DR followed by a SnapVault of the mirror, where is the naming convention controlled? DFM? Is it the datasets I created? I've not tried playing around with this bit yet, but if someone has an answer rather than trial and error, that would be my preferred option. Any best practices on what you do to control SnapVault destination names? Basically I'll have multiple volumes going into a single SnapVault destination (fan-in? See, I am paying attention!).

In all the demos it looks to be the simulator, where you only have a single controller. What I have done, rather than saying secondary / tertiary, is call the destinations drfiler1_aggr0_50tb / drfiler2_aggr0_50tb, with filer1's SnapMirrors going to drfiler1 and filer2's SnapMirrors going to drfiler2, so that if there ever was a DR situation (probably never, but hey) there's some load balancing of the controllers' resources. I have then set the SnapVaults to go to the alternate drfiler, which is probably slightly excessive, but it does mean the SnapVault is on a different aggr to its SnapMirror for additional resiliency. I have plenty of space to play with.

Also, I see the individual qtree protected and the volume protected as two separate SnapVault relationships. Is this normal? I'm not seeing data being copied twice per volume / unnecessary workload on the filer, am I? I've not played with SnapVault extensively enough to know what I should be seeing, as normally you protect qtrees, but I'm not specifying which qtree to back up inside SnapProtect... or can I / should I be doing so?

What's the best way to schedule a snapshot / SnapMirror / SnapVault job? Ideally I'd like each step to run as the previous one finishes. If this is an option and I'm just missing where it is, please let me know. Or is it just a case of scheduling each piece of the job manually and hoping they don't overlap? Schedule policies baffle me a bit; should I be using them? It seems I need schedules to create a schedule policy? Also, can I only kick off a primary snap by scheduling each subclient? It seems that way; the documentation seems to do it this way but doesn't say that is the case.

If backing up VMware (with the primary snap), there's the Enable Granular Recovery option... I thought that by unchecking it I would avoid the excessive FlexCloning and mounting of VMs inside VMware, but oh no! Is what I'm seeing inside VMware some sort of basic verify? What is granular recovery actually doing; is it indexing the VMDK once it's mounted and been tested? What is the overhead for this? Can I create an hourly job that doesn't do this (don't say configure NAS snapshots!)? Also, what is the other option, Create Backup Copy Immediately, with Enable Granular Recovery for backup copy... is this to do with backup to tape or something?

SQL-in-a-VM best practices: I'm struggling to understand how the SQL bit works, not being much of an SQL guy. Would you present RDMs (either VMware pRDMs or maybe via in-guest iSCSI?) and treat it as a physical host, or do you leave it inside the VMDK and some other magic happens to back up the DB? Do I need another proxy for this? I'd be very interested to understand how people have gone about this. Also, the SQL server I'm talking about is currently a physical cluster with NetApp LUNs, and as far as I can see I'll need a proxy for that anyway, right? Something I am not looking forward to!

Exchange seems simple enough. In my scenario I have Exchange 2007 and two mail stores (no CCR or anything like that, thank god). I am going to put all the Exchange VMs into their own NFS volume and protect them all in a single policy. For things like the individual mail restore options to work, do I just install all the agents for Exchange server / mining etc. on each mail server, and that should be OK? Any gotchas / real-world best practices you can share?

I have been through all the NetApp University online courses, have the TR docs and the POC cookbook, and am trawling through the web / online documentation, but it's slow going so far as it's a pretty complex beast, so any comments and suggestions are most welcome. Just trying to find other people's example storage policies for reference seems to be impossible; it would be nice to see some scenarios along the lines of "a SnapManager product did it this way; this is how you'd do it in SnapProtect".

Thanks guys.

l_mitchell
4,828 Views

One other thing I've just noticed: if you set a schedule to perform the SnapMirror only, it creates the SnapVault copy as well. So do I just do a generic auxiliary schedule to back up everything, or should I specify the second aux copy (in my case the SnapMirror) and it'll carry on through regardless? For now I'll leave both schedules on.

Just one more thing: I'm also playing with the local NetApp snap sched, as in having a couple of extra snapshot schedules going, as well as snapmirror.conf on the backup NetApp filer updating every now and again, in case there is ever an issue with the CommServe. This shouldn't be an issue, should it? So far it seems to be working.
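
For reference, the controller-side schedule referred to here is the standard 7-Mode snap sched setting; a minimal illustration, with a hypothetical volume name:

    snap sched vm_nfs_vol 0 2 6@8,12,16,20

That keeps 0 weekly, 2 nightly, and 6 hourly snapshot copies (taken at 08:00, 12:00, 16:00, and 20:00) on the controller itself, independently of the snapshots SnapProtect creates.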

And it's not datasets I created, it's resource pools; but I can't see from a DFM point of view how it's choosing the naming convention, so it must be SnapProtect?

l_mitchell
4,828 Views

Sorry to pepper the post, but I have answered one of my own questions: I had Application Aware Backup for Granular Recovery ticked in the subclient backup set. Without it, all I see inside VMware is two snapshots per VM being created and then deleted. Not sure why two snapshots, though? So I can leave that option unticked for the hourly snapshots of my general VM backups. I'm guessing this may be different when I come round to SQL / Exchange, though.

bwood
5,942 Views

Win2008 VMs will take 2 snapshots (when the disk.EnableUUID flag is set in the VM's properties).
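
For reference, that flag is just a VM advanced configuration parameter; the corresponding line in the VM's .vmx file (also settable via the vSphere Client's Configuration Parameters dialog) looks like this:

    disk.EnableUUID = "TRUE"

As I understand it, the flag enables application-consistent (VSS) quiescing in Windows 2008 guests, which is what triggers the extra snapshot during the backup.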

bwood
5,942 Views

Lots of good questions here and I think you would find it beneficial to line up a conference call with a NetApp expert to get clarity.

e_honcoop
5,943 Views

Psst… and post the conclusions of that call in this thread ☺

Kind regards,

Eric Honcoop

Senior Engineer Back-end

bwood
5,942 Views

OK, let's try this...

Anyway, so then if I want two weeks' worth of dailies kept, I create another storage policy / backup set and associate the same volumes to the subclient for backup. Correct me if I'm wrong here! Still all on the same local volume. <<< Yes, same volume, because that's the primary data you are protecting... this is basically how you would do it today, and you would have a set of snapshots for each subclient (dailies and weeklies).

OK, so now ideally I want hourly SnapMirrors then a daily SnapVault. If I use my daily storage policy / backup set for an hourly SnapMirror it doesn't work, as CommVault sees the snaps as unique and doesn't recognise that there are other snapshots on the volume. So I have to do a daily snapshot followed by a daily SnapMirror followed by a daily SnapVault.  <<<  You could set up the daily snapshot + daily mirror + daily vault in SnapProtect and then modify the snapmirror.conf file to update the existing mirror relationship hourly.  You may want to schedule things so that the daily mirror and the hourly mirror do not collide.

Can I use snapmirror.conf from the NetApp side to update the SnapMirror relationship that's understood by the SnapProtect daily storage policy on an hourly basis, without breaking anything inside SnapProtect? That way I will get my hourly snaps created on the primary across to DR. I'm not bothered about SnapProtect knowing about those on the DR SnapMirror. <<< Yes, as mentioned above.
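
As a rough sketch of what that snapmirror.conf entry on the DR controller could look like (the controller and volume names here are hypothetical), an hourly update schedule would be along the lines of:

    filer1:vm_nfs_vol drfiler1:vm_nfs_vol_mirror - 5 * * *

The four schedule fields are minute, hour, day-of-month, and day-of-week, so this entry updates the existing relationship at five minutes past every hour, while the daily update driven by SnapProtect carries on under its own schedule.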

If I protect multiple volumes (not qtrees) with a SnapMirror aux copy to DR followed by a SnapVault of the mirror, where is the naming convention controlled? DFM? Is it the datasets I created? I've not tried playing around with this bit yet, but if someone has an answer rather than trial and error, that would be my preferred option. Any best practices on what you do to control SnapVault destination names?  <<< This is controlled under the covers by SnapProtect / DFM… I don't think you can tweak it.

Also, I see the individual qtree protected and the volume protected as two separate SnapVault relationships. Is this normal? I'm not seeing data being copied twice per volume / unnecessary workload on the filer, am I? I've not played with SnapVault extensively enough to know what I should be seeing, as normally you protect qtrees, but I'm not specifying which qtree to back up inside SnapProtect... or can I / should I be doing so?   <<<  When you select the volume as the object to protect, it will create a SnapVault relationship for each of the qtrees plus a SnapVault relationship for the non-qtree data.  The non-qtree data picks up any files/dirs that do not live in a qtree in the volume.   So if you had 4 qtrees in the volume you would end up with 5 relationships.  This is normal.  On the other hand, if you specifically select the 4 qtrees, then you would only have 4 relationships.
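
To picture what that looks like under the covers (purely a sketch; SnapProtect/DFM create and name these for you, and the controller, volume, and qtree names below are hypothetical), the equivalent 7-Mode SnapVault relationships for a volume with 4 qtrees would resemble:

    snapvault start -S filer1:/vol/vol1/qtree1 drfiler2:/vol/vault_vol/qtree1
    snapvault start -S filer1:/vol/vol1/qtree2 drfiler2:/vol/vault_vol/qtree2
    snapvault start -S filer1:/vol/vol1/qtree3 drfiler2:/vol/vault_vol/qtree3
    snapvault start -S filer1:/vol/vol1/qtree4 drfiler2:/vol/vault_vol/qtree4
    snapvault start -S filer1:/vol/vol1/- drfiler2:/vol/vault_vol/vol1_nonqtree

The fifth relationship, with "-" as the source qtree, is the one that catches the non-qtree data, which is why 4 qtrees end up as 5 relationships.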

What's the best way to schedule a snapshot / SnapMirror / SnapVault job? <<< Two schedule policies.  NAS example…  one for the primary snapshot (schedule policy type = Data Protection, agent type = NAS NDMP); associate it with the NAS subclients.    One for the replication (schedule policy type = Auxiliary Copy); associate it with the storage policy.  By associating to the entire storage policy it will perform the mirror --> vault cascade as a single operation (mirror first, then vault).  You mentioned the POC cookbook… this is covered in the appendix.

It seems I need schedules to create a schedule policy? <<< You create a schedule policy and add schedules to it.

Also, can I only kick off a primary snap by scheduling each subclient?   <<< You could schedule each subclient, or schedule at the backup set level, or create a schedule policy and associate various subclients from multiple clients (of the same type)… there are a few ways to do it.

If backing up VMware (with the primary snap), there's the Enable Granular Recovery option... I thought that by unchecking it I would avoid the excessive FlexCloning and mounting of VMs inside VMware, but oh no! Is what I'm seeing inside VMware some sort of basic verify? What is granular recovery actually doing; is it indexing the VMDK once it's mounted and been tested? What is the overhead for this?   <<< The "Enable Granular Recovery" option enables file-level indexing (Windows VMs) during the backup.  Unchecking it bypasses the file-level indexing during backup.  You can still do granular recovery, but it will need to build a temporary index at restore time.  So, index up front or do it at restore time; without this enabled you also lose the ability to do a wildcard search.   Now, what you are probably seeing (FlexClone, etc.) with it unchecked is due to it having to do a basic indexing pass to identify the VMDKs and config files associated with the VM.  There is no way to avoid this.

Also, what is the other option, Create Backup Copy Immediately, with Enable Granular Recovery for backup copy... is this to do with backup to tape or something?    <<< Tape… if you have tape configured in the storage policy, then "Create Backup Copy Immediately" starts the tape copy as soon as the initial snapshot backup completes.

SQL-in-a-VM best practices: I'm struggling to understand how the SQL bit works, not being much of an SQL guy. Would you present RDMs (either VMware pRDMs or maybe via in-guest iSCSI?) and treat it as a physical host, or do you leave it inside the VMDK and some other magic happens to back up the DB?   <<<  If you want to use the SQL agent to perform backup and recovery of the databases, then you would put the database on an RDM (iSCSI) and treat it as a physical host.    In a VMDK you can't use the SQL agent.  You would protect the VM (Virtual Server Agent) and use the "Application aware backup for granular recovery" option in the subclient.  However, your recovery in this case would basically be to recover the VM (just that the database was quiesced during the backup).
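
As a rough sketch of the RDM/iSCSI route on a 7-Mode controller (the igroup name, IQN, volume, LUN path, and size below are all hypothetical), the LUN presented to the SQL host would be carved out along these lines:

    igroup create -i -t windows sql_igroup iqn.1991-05.com.microsoft:sqlvm01
    lun create -s 200g -t windows_2008 /vol/sqldata/sqldb_lun
    lun map /vol/sqldata/sqldb_lun sql_igroup

The guest (or the ESX host, for a pRDM) then connects to that LUN over iSCSI, and the SQL agent can treat it the same way it would storage on a physical host.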

Do I need another proxy for this?   <<< You can.  That would offload the indexing phase of the backup.

Also, the SQL server I'm talking about is currently a physical cluster with NetApp LUNs, and as far as I can see I'll need a proxy for that anyway, right?  <<<  That's right.  According to the docs, "When performing SnapProtect backup for a Windows Cluster, a proxy server must be used for performing backup and restore operations."

Exchange seems simple enough. In my scenario I have Exchange 2007 and two mail stores (no CCR or anything like that, thank god). I am going to put all the Exchange VMs into their own NFS volume and protect them all in a single policy. For things like the individual mail restore options to work, do I just install all the agents for Exchange server / mining etc. on each mail server, and that should be OK?   <<<  Is Exchange on VMDK or iSCSI RDM?   Similar to SQL, if you want to use the Exchange Database agent then it would need to be iSCSI RDM; this protects / restores at the database level.  For object-level recovery you would need to look at either the Offline Mining Tool or Snap Mining.    If it's on VMDKs then you'll have to protect it using the Virtual Server Agent.  This is just like SQL above, but you can truncate the Exchange logs.  To understand Snap Mining, take a look at https://fieldportal.netapp.com/DirectLink.aspx?documentID=68032&contentID=73206.   The SnapProtect docs cover this also.
