
Protection Manager without Provisioning Manager?

moechnig

My customer is interested in the policy-based backup management capability of Protection Manager,  but they would prefer to avoid re-engineering their provisioning and management processes to accommodate Provisioning Manager.  Has anyone used Protection Manager in an environment that does not use the Provisioning Manager paradigm for provisioning and management?  What worked well and what was difficult?

Thanks for your thoughts!


sinhaa

Hello James,

Your question is very generic. It would be easier to answer if you could provide some information about the customer's environment, their use cases, and what they are looking for in Protection Manager. Without more specific details, it's tough to say what works well and what's difficult; it all depends on what they need.

warm regards,

Abhishek


moechnig

Thanks for your response,  Abhishek.

The customer's NetApp estate is split into two environments.

In the field site environment, there are about 100 remote offices. Each has a 20x0-class system (some single, some clustered) with a standard set of volumes and qtrees, used for general-purpose file services, CIFS only. All data is presented via one or more vFilers, and the set of qtrees is identical among all the sites. Field sites are not thin provisioned or deduplicated by policy, but thin provisioning is sometimes used to span capacity gaps while an upgrade is being implemented.

The qtrees on each field vFiler are backed up daily via Qtree SnapMirror to one volume per vFiler on a pair of 6030s with SATA disk in a central location. The 6030s are thin provisioned and deduplicated, and retain snapshots for 20 weeks. Tivoli Storage Manager backs up the 6030s to tape monthly and retains those backups for 10 years.

The environment was originally designed to support vFiler DR, but the most recent DR activities have involved transferring data to a datacenter filer and recovering the field vFiler there, due to questions about the performance and reliability of the 6030s, which are heavily loaded and non-redundant. Future DR scenarios are likely to follow the same process.
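For concreteness, each of those field backups boils down to a Qtree SnapMirror entry in snapmirror.conf on the central 6030s, along these lines (the filer, volume, and qtree names are placeholders, and the 22:00 schedule is just an example):

# source path                        destination path                            args  min hr dom dow
fieldfiler1:/vol/vol_data/qt_users   central6030a:/vol/bk_fieldfiler1/qt_users   -     0   22 *   *

Multiply that by a standard set of qtrees across roughly 100 sites and the audit problem described below becomes clear.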

There are two problems with the above configuration that we would like to solve:

1.  TSM is not able to consistently create monthly backups.  We have proposed to replace it by retaining monthly snapshots on the 6030s and their successors for 10 years, and replicating that data via Volume SnapMirror to a third tier of backup storage in a different site for additional redundancy.  

2.  The field site backup process is implemented inconsistently. The customer occasionally conducts a manual audit to ensure that each field vFiler is being backed up via QSM and that each volume on the 6030s is backed up by TSM, but this is laborious and does not provide timely notice when a configuration change results in something falling out of the rotation. Protection Manager's automation capabilities would be desirable here; see the sketch below.
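For what it's worth, the way we picture Protection Manager taking over item 2 is dataset-and-policy based, roughly the following dfpm CLI sequence. This is only a sketch: the dataset and member names are invented, and the exact subcommands and flags would need to be verified against the customer's DFM version.

dfpm policy list
dfpm dataset create field_backups
dfpm dataset add field_backups fieldfiler1:/vol/vol_data/qt_users

A protection policy (e.g. "Back up") would then be attached to the dataset, so that conformance checking, rather than a manual audit, flags members that fall out of protection.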

The second environment is datacenter NAS. The customer is essentially providing storage as a service to departmental and business-unit users. Most data is in vFilers; some legacy systems still present data directly. Each qtree may be exported via NFS, shared via CIFS, presented as an FTP site, or any combination of these. Qtree security styles are a mix of NTFS and UNIX depending on end-customer requirements. Some primary volumes are thin provisioned, and dedupe is used occasionally. Storage includes a mix of FC, SAS, and SATA disk, plus HDS USP-V behind V-Series.

The service level for local snapshots is 28 dailies and 4 intra-day snapshots, but some volumes do not have adequate snap reserve to support this. Backup strategies vary: some systems use TSM and NDMP directly to tape via FC; others use TSM and NDMP via IP; and still others are SnapMirrored to the 6030s mentioned above and backed up to TSM via NDMP over IP from there.

Some datacenter data needs only 20 weeklies of retention, but other data needs 10 years of monthlies, and these data types can sometimes be found on the same volume. The customer's intent is to apply 10-year retention to everything, because they do not have a good way of knowing which data requires that level of retention and which does not. In any case, qtrees are frequently added and removed, so backups are implemented at the volume level to minimize management overhead and opportunity for error.

Problems with the current data center approach include:

3.  The backup implementation is inconsistent, leading to excessive surprises, errors, and difficulty providing consistent access to backups for end-users.

4.  TSM backups are not recoverable by end-users due to security limitations in TSM, requiring instead a laborious process of delegating recovery to a helpdesk.

5.  TSM NDMP backups often fail, leaving no option for recovering NAS data beyond 28 local daily snapshots.

We have proposed to resolve issues 3-5 by implementing volume-level SnapVault across their datacenter NAS estate, leveraging the same secondary and tertiary system architecture as for the field site backups. This would allow backups to continue being managed at the volume level, while allowing greater snapshot retention depth on storage accessible via easy-to-use NAS interfaces.
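In 7-Mode CLI terms, the proposal per primary volume amounts to something like the following on the secondary (system, volume, and snapshot-schedule names are placeholders; the syntax should be double-checked):

snapvault start -S dcfiler1:/vol/src_vol central6030a:/vol/bk_dc/src_vol_qt
snapvault snap sched -x bk_dc sv_daily 28@23
snapvault snap sched -x bk_dc sv_weekly 20@sun@23

The question for Protection Manager is whether it can create and monitor whole-volume relationships of this shape.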

Thanks again for your thoughts.

Jim

adaikkap

Hi

By "Protection Manager without Provisioning Manager" do you mean the primary provisioning or the secondary provisioning?

Protection Manager works either way, with or without Provisioning Manager. As Pete said, if you allow Protection Manager to create its own secondary volumes (we call it secondary provisioning), then you don't have the hassle of resizing the destination volumes when primary volumes are resized, thereby reducing the number of backup failures.

When Protection Manager resizes a volume, it takes into account whether the volume is dedupe enabled, and if so, the filer model, the ONTAP version, and the supported maximum dedupe volume size.

With all that being said, if I read your post correctly, you want to do whole-volume SnapVault, i.e. an entire source volume into a qtree in the secondary volume?

like filer1:/src_vol---SV--->filer2:/dst_vol/src_vol_qt ?

If so, Protection Manager doesn't support whole-volume SnapVault; we neither create nor discover such relationships.

Protection Manager only creates or discovers qtree-to-qtree SnapVault, like this:

filer1:/src_vol/qt1---SV--->filer2:/dst_vol/qt1
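In CLI terms (using the full /vol/ paths the actual commands require), the difference is between these two forms; Protection Manager only handles the second:

snapvault start -S filer1:/vol/src_vol filer2:/vol/dst_vol/src_vol_qt    # whole volume into a qtree: not supported
snapvault start -S filer1:/vol/src_vol/qt1 filer2:/vol/dst_vol/qt1       # qtree to qtree: supported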

Regards

adai

smoot

"Easy" and "difficult" are hard to define.

Protection Manager was designed and implemented before Provisioning Manager existed, so ProtMgr was intended to work well even if you provisioned your own primary storage. We even thought we made it easy for you to provision your own secondary storage, and many customers find it useful to create and import all the storage and the replication relationships themselves.

There are many things that, IMHO, work much better if you allow us to provision the secondary storage. Protection Manager has many picky checks on things like volume sizes and options for secondary storage that tend to trip people up at first. It's not hard to get right, but creating your first dataset will take a few tries. We share quite a bit of code between Protection and Provisioning Manager when it comes to provisioning secondary storage, so I'm not sure whether you'd call that "using Provisioning Manager" or not.

-- Pete

moechnig

Thanks, Pete.  That was pretty much exactly the kind of answer I was looking for.

Most of the things I've read and heard about Prov Mgr and Prot Mgr lately give me the impression that they're joined at the hip. Since all our return would come from Protection Manager, I'm glad to know that it is intended to work without completely adopting Provisioning Manager for primary storage first.

For those of you who are looking for additional details:

The customer is currently running DFM 4.0. They are using DFM with some scripting to monitor SnapMirror jobs now, but this is unsatisfactory because coverage does not automatically extend itself as systems are added. DFM also seems to be a bit picky about which SnapMirror relationships it will import after a migration or system lifecycle event. DFM runs on a single Windows server.
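For context, the existing scripting is roughly of the following shape: poll each controller's SnapMirror status and flag stale relationships. The filer names and lag threshold below are placeholders; snapmirror status -l is standard 7-Mode CLI.

#!/bin/sh
# Flag SnapMirror relationships whose lag exceeds a threshold.
# The filer list below is a placeholder for the real inventory.
MAX_HOURS=25
for filer in fieldfiler1 fieldfiler2 central6030a; do
  ssh -n "$filer" snapmirror status -l |
  awk -v filer="$filer" -v max="$MAX_HOURS" '
    /^Source:/ { src = $2 }
    /^Lag:/    { split($2, t, ":")              # Lag is hh:mm:ss
                 if (t[1] + 0 >= max)
                   printf "%s %s lag %s\n", filer, src, $2 }'
done

The static filer list is exactly the weakness: nothing extends coverage automatically when systems are added, which is what we hope Protection Manager would fix.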

Jim
