Active IQ Unified Manager Discussions

DFM Protection Manager - Destination Volume Provisioning and Qtree Maximums

ASUNDSTROM

Qtree Maximums:

We have 75 qtrees in the volume that I'm trying to add to a dataset that has a standard Backup Policy assigned to it.  I don't have a resource pool assigned to the dataset, because I would like to specify/create the volumes on the destination filer that the qtrees from the SV relationships are created in.  So before adding the primary volume to the dataset, I created a "backup" volume on the destination filer and added it to the physical resources of the Backup node.
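For context, the destination volume was created ahead of time with standard 7-Mode commands, roughly like this (the volume and aggregate names here are just placeholders):

    # on the destination filer: create the volume that will hold the SV destination qtrees
    vol create backup_vol aggr1 1t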

We get the following error in NetApp Management Console 3.1 after trying to import a primary volume with 75 qtrees:

    "The volume relationship count of 50 is larger than the relationship limit of 50."

Notice that the qtree count it perceives (50, not 75) is wrong as well.

In reading I found the following link: https://communities.netapp.com/thread/20394  It describes how to set the dfm option pmMaxSvRelsPerSecondaryVol to something other than its default of 50.  We have Core Services 5.0.1.7864 installed on a Windows 2008 server, which doesn't allow grepping for the hidden options.  I had to set the option to 51, then back to 50, just for it to appear in the dfm option list output.  That probably wasn't the best idea, since I can't find any documentation for what the setting should be, other than the post above.
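For anyone following along, the commands looked roughly like this, run from the DFM server's command line (findstr is the Windows stand-in for grep, and it only matches options that are already visible):

    rem hidden options don't show up until they have been set at least once
    dfm option set pmMaxSvRelsPerSecondaryVol=51
    dfm option set pmMaxSvRelsPerSecondaryVol=50

    rem now the option appears in the list and can be filtered
    dfm option list | findstr /i pmMaxSvRels
    dfm option list pmMaxSvRelsPerSecondaryVol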

Does anyone have any documentation on the parameter pmMaxSvRelsPerSecondaryVol? Why is it set to 50?  Is it due to the DFM Server's limitation (physical resources on the server) or something else? Are there any drawbacks to changing it to a higher number to allow for volumes containing a larger number of qtrees?

Volume Provisioning Requirements:

The second part of this thread is a question about how DFM comes up with the destination volume's size requirement.  We intend to keep the same backup sets (retention and frequency) for the primary as the secondary.  The volumes should be able to be the same size, but the DFM seems to error unless the Backup volume provisioned is roughly 1.3x the size of the primary volume to be backed up.  Is this a setting that can be changed?  What is the calculation for the destination volume size based off of?

17 REPLIES

ASUNDSTROM

I found the Qtree Maximums portion of the answer on this thread:

     https://communities.netapp.com/thread/21392  (thanks to Adaikkappan Arumugam for the explanation)

"The down side is that, max concurrent SV stream per controller is limited and various with the following. ONTAP Version FAS Model NearStore License being enabled or not. The regular scheduled updates of this volume, will consume all SV threads until its finished and can increase the back window and delay  snapshot creation on the secondary as alll 1000 needs to be snapvaulted before a SV snapshot can be created on the destination. This is the only downside I can think of. This limit for 50 was done mainly for QSM as each qtree in a QSM needs a base snapshot and only remaining 205 would be available for long term retention as max snapshots per volume is only 255. Also do remember the options you are changing is a global option and applies to all dataset creating SV relationship."

adaikkap

Hi,

    Volume Provisioning Requirements:

    The second part of this thread is a question about how DFM arrives at the destination volume's size requirement.  We intend to keep the same backup sets (retention and frequency) on the secondary as on the primary.  The volumes should be able to be the same size, but DFM seems to error unless the backup volume provisioned is roughly 1.3x the size of the primary volume being backed up.  Is this a setting that can be changed?  What is the calculation for the destination volume size based on?

Yes, by default PM requires a secondary volume that acts as the destination for an SV or QSM relationship to be 1.32x the source volume size.

Rule of thumb:

If volume used < 60%, then 1.32x the source volume's total size.

If volume used > 60%, then 2.2x the source volume's used size.

This is done to support long-term retention and, in some LUN cases, to accommodate fractional reserve.
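To make that concrete, here is a hypothetical 1 TB (1024 GB) source volume worked through the rule of thumb above:

    40% used: secondary >= 1.32 x 1024 GB (total size) = ~1352 GB
    70% used: secondary >= 2.2 x 717 GB (used size)    = ~1577 GB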

IMHO, assigning a physical resource is not a good idea, for the following reasons:

1. A PM-provisioned volume can be dynamically resized (both grown and shrunk) when the source increases or decreases.

2. A lot of checks, such as volume language and inode count, need to pass for a manually created volume to be eligible as an SV destination.

I strongly recommend you use a resource pool on the secondary/SV destination node of your dataset and take advantage of PM's features.

Regards

adai

ASUNDSTROM

Thanks Adai!

Is there any way to change the default settings for secondary volume provisioning?  Like I said, we are sizing the volumes on the source with the backups and long-term retention already taken into account.  Therefore, when PM provisions the secondary, it defines space that won't be utilized, and in some cases this creates a volume larger than the 16 TB dedupe limit, which is a drawback for us.

I appreciate your recommendations, but for now we want more control over the environment.  Possibly once we have a full handle on PM and trust its abilities, we will move to a more dynamic, policy-driven architecture.

adaikkap

I don't know if you are aware of this, but the volumes provisioned by PM have a space guarantee of none and don't take up space in the aggregate right away. Either way, I'll leave it to you.
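For illustration, you can verify this on the controller; in 7-Mode the space guarantee shows up in the volume options (the volume name here is just a placeholder):

    # list the options of a PM-provisioned secondary volume; look for guarantee=none
    vol options backup_vol
    # the same setting can also be applied by hand to a manually created volume
    vol options backup_vol guarantee none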

Regards

adai.

ASUNDSTROM

Where does the deduplication limitation come in, then?  Is the 16 TB volume deduplication limit based on used space or on the volume's configured size?

Thanks Again.

adaikkap

The 16 TB figure is an ONTAP limit, and it varies depending on the given ONTAP version and FAS model.

Regards

adai

ASUNDSTROM

Adai,

Do you have an answer on whether or not the percentage is configurable?

adaikkap

I am not clear on which percentage you are asking about.

Regards

adai

ASUNDSTROM
    Yes, by default PM requires a secondary volume that acts as the destination for an SV or QSM relationship to be 1.32x the source volume size. Rule of thumb:
          If volume used < 60%, then 1.32x the source volume's total size.
          If volume used > 60%, then 2.2x the source volume's used size.
    This is done to support long-term retention and, in some LUN cases, to accommodate fractional reserve.

Can these percentages/multipliers be customized?

adaikkap

Hi

   Yes, they can be, but one needs to understand the nuances before changing them; they are global options and affect all secondary volume provisioning.

Regards

adai

ASUNDSTROM

I'm sorry Adai, I just saw your response. Could you tell me what the options are and how to change them?

For our environment we are running the same snapshot schedule on the primary as on the secondary, as well as keeping the same retention periods for the data, so the volume sizes should really be the same.

Thanks,

Abe

adaikkap

Hi Abe,

     These are hidden options, and I can't share them in a public community. Please raise a case with NetApp Support to get the option names and the values to tweak.

Regards

adai

ASUNDSTROM

Thank you Adai.  You have been most informative.  I'll submit the ticket as needed. Have a great weekend.

ASUNDSTROM

Adai/NetApp Community,

Recently, as I was configuring our backup policies using Host Services for our VMware environment, I added a 1 TB datastore to the backup policy I had configured.  This policy is configured to run a SnapVault to a secondary filer, then a SnapMirror to a tertiary filer.

Primary Volume = 1 TB -> Provisioning 1.32 TB on Secondary -> Provisioning 24 TB on the Tertiary Filer

Primary Retention = 3 hourlies and 1 daily retained for 3 days

Secondary Retention = 2 weeks of Hourlies, 4 weeks of Dailies

Tertiary Retention = Mirror Relationship, so same as Secondary

Why in the world would DFM want to provision a 24 TB Volume on the SnapMirror Destination?

kryan

Which version of UM are you using? 

In any version prior to 5.1, Dynamic Secondary Sizing was not enabled for VSM destinations, so those volumes would be provisioned to the size of the containing aggregate.

ASUNDSTROM

Thanks kryan.  We are using 5.0.2.  Currently 5.1 is FCS, so I'm not sure we would want to move to it just yet.  Is there any way around this, or do I just have to let the provisioning take place and then shrink the volume before the first SnapMirror kicks off?
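If shrinking after the fact is the workaround, I assume it would be something like this in 7-Mode (the volume name and target size are just placeholders):

    # resize the PM-provisioned mirror destination before the first transfer runs
    vol size tertiary_backup_vol 2t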

adaikkap

Hi,

     Shrinking after provisioning may not help, as some of the metadata doesn't shrink.

If there is a feature that you need and it is available in an FCS version, that shouldn't prevent you from adopting it, as long as you have done your internal evaluation/lab runs.

Regards

adai.
