Active IQ Unified Manager Discussions
Running OnCommand Unified Manager 5.1 on Windows 2008 R2. I have two ONTAP 8.1.1 7-Mode hosts configured; each host is used to create a different resource pool.
1) I have established an OSSV relationship using a "Remote Backup" policy to one resource pool. The provisioning policy enables on-demand Deduplication. Everything works great here.
2) I copied and modified a "Backup, then Mirror" protection policy. The primary data node is set to no schedule, since the primary is OSSV. The backup schedule and retention are exactly the same as in the "Remote Backup" policy. Finally, the "Backup to Mirror" schedule runs once a night.
3) When I apply the custom "Backup, then Mirror" policy to my existing dataset, it passes all the Conformance Engine checks. It auto-provisions the mirror volume from the other resource pool and attempts to establish the mirror relationship. However, it always fails with the message "destination volume too small; it must be equal to or larger than the source volume." Why does this step fail?
I did a vol status -c on both the SnapVault secondary volume and the mirror volume, and they're both block checksums. I wasn't able to do a vol status -b on the mirror volume because it's immediately restricted and then gets deleted when PM rolls back the mirror.
Any ideas on why the mirror portion of this dataset fails? I swear I have done this before with older versions of ONTAP and/or DFM.
Thanks.
Reid
Second test: I created a brand-new dataset with an OSSV client path as the primary physical member. The OSSV path only has about 32 KB of data in it. I then attempted to assign the custom "Backup, then Mirror" policy to my dataset. I'm seeing something very strange in the Conformance Engine check:
- The planned size of the auto-provisioned SnapVault secondary volume is 9.67 GB.
- The planned size of the auto-provisioned mirror volume is 250 MB.
My pmOSSVDirSecondaryVolSizeMb setting is set to 250, so I'd expect my OSSV secondary volume to be 250 MB, but it's 9.67 GB?!
My mirror volume should be sized based on the primary volume, which is 9.67 GB, but it's being sized at 250 MB?!
What is going on here? It seems as if Protection Manager is confused. I know I am.
Thanks.
Reid
Third test: I disabled dpDynamicSecondarySizing and then attempted the same test as my second test.
This time the relationship was successful, both the OSSV backup and the mirror. I still have no idea why the volumes were provisioned so large, or why this didn't work when dpDynamicSecondarySizing was enabled.
Any ideas?
Hi Reid,
AFAIK, PM cannot figure out the actual size of the data up front. That's why it always creates the destination volume as large as the primary partition, which is safe for the worst case: you put data in that path until the disk/partition is full. I'd assume the disk in your OSSV client is ~10 GB?
And it gets worse. If you want to back up two different paths from the same partition, PM adds up the potential capacity of both paths, which would make the secondary volume ~20 GB, although there physically cannot be more than 10 GB of data in that partition.
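The additive sizing described above can be sketched as follows. This is a hypothetical model of the observed behavior, not PM's actual code; path names and the per-path partition sizes (in GB) are illustrative:

```python
def ossv_secondary_size_gb(backup_paths):
    """Model of the behavior described above: PM sums the full size of
    the containing partition for every backed-up path, even when
    several paths live on the same partition (illustrative only)."""
    return sum(partition_gb for _path, partition_gb in backup_paths)

# Two paths on the same 10 GB partition are counted twice:
size = ossv_secondary_size_gb([("C:/users", 10), ("C:/apps", 10)])
print(size)  # 20
```

So under this model the secondary is sized for 20 GB of data that can never physically exist on the 10 GB partition.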
I, too, could never figure out how to limit the size PM provisions for the OSSV secondary volume, and I'm curious how to achieve that as well.
regards, Niels
Niels,
The pmOSSVDirSecondaryVolSizeMb setting is explicitly designed to do exactly what it sounds like. When provisioning secondary volumes for OSSV, PM has no idea how big the client's backup data set is, so it has to provision a volume that it thinks is big enough to house the client's data. The default value for this variable has always been 10 GB. However, when using ONTAP simulators that only have a 9 GB aggregate, this would always cause my OSSV jobs to fail the conformance check. Ever since I learned this, I have set pmOSSVDirSecondaryVolSizeMb to 250 so it works with my ONTAP simulators, and it has worked perfectly.
OC 5.1 for 7-Mode doesn't seem to honor this setting anymore. Even with the canned "Remote Backups only" policy - designed specifically for OSSV - it doesn't honor my pmOSSVDirSecondaryVolSizeMb setting. Not sure why...
Hi Niels & Reid,
I believe my earlier post explained what pmOSSVDirSecondaryVolSizeMb means, and the SnapVault secondary behavior you are seeing is expected. The second test, where the mirror ended up as a 250 MB volume, is wrong; it looks like something is not working as designed.
Regards
adai
Hi Reid,
- The planned size of the auto-provisioned SnapVault secondary volume is 9.67 GB. I still can't figure out why this is so large. It should be 250 MB.
I already explained in my earlier post why it's 9.67 GB and not 250 MB.
- The planned size of the auto-provisioned SnapMirror secondary volume is 9.67 GB. At least it's large enough to make the mirror work!
This is the usual VSM destination behavior without DSS.
Regards
adai
Hi Reid,
First, let me explain what the pmOSSVDirSecondaryVolSizeMb option means and how it is used.
In the 3.7 release of DFM, secondary volume sizes were fixed at either the size of the containing aggregate or the size specified by the global option pmAutomaticSecondaryVolMaxSizeMb.
Neither of these fixed sizes had any direct relationship to how much data might be stored in the secondary volume.
Since the total size of the secondary volume could not be used to determine how much space should be reserved on its aggregate for data (we were using aggregate-sized, none-guaranteed volumes and needed a size for overcommitment calculations), a proxy called Projected Space was created for this purpose.
For QSM and SV, the projected size is 1.32x the source volume's total size if used space is < 60%, and 2.2x the source volume's used size if used space is > 60%.
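The projected-size rule above can be sketched like this (a simplified model; how DFM handles the boundary at exactly 60% is an assumption, and sizes are in MB):

```python
def projected_size_mb(src_total_mb, src_used_mb):
    """Projected Space for a QSM/SV secondary, per the rule above:
    1.32x the source total size when used space is under 60%,
    otherwise 2.2x the source used size (the behavior at exactly
    60% is an assumption)."""
    if src_used_mb < 0.60 * src_total_mb:
        return 1.32 * src_total_mb
    return 2.2 * src_used_mb

low_usage = projected_size_mb(2000, 100)   # 1.32 x total size
high_usage = projected_size_mb(1000, 800)  # 2.2 x used size
```

Note that for a mostly empty source, the projection still scales with the volume's total size, not its used size.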
My pmOSSVDirSecondaryVolSizeMb setting is set to 250, so I'd expect my OSSV secondary volume to be 250 MB, but it's 9.67 GB?!
For OSSV, the projected size is instead taken from the static option pmOSSVDirSecondaryVolSizeMb, which by default has a value of 10 GB. So before provisioning an OSSV destination volume, PM looks for at least 10 GB of free space on the aggregate, without exceeding any of the aggregate-fullness or overcommitment thresholds, and then provisions an aggregate-sized, none-guaranteed volume.
So I hope you now understand why PM still created a 9.67 GB volume and not 250 MB.
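Putting the pieces together, the provisioning step described above might look like this. This is a sketch under assumptions: the function name is illustrative, the aggregate-fullness and overcommitment checks are omitted, and sizes are in MB:

```python
def provision_ossv_secondary_mb(aggr_size_mb, aggr_free_mb,
                                pm_ossv_dir_secondary_vol_size_mb=10240):
    """Sketch of the behavior described above: PM checks that the
    aggregate has at least pmOSSVDirSecondaryVolSizeMb of free space,
    then provisions an aggregate-sized, none-guaranteed volume. The
    resulting volume size therefore equals the aggregate size,
    regardless of the option's value (simplified; the real check also
    honors aggregate-fullness and overcommitment thresholds)."""
    if aggr_free_mb < pm_ossv_dir_secondary_vol_size_mb:
        raise RuntimeError("conformance: not enough free space on aggregate")
    return aggr_size_mb  # aggregate-sized, none-guaranteed volume

# Reid's simulator: a ~9.67 GB (~9902 MB) aggregate. With the option
# lowered to 250 MB the free-space check passes, but the volume still
# comes out aggregate-sized:
vol_mb = provision_ossv_secondary_mb(9902, 9000, 250)
```

With the default of 10240 MB, the same aggregate fails the free-space check, which matches the conformance failures Reid saw on his simulators.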
My mirror volume should be sized based on the primary volume, which is 9.67 GB, but it's being sized at 250 MB?!
This one has me confused as well. The mirror volume should have been the size of the source volume, i.e. 9.67 GB, not 250 MB. Something smells fishy.
Regards
adai
Hi Earls,
Second test: I created a brand-new dataset with an OSSV client path as the primary physical member. The OSSV path only has about 32 KB of data in it. I then attempted to assign the custom "Backup, then Mirror" policy to my dataset. I'm seeing something very strange in the Conformance Engine check:
- The planned size of the auto-provisioned SnapVault secondary volume is 9.67 GB.
- The planned size of the auto-provisioned mirror volume is 250 MB.
My pmOSSVDirSecondaryVolSizeMb setting is set to 250, so I'd expect my OSSV secondary volume to be 250 MB, but it's 9.67 GB?!
My mirror volume should be sized based on the primary volume, which is 9.67 GB, but it's being sized at 250 MB?!
I think I figured out why the mirror destination got created at 250 MB in this case and not 9.67 GB. Dynamic Secondary Sizing for VSM appears to use the projected size of the source volume to create the secondary, not the actual volume size returned by ONTAP.
In this case, the 250 MB comes from the value you set for pmOSSVDirSecondaryVolSizeMb.
Also, in your first case, the mirror failed because the default value of pmOSSVDirSecondaryVolSizeMb is 10 GB, whereas the size of your aggregate was only 9.67 GB, so it failed.
Regards
adai
Hi Reid,
Let me reply to each of your posts so that things are clear.
3) When I apply the custom "Backup, then Mirror" policy to my existing dataset, it passes all the Conformance Engine checks. It auto-provisions the mirror volume from the other resource pool and attempts to establish the mirror relationship. However, it always fails with the message "destination volume too small; it must be equal to or larger than the source volume." Why does this step fail?
This error, as you know, is an ONTAP message indicating that the VSM destination is smaller than the source. To really find the problem, can you tell me the size of the VSM source / OSSV destination volume?
For OSSV destination volume provisioning, PM has done the same thing from its first release (3.5) through 5.1: it provisions an aggregate-sized, none-guaranteed volume.
Was DSS enabled for the mirror in this case?
Regards
adai
Hi Adai,
I have the same issue. We introduced our third tier today. I adjusted one of our datasets to have 1st tier -- backup --> 2nd tier -- mirror --> 3rd tier (the mirror to the 3rd tier is the new part) and attached a resource pool as the destination. Result:
11:13 CEST [RMGNAB3:replication.dst.err:error]: SnapMirror: destination transfer from RMGNAB1.rmg.be:RMGDMZLAMP01_Backup_backup to RMGDMZLAMP01_Backup_mirror : destination volume too small; it must be equal to or larger than the source volume.
I've checked the sizes: source 8.9 GB, destination 8.83 GB. That is indeed smaller, but why? It's OnCommand that provisioned the volume!
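For reference, the rule ONTAP enforces when the mirror transfer starts amounts to the following. This is a simplified sketch, not ONTAP's actual code; the real check compares the volumes' raw sizes:

```python
def vsm_size_check(source_gb, dest_gb):
    """Simplified version of ONTAP's volume SnapMirror size rule:
    the destination must be equal to or larger than the source."""
    if dest_gb < source_gb:
        raise ValueError("destination volume too small; it must be "
                         "equal to or larger than the source volume")

# Geert's volumes: 8.9 GB source vs. an 8.83 GB destination, so the
# transfer is rejected with the error quoted above.
```

Since OnCommand itself provisioned the 8.83 GB destination, the sizing bug is clearly on the provisioning side, not in this ONTAP check.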
Regards,
Geert
Hi Geert,
Engineering is actively working on this and has found some root causes as well. To reconfirm, and to understand whether you are impacted by the same problem, can you help us by answering the questions below?
Regards
adai