Hi Jordan, To add to what Pavan said: in the case of 7-Mode systems, the collection interval, the retention, and the counters to collect are customizable. None of these can be changed for Cluster-Mode. Section 14, "Data Collection," of TR-4090 gives details of how collection can be configured: http://media.netapp.com/documents/tr-4090.pdf Regards adai
Hi Bill, Is there DNS resolution between your OCUM server and the browser from which you are launching the reports? If not, please add a host entry for your OCUM server and you should be fine. Let me know if this solves the issue. Regards adai
Hi Niels and Reid, I am sure my earlier post answered what pmOSSVDirSecondaryVolSizeMb means, and the behavior you are seeing is expected. The second test, where the mirror ended up creating a 250 MB volume, is wrong; it looks like something is not working as designed. Regards adai
Hi Reid,

"1. The planned size of the auto-provisioned SnapVault secondary volume is 9.67 GB. I still can't figure out why this is so large. It should be 250 MB."

I already explained in my earlier post why it is 9.67 GB and not 250 MB.

"2. The planned size of the auto-provisioned SnapMirror secondary volume is 9.67 GB. At least it's large enough to make the mirror work!"

This is the usual VSM destination behavior without DSS.

Regards adai
Hi Reid, First let me explain what the option pmOSSVDirSecondaryVolSizeMb means and how it is used. In the 3.7 release of DFM, secondary volume sizes were fixed at either the size of the containing aggregate or the size specified by the global option pmAutomaticSecondaryVolMaxSizeMb. Neither of these fixed sizes had any direct relationship to how much data might actually be stored in the secondary volume. Since the total size of the secondary volume could not be used to determine how much space should be reserved on its aggregate for data (we were provisioning aggregate-sized, none-guaranteed volumes, and a size was needed for overcommitment calculations), a proxy called Projected Space was created for this purpose. For QSM and SV, the projected size is 1.32x the source volume's total size if used space is < 60%, and 2.2x the source volume's used size if used space is > 60%.

"My pmOSSVDirSecondaryVolSizeMb setting is set to 250m. So I'd expect my OSSV secondary volume to be 250 MB, but it's 9.67 GB?!"

For OSSV, the projected size is instead calculated using the static option pmOSSVDirSecondaryVolSizeMb, which by default has a value of 10 GB. So before provisioning an OSSV destination volume, PM looks for at least 10 GB of free space on the aggregate (without exceeding any of the aggregate-fullness or overcommitment thresholds) and then provisions an aggregate-sized, none-guaranteed volume. I hope you now understand why PM still created a 9.67 GB volume and not 250 MB.

"My mirror volume should be based off the size of the primary volume, which is 9.67 GB. But it's being sized at 250 MB?!"

This one confuses even me. The mirror volume should have been the size of the source volume, i.e. 9.67 GB, not 250 MB. Something smells fishy. Regards adai
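The Projected Space rules described above can be sketched as a small calculation. This is a hypothetical helper for illustration only, not part of any NetApp tool; the 1.32x/2.2x multipliers and the 10 GB OSSV default come from the explanation above, and the function name and parameters are my own invention:

```python
def projected_space_mb(total_mb, used_mb, ossv=False, ossv_opt_mb=10240):
    """Projected Space proxy used when reserving room on the destination
    aggregate: QSM/SV use source-volume sizes, OSSV uses a static option."""
    if ossv:
        # OSSV has no meaningful source volume size, so the static option
        # pmOSSVDirSecondaryVolSizeMb (default 10 GB) is used instead.
        return float(ossv_opt_mb)
    if used_mb / total_mb < 0.60:
        # Lightly used source: 1.32x the source volume's total size.
        return 1.32 * total_mb
    # Heavily used source: 2.2x the source volume's used size.
    return 2.2 * used_mb

# A 10 GB source that is 30% full projects roughly 13.2 GB on the secondary:
print(projected_space_mb(10240, 3072))
```

Note that this is only the space-reservation check; the volume actually provisioned is still aggregate-sized and none-guaranteed, which is why the destination can end up at 9.67 GB regardless of the option value.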
Hi Reid, Let me answer each of your posts so that things are clear.

"3) When I apply the custom 'Backup, then Mirror' policy to my existing dataset, it passes all the Conformance Engine checks. It auto-provisions the mirror volume from the other resource pool and attempts to establish the mirror relationship. However, it always fails with the message, 'destination volume too small; it must be equal or larger than the source volume.' Why does this step fail?"

This error, as you know, is an ONTAP message indicating that the VSM destination is smaller than the source. To find the real problem, can you tell me the size of the VSM source/OSSV destination volume? For OSSV destination volume provisioning, PM has done the same thing from its first release (3.5) until now (5.1): it provisions an aggregate-sized, none-guaranteed volume. Was DSS enabled for the mirror in this case?

Regards adai
Hi Richard, To answer your first question: there is no way to change the automatic snapshot delete settings for SAN. The differences that immediately come to my mind are the following. In the case of SAN volumes, the scheduled snapshots on the volume are disabled, the volume options are specific to SAN, and there is no snap reserve. Regards adai
Hi Michael, Unfortunately, the inconvenience you face is by design and is expected behaviour. SME and SMSQL are Windows-based SnapManagers; when they integrate with Protection Manager, their retention and schedules, on the primary as well as from primary to secondary, are controlled by SnapManager and not by Protection Manager. Whereas in the case of Unix-based SnapManagers like SMO and SMSAP, only the primary snapshot schedule and retention is controlled by the SnapManager; retention and update schedules can be delegated to PM or controlled by SnapManager as well. Regards adai
Hi Jimmy, Unfortunately, there are no dataset-related views. The current set of views doesn't contain any dataset-related information or columns. The only remaining options are to use the CLI or the NMSDK. The dfpm CLI supports Perl output as well. Regards adai
There is no canned report, but you should be able to script one using the output of these two CLIs:

dfpm dataset list -m <ds name or id>
dfpm relationship list

Regards adai
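As an illustration, the fixed-width tables these commands print can be parsed and stitched together with a short script. This is a hypothetical sketch, not a supported tool: it assumes the output layout shown in the transcripts in this thread (a header line, a dashed ruler line, then data rows) and uses the ruler's dash runs to find column boundaries, so multi-word values like "Primary data" parse correctly:

```python
def parse_cli_table(text):
    """Parse fixed-width CLI output (header, dashed ruler, data rows) into
    a list of dicts, using the ruler's dash runs as column boundaries."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    header, ruler, *rows = lines
    spans, start = [], None
    for i, ch in enumerate(ruler + " "):   # trailing space flushes the last run
        if ch == "-" and start is None:
            start = i
        elif ch != "-" and start is not None:
            spans.append((start, i))
            start = None
    names = [header[a:b].strip() for a, b in spans]
    out = []
    for row in rows:
        # The last column may overflow its ruler width, so slice to the end.
        vals = [(row[a:] if j == len(spans) - 1 else row[a:b]).strip()
                for j, (a, b) in enumerate(spans)]
        out.append(dict(zip(names, vals)))
    return out

sample = """Id   Node Name    Dataset Name
---- ------------ ------------
1347 Primary data UserHome
"""
print(parse_cli_table(sample))
```

Feed it the captured text of each command, then join the rows on the dataset name or id to build your report.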
Hi Richard, The OCUM 5.1 release has options to disable file space reservation (set the guarantee to none) for NAS provisioning policies, at both the dataset and the global level. Here is the link from the Release Notes of version 5.1: https://library.netapp.com/ecmdocs/ECMP1120082/html/GUID-3E8AB1F0-FF4F-4991-AE37-BEBB704BF614.html

Copying the relevant text here for convenience:

dfpm dataset set
- With the isNoneGuaranteedProvision option, enables thin provisioning in a specified dataset of NAS volumes, with no space guarantee.
- With the isSkipAutosizeConformance option, skips the dataset conformance check on the Autosize option in a volume member of a specified dataset, and enables thin provisioning in a specified dataset of NAS volumes when the isNoneGuaranteedProvision option is also enabled.

dfm option set
- With the isNoneGuaranteedProvision option, enables thin provisioning in all datasets of NAS volumes with no space guarantee.
- With the isSkipAutosizeConformance option, skips the dataset conformance check on the Autosize option in a volume and enables thin provisioning in all datasets of NAS volumes when the isNoneGuaranteedProvision option is also enabled.

Regards adai
Hi Christophe, This procedure is for migrating from 7-Mode to 7-Mode. There will definitely be a procedure to migrate volumes from 7-Mode to Cluster-Mode, but I don't have it. Regards adai
Hi Matthew, If there is an equivalent counter in Counter Manager, then you can build a graphical view using a custom view in Performance Advisor. Regards adai
Hi Christophe, Here is the procedure that we use.

High-Level Steps/Flow:
1. dfm options set dpReaperCleanupMode=Never. This makes sure that during the migration the conformance engine/reaper doesn't reap any relationship.
2. Validate that no jobs are active and all relationships are idle, then suspend the dataset.
3. Relinquish all relationships using DFPM.
4. Remove all physical resources from the dataset (if the dataset contains multiple relationships, only remove the ones that need migration).
5. Relinquish the primary qtrees from the dataset using DFBM.
6. Migrate the data using technique 1 from NetApp KB 1011499 if you want to migrate the entire volume; use technique 2 from the same KB if you only want to migrate a qtree. This doc uses technique 1, migrating the entire primary volume, as the example, but whether you use technique 1 or 2, the steps in PM remain the same.
7. Re-discover the storage controllers in DFM.
8. Add the physical resources back to DFM using DFBM.
9. Import the new relationships into the dataset.
10. Resume the dataset and test by running an on-demand backup job.
11. Revert the option back to Orphans: dfm options set dpReaperCleanupMode=Orphans

Detailed Steps

Current environment:
Source: mpo-vsim14 (SnapVault source)
Destination: mpo-vsim15 (SnapVault destination)
New source: mpo-vsim16 (new SnapVault source)

This is how the relationships look in Protection Manager before we start.
C:\>dfpm dataset list -m UserHome
Id   Node Name    Dataset Id Dataset Name Member Type Name
---- ------------ ---------- ------------ ----------- --------------------
1347 Primary data 1344       UserHome     volume      mpo-vsim14:/UserHome
1367 Backup       1344       UserHome     volume      mpo-vsim15:/UserHome

C:\>dfpm dataset list -R UserHome
Id   Name     Protection Policy Relationship Id State       Status Hours Source                    Destination
---- -------- ----------------- --------------- ----------- ------ ----- ------------------------- -------------------------------------------------
1344 UserHome Back up           1371            snapvaulted idle   0.0   mpo-vsim14:/UserHome/-    mpo-vsim15:/UserHome/UserHome_mpo-vsim14_UserHome
1344 UserHome Back up           1373            snapvaulted idle   0.0   mpo-vsim14:/UserHome/adai mpo-vsim15:/UserHome/adai
1344 UserHome Back up           1375            snapvaulted idle   0.0   mpo-vsim14:/UserHome/vsv  mpo-vsim15:/UserHome/vsv
1344 UserHome Back up           1377            snapvaulted idle   0.0   mpo-vsim14:/UserHome/amir mpo-vsim15:/UserHome/amir

Step 1: Make sure that during the migration the conformance engine/reaper doesn't reap any relationship.
C:\>dfm options set dpReaperCleanupMode=Never
Changed cleanup mode for relationships managed by protection capability of OnCommand to Never.
C:\>

Step 2: Suspend the dataset.
C:\>dfpm dataset suspend UserHome
Suspended dataset UserHome.
C:\>

Step 3: Relinquish the relationships using DFPM, from the DFM server.
C:\>dfpm dataset list -m UserHome
Id   Node Name    Dataset Id Dataset Name Member Type Name
---- ------------ ---------- ------------ ----------- --------------------
1347 Primary data 1344       UserHome     volume      mpo-vsim14:/UserHome
1367 Backup       1344       UserHome     volume      mpo-vsim15:/UserHome

C:\>dfpm dataset list -R UserHome
Id   Name     Protection Policy Relationship Id State       Status Hours Source                    Destination
---- -------- ----------------- --------------- ----------- ------ ----- ------------------------- -------------------------------------------------
1344 UserHome Back up           1371            snapvaulted idle   0.0   mpo-vsim14:/UserHome/-    mpo-vsim15:/UserHome/UserHome_mpo-vsim14_UserHome
1344 UserHome Back up           1373            snapvaulted idle   0.0   mpo-vsim14:/UserHome/adai mpo-vsim15:/UserHome/adai
1344 UserHome Back up           1375            snapvaulted idle   0.0   mpo-vsim14:/UserHome/vsv  mpo-vsim15:/UserHome/vsv
1344 UserHome Back up           1377            snapvaulted idle   0.0   mpo-vsim14:/UserHome/amir mpo-vsim15:/UserHome/amir

C:\>dfpm dataset relinquish mpo-vsim15:/UserHome/UserHome_mpo-vsim14_UserHome
Relinquished relationship (1371) with destination UserHome_mpo-vsim14_UserHome (1370).
C:\>dfpm dataset relinquish mpo-vsim15:/UserHome/adai
Relinquished relationship (1373) with destination adai (1372).
C:\>dfpm dataset relinquish mpo-vsim15:/UserHome/vsv
Relinquished relationship (1375) with destination vsv (1374).
C:\>dfpm dataset relinquish mpo-vsim15:/UserHome/amir
Relinquished relationship (1377) with destination amir (1376).
C:\>

Step 4: Remove all resources from the dataset using the NMC or the CLI.
From the NetApp Management Console: edit the dataset and remove the physical resources, first from the Backup node, then from the Primary node.
Else do it using the CLI as follows:
C:\>dfpm dataset remove -N "Primary data" UserHome mpo-vsim14:/UserHome
Dataset dry run results
----------------------------------
Do: Checking that dataset configuration conforms to its policy.
Effect: Conformance checking failed.
Reason: Dataset has been manually suspended.
Suggestion: Click Resume on the Datasets window to reestablish protection job schedules. Dataset conformance status will be updated after you resume protection of this dataset.
Removed volume mpo-vsim14:/UserHome (1347) from dataset UserHome (1344).

C:\>dfpm dataset remove -N "Backup" UserHome mpo-vsim15:/UserHome
Dataset dry run results
----------------------------------
Do: Checking that dataset configuration conforms to its policy.
Effect: Conformance checking failed.
Reason: Dataset has been manually suspended.
Suggestion: Click Resume on the Datasets window to reestablish protection job schedules. Dataset conformance status will be updated after you resume protection of this dataset.
Removed volume mpo-vsim15:/UserHome (1367) from dataset UserHome (1344).
C:\>

Step 5: Relinquish the primary qtrees from the dataset using DFBM, from the DFM server CLI.
C:\>dfbm primary dir list 1371
ID: 1371
Last Backup Status: Normal
Primary Directory: mpo-vsim14:/UserHome/-
Secondary Volume: mpo-vsim15:/UserHome
Secondary Volume ID: 1367
State: SnapVaulted
Lag: 29 mins
Status: Idle
Bandwidth Limit: None
Custom Script:
Run Custom Script As User:

C:\>dfbm primary dir relinquish 1371
Relinquished control over mpo-vsim14:/UserHome/-.
C:\>

Repeat this for all relationships.
Step 6: Migrate the data using technique 1 from NetApp KB 1011499.

Existing SnapVault relationships:
Source                        Destination                                           State       Lag      Status
mpo-vsim14:/vol/UserHome/-    mpo-vsim15:/vol/UserHome/UserHome_mpo-vsim14_UserHome Snapvaulted 00:35:42 Idle
mpo-vsim14:/vol/UserHome/adai mpo-vsim15:/vol/UserHome/adai                         Snapvaulted 00:35:42 Idle
mpo-vsim14:/vol/UserHome/amir mpo-vsim15:/vol/UserHome/amir                         Snapvaulted 00:35:42 Idle
mpo-vsim14:/vol/UserHome/vsv  mpo-vsim15:/vol/UserHome/vsv                          Snapvaulted 00:35:42 Idle

Desired SnapVault relationships:
Source                        Destination                                           State       Lag      Status
mpo-vsim16:/vol/UserHome/-    mpo-vsim15:/vol/UserHome/UserHome_mpo-vsim14_UserHome Snapvaulted 00:14:41 Idle
mpo-vsim16:/vol/UserHome/adai mpo-vsim15:/vol/UserHome/adai                         Snapvaulted 00:13:33 Idle
mpo-vsim16:/vol/UserHome/amir mpo-vsim15:/vol/UserHome/amir                         Snapvaulted 00:27:23 Idle
mpo-vsim16:/vol/UserHome/vsv  mpo-vsim15:/vol/UserHome/vsv                          Snapvaulted 00:16:26 Idle

Recipe (generic form from the KB):
# Create a new volume
pri> vol create newvol aggr0 10g
pri> vol restrict newvol
pri> snapmirror initialize -S oldvol newvol

In our environment:
mpo-vsim16> vol create UserHome aggr1 80m
Creation of volume 'UserHome' with size 80m on containing aggregate 'aggr1' has completed.
mpo-vsim16> vol restrict UserHome
Volume 'UserHome' is now restricted.
mpo-vsim16> snapmirror initialize -S mpo-vsim14:UserHome UserHome
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.

# At the time of cutover, stop client access to oldvol and continue:
pri> snapmirror update -S oldvol newvol
pri> snapmirror break newvol

mpo-vsim16> snapmirror break UserHome
snapmirror break: Destination UserHome is now writable.
Volume size is being retained for potential snapmirror resync. If you would like to grow the volume and do not expect to resync, set vol option fs_size_fixed to off.
mpo-vsim16>

# Resume client access to newvol
# Update the SnapVault relationship
sec> snapvault modify -S pri:/vol/newvol/qtree sec:/vol/tradvol/qtree
sec> snapvault modify -S pri:/vol/newvol/qtree2 sec:/vol/tradvol/qtree2

If you don't want to wait for the next update schedule, you can update it yourself using the following CLI:
sec> snapvault update /vol/tradvol/qtree

Alternatively, you can use snapvault start, which does the job of both modify and update:
sec> snapvault start -r -S mpo-vsim16:/vol/UserHome/adai mpo-vsim15:/vol/UserHome/adai

In our environment:
mpo-vsim15> snapvault modify -S mpo-vsim16:/vol/UserHome/- mpo-vsim15:/vol/UserHome/UserHome_mpo-vsim14_UserHome
mpo-vsim15> snapvault modify -S mpo-vsim16:/vol/UserHome/adai mpo-vsim15:/vol/UserHome/adai
mpo-vsim15> snapvault modify -S mpo-vsim16:/vol/UserHome/amir mpo-vsim15:/vol/UserHome/amir
mpo-vsim15> snapvault modify -S mpo-vsim16:/vol/UserHome/vsv mpo-vsim15:/vol/UserHome/vsv

Step 7: Re-discover the controllers.
C:\>dfm host discover mpo-vsim16
Refreshing data from host mpo-vsim9.mponbtme.lab.eng.btc.netapp.in (134) now.
C:\>dfm host discover mpo-vsim15
Refreshing data from host mpo-vsim10.mponbtme.lab.eng.btc.netapp.in (135) now.
C:\>

Wait for svTimestamp to be populated on both source and destination:
C:\>dfm detail mpo-vsim15 | findstr /i svTimestamp
svTimestamp 2012-08-08 19:15:02.000000

Step 8: Add the physical resources back to DFM using DFBM, from the DFM server CLI.
C:\>dfbm primary dir add mpo-vsim15:/UserHome mpo-vsim16:/UserHome/-
Started job 3608 to create backup relationship between mpo-vsim15:/UserHome and mpo-vsim16:/UserHome/-.
C:\>

This will not do a rebaseline; since the relationship already exists, we are only adding it to DFM.
C:\>dfbm job list 3608
Job: 3608
Status: Normal
Time Started: 08 Aug 19:29
Description: Importing backup relationship between secondary mpo-vsim15:/UserHome and primary mpo-vsim16:/UserHome/-.
Progress: Done
Arguments: svsVolume=1367&svpHost=141&svpDir=%2FUserHome%2F-&svThrottle=0
C:\>

Take careful note of the syntax here: the secondary/SnapVault volume goes first, with no qtree and no /vol syntax; the primary SnapVault source goes last and includes the qtree.

Step 9: Import the new relationships into the dataset.
From the NetApp Management Console, browse External Relationships and you will find the new SnapVault relationships. Import them to the correct points in the dataset.

PS: You may see a warning, but there is nothing to worry about. After importing the relationships, the dataset will show a status of Warning; when you investigate, the dataset reports "no backup history exists". To clear this warning event, either run an on-demand backup job or wait for the next scheduled backup to occur. Either way, the backup job will force an update of the relationship and create a new backup snapshot. Once the first backup of the imported relationships is successful, the warning status will go away.

Step 10: Resume the dataset, run Protect Now, and review the jobs.
We resume the dataset from the NMC and hit Protect Now with an hourly job. When reviewing the jobs, we noticed a few failed jobs from 30 minutes prior; this may be due to adding the resources back into the dataset (the primary and tertiary), which may have been unnecessary.

Things to note as part of this migration:
- We lose dynamic referencing, and thereby the automatic relationship creation for qtrees (for QSM and SV) that are created on the primary volume after this migration process.
- We also lose dynamic secondary sizing of the imported destination volumes, as they are not marked dp_managed (this may not happen in our case, as it's within the same DFM server). This could be overcome by migrating the destination volumes using the Secondary Space Management feature.
Old backup versions on the primary are lost (within PM; the actual snapshots still exist), as the primary volume's FSID would have changed. The snapshots associated with old backup versions from before the migration are only available via the Operations Manager UI (Backup Manager tab) and not via the PM restore wizard. These old snapshots will not be deleted by PM as per retention; instead they need to be deleted by users. Regards adai
Hi, PM isn't aware of SnapRestore. The easiest way to handle this is to update the relationship outside of PM and then import it back into the dataset. Regards adai
Hi Martin, When you say OnCommand Manager, can you be specific as to whether it is Unified Manager or System Manager? Also, if you could attach a screenshot, it would help us. BTW, what version of Data ONTAP are you running? From what I understand, you are using a VMFS datastore and not an NFS datastore. Regards adai
Hi All, Data ONTAP introduced an option in version 7.2.4 and later called cf.takeover.change_fsid, which is ON by default. This causes the FSID of a volume to change when a failover or giveback happens (as in a MetroCluster). If a volume that is part of a MetroCluster is also being backed up in a Protection Manager dataset, then, since DFM/PM tracks the uniqueness of a volume using its FSID, the dataset resources are removed during a cluster takeover or giveback while this option is ON. The culprit is the option cf.takeover.change_fsid being turned ON. This option only affects volumes that are part of a MetroCluster, not those on just any HA controller pair. It is therefore recommended that volumes from such controllers (i.e., MetroCluster) not be added to a dataset if this option is turned ON. Regards adai