Hi Muhammad,

The option is a hidden one, so until you set it you can't list it.

PS: Setting this option will prevent PM from creating a volume larger than the value of this option. Also note that this is a global option and applies across the board.

Regards,
adai
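P.S. Once the option name is known, it can be checked and set from the dfm CLI. A minimal sketch; <hiddenOptionName> and <maxVolumeSizeMb> below are placeholders, since the actual option name is not shown in this thread:

C:\>dfm option list <hiddenOptionName>
C:\>dfm option set <hiddenOptionName>=<maxVolumeSizeMb>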
Hi Muhammad,

As Kevin said, the solution is to upgrade to OCUM 5.1 or 5.2, where Dynamic Secondary Sizing is available for Volume SnapMirror as well. Until you complete all your certification, or if you are waiting for a GA release, you can follow the workaround for the known issue you are hitting. I suggest you open a case and add it to bug 670808. Here is the link to the bug details, which include the workaround: http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=670808

Regards,
adai
Hi Marc,

First of all, please upgrade to 5.0.2P1 or later, as it is the current GA release and a true 64-bit application. 4.0.2 reached End of Support with the release of OCUM 5.2, and I would recommend moving to 5.2 as soon as possible.

Now coming to your question. Using PM you cannot create a policy topology of your own beyond the predefined ones. We do have policies that come close to what you are looking for, except that the second copy is a VSM or QSM relationship: "Backup and Mirror" or "Mirror to 2 destinations".

Alternatively, you can achieve what you are looking for by using the simple Backup policy twice on the same source with two sets of destinations. You will end up creating two datasets for the same primary, each with a Backup policy and a different destination volume. In one of the two datasets, set the local backup schedule to None so that you don't have twice the number of snapshots being taken and retained on the primary. You will then have two sets of backup destinations for the same primary, but in two datasets instead of one, due to the policy topology restrictions.

Hope this helps.

Regards,
adai
Hi Colin,

There is no standard or custom report that gives you this information. However, you can write a simple script to gather the information you are looking for using the dfm CLI.

Regards,
adai
Hi Michael,

Please try this workaround and let me know if you still face the issue. This issue is generally seen when the temporary directory for the SYSTEM account is missing. Create the temporary directory from Windows Explorer. On Windows 2008 64-bit edition, create a directory named "temp" with default permissions at the following path: C:\Windows\SysWOW64\config\systemprofile\AppData\Local\

Regards,
adai
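P.S. The same directory can also be created from an elevated command prompt; a minimal sketch:

C:\>mkdir C:\Windows\SysWOW64\config\systemprofile\AppData\Local\temp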
Hi Regis,

"So I guess it means that there should always be 1 weekly snapshot available on the primary node. Is 1 the default value for weekly snapshots on Primary data?"
Yes, that is what my memory says.

"If I understand well (assuming the retention count for weekly snapshots is 1 on the customer site as well), in our case where this weekly snapshot was an on-demand snapshot and there is no schedule for weekly snapshots, it means it would stay forever unless we generate another snapshot or we change the retention count to 0. Am I right?"
Yes. Even after you generate another weekly snapshot, or set the count to 0, it would still remain until 14 days after it was created, as that is the retention duration.

Regards,
adai
Hi Regis,

What is the retention count? The retention count is only available via the CLI; use the following command to find it. PM deletes a snapshot or backup version only when both the retention duration and the retention count are exceeded; in other words, PM does the least aggressive deletion of snapshots.

dfpm policy node get -q <policy name-or-id>

It is not obvious how Protection Manager determines when to delete a snapshot it created, so here is the algorithm for deleting PM (Protection Manager) created snapshots.

Each PM-created snapshot is categorized by PM as daily, hourly, weekly, monthly or unlimited. For each category except unlimited, there is a minimum retention count and a retention duration. These settings are used to determine whether old or expired snapshots will be deleted.

1. Are there more snapshots than the retention count?
   No: Do not delete any snapshots. Exit.
   Yes: Continue to Step 2.
2. Are there any snapshots older than the retention duration?
   No: Do not delete any snapshots. Exit.
   Yes: Create a list of candidate-to-be-deleted snapshots that exceed the retention duration. Go to Step 3.
3. Loop through the list of candidate-to-be-deleted snapshots. Is this snapshot busy?
   No: Delete it.
   Yes: Do not delete it.

This algorithm is started by the Protection Manager conformance checker and runs on all datasets that are not suspended. It runs for all snapshot categories except unlimited; unlimited snapshots are never deleted by this algorithm.

Below is an example policy. There are retention counts and retention durations, per policy node, for hourly, daily, weekly and monthly.

> dfpm policy node get example_policy
Node ID: 1
Node Name: Primary data
Hourly Retention Count: 2
Hourly Retention Duration: 86400
Daily Retention Count: 2
Daily Retention Duration: 604800
Weekly Retention Count: 1
Weekly Retention Duration: 1209600
Monthly Retention Count: 0
Monthly Retention Duration: 0
Backup Script Path:
Backup Script Run As:
Failover Script Path:
Failover Script Run As:
Snapshot Schedule Id: 0
Snapshot Schedule Name:
Warning Lag Enabled: Yes
Warning Lag Threshold: 129600
Error Lag Enabled: Yes
Error Lag Threshold: 172800

Node ID: 2
Node Name: Backup
Hourly Retention Count: 0
Hourly Retention Duration: 0
Daily Retention Count: 2
Daily Retention Duration: 1209600
Weekly Retention Count: 2
Weekly Retention Duration: 4838400
Monthly Retention Count: 1
Monthly Retention Duration: 8467200

Regards,
adai
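P.S. The retention duration values above are in seconds. A quick worked conversion for reference:

86400 seconds = 1 day (hourly retention duration on Primary data)
604800 seconds = 7 days (daily retention duration on Primary data)
1209600 seconds = 14 days (weekly retention duration on Primary data)
4838400 seconds = 56 days, i.e. 8 weeks (weekly retention duration on Backup)
8467200 seconds = 98 days, i.e. 14 weeks (monthly retention duration on Backup)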
Hi Keith,

Yes, you can do this with a primary provisioning policy, which has the ability to run a post-provisioning script. Unfortunately, a secondary provisioning policy does not have the capability to invoke a post-provisioning script. I will add you to an internal email thread regarding the same.

Regards,
adai
Yes. So if I understand your scenario, you need to split the single server managing, say, 500 datasets into two servers managing 250 each? Is that right? If your answer is yes, I will send/upload you the procedure.

Regards,
adai
Hi Tony,

Can you check if there is a trap defined for the same? If so, then Craig's suggestion of adding the DFM host as a trap receiver should get you this event. The only downside is that all events raised for traps have the severity Information, and they don't show up in the Events report. Please use the Events History report to view them, or change the trap severity to make them show up in the normal report. The default event severity definition in OCUM for traps is Information, but it can be modified.

Default severity definition for all trap events is Information:
+++++++++++++++++++++++++++++++++++++++++++++++++
C:\>dfm eventtype list | findstr /i trap-received
alert-trap-received Information alert-trap-received
critical-trap-received Information critical-trap-received
emergency-trap-received Information emergency-trap-received
error-trap-received Information error-trap-received
information-trap-received Information information-trap-received
notification-trap-received Information notification-trap-received
warning-trap-received Information warning-trap-received
C:\>

Modify them as follows:
++++++++++++++++++++
C:\>dfm eventtype modify -v Warning alert-trap-received
Modified event "alert-trap-received".
C:\>dfm eventtype modify -v Critical critical-trap-received
Modified event "critical-trap-received".
C:\>dfm eventtype modify -v Emergency emergency-trap-received
Modified event "emergency-trap-received".
C:\>dfm eventtype modify -v Error error-trap-received
Modified event "error-trap-received".
C:\>dfm eventtype modify -v Warning warning-trap-received
Modified event "warning-trap-received".
C:\>

Now you will receive the traps with the appropriate severity:
++++++++++++++++++++++++++++++++++++++++++++++++
C:\>dfm eventtype list | findstr /i trap-received
alert-trap-received Warning alert-trap-received
critical-trap-received Critical critical-trap-received
emergency-trap-received Emergency emergency-trap-received
error-trap-received Error error-trap-received
information-trap-received Information information-trap-received
notification-trap-received Information notification-trap-received
warning-trap-received Warning warning-trap-received

The drawback or side effect:
======================
When a trap other than Information severity is generated, the object status of the filer changes from green to orange, yellow or red. Even when the condition is rectified, the object status does not return to green, because there is no neutralizing event. To make the object status green again, resolve the event by clicking Resolve Now.

Regards,
adai
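P.S. If the trap receiver has not been set up yet, on a 7-Mode controller the DFM server can be added as an SNMP trap host from the controller CLI. A minimal sketch, assuming a hypothetical DFM hostname of dfm-server (verify the exact syntax for your Data ONTAP release):

filer> snmp traphost add dfm-server

Running snmp traphost with no further arguments lists the currently configured trap hosts.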
Hi Rabe,

The statusMap table holds a mapping of object status (event, controller, volume, ...) against numerical constants. The monitor sets the objStatus for each object to one of the values of statusName; the source uses the statusIndex rather than the text field. The numerical values are built from powers of 2.

The possible values for this field are as follows:

statusName    statusIndex
Normal        16
Warning       48
Unknown       32
Error         64
Critical      80
Emergency     96
N/A           24
Information   28

statusName signifies the status of the object as determined by the dfmmonitor.

Regards,
adai
Hi Keith & Stading,

There is no version of OCUM/Provisioning Manager that allows you to set WAFL compression, but all versions of Provisioning Manager allow you to set dedupe.

Regards,
adai
Hi,

There is a default role with read-only capabilities called GlobalReadOnly; any user with this role can only list objects and cannot do anything more.

Regards,
adai
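P.S. A minimal CLI sketch for checking the role and assigning it to a user; the user name jdoe is hypothetical, and the exact syntax may vary by release, so please verify with dfm role help and dfm user role help:

C:\>dfm role list
C:\>dfm user role add jdoe GlobalReadOnly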
Hi Geert,

Engineering is actively working on this and has found some root causes as well. To reconfirm, and to understand whether you are impacted by the same problem, can you help us answer the questions below?

1. What is the value of the option dpMaxFanInRatio?
2. What is the value of the option dpDynamicSecondarySizing?
3. Are you provisioning the secondary (Backup)?
4. Are you provisioning the tertiary (Mirror)?
5. What type of job are you seeing failures for (relationship creation or transfer, on-demand or scheduled backup/mirror)?
6. Are you using an OSSV system?
7. Is dedupe enabled on the volumes?
8. How many source volumes are being backed up?
9. Can you give the output of the job that failed: dfpm job detail <job id> (commands in the P.S. below)

Regards,
adai
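P.S. The option values and the job detail in questions 1, 2 and 9 can be collected with the dfm CLI; a minimal sketch, assuming a failed job id of 123 (substitute your own):

C:\>dfm option list dpMaxFanInRatio
C:\>dfm option list dpDynamicSecondarySizing
C:\>dfpm job detail 123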
Hi Jean-Christophe,

The reason is that, with Dynamic Secondary Sizing, it looks like we are using the projected size (the value of the option pmOSSVDirSecondaryVolSizeMb) and not the actual size of the SnapVault destination volume when sizing the Volume SnapMirror destination volume. For an OSSV destination volume we always create an aggregate-sized volume if there is free space, as per the value of the option pmOSSVDirSecondaryVolSizeMb. Can you set the value of this option to the size of the secondary volume created for the OSSV destination volume?

BTW, I have created bug 711018 for this; can you open a case and add it to the bug?

Regards,
adai
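P.S. For reference, the option can be checked and set from the dfm CLI; the value 2097152 (2 TB expressed in MB) below is purely an illustration, substitute the actual size of your OSSV secondary volume:

C:\>dfm option list pmOSSVDirSecondaryVolSizeMb
C:\>dfm option set pmOSSVDirSecondaryVolSizeMb=2097152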
Hi Reid,

We now have a Getting Started video that explains in detail how our resource selection algorithm works. Here is the link: Using OnCommand Unified Manager 5.1 - 7 Mode Provisioning Enhancements.

Regards,
adai
Hi Muhammad,

Thanks for the sample/template of the report you are looking for. Unfortunately we will not create it in PowerShell; instead we will write it in Perl to keep the script OS-agnostic. We will post a working script sometime around the middle of next week.

Regards,
adai
Hi Muhammad,

The script is only in Perl, for the sole reason that it is OS-agnostic and we can leverage it at multiple customers. If you want the same in PowerShell, you will have to write your own using the logic of the Perl script. The ROI is higher for Perl than for PowerShell, at least for broad adoption of this script.

Regards,
adai
Hi Sean,

As I said earlier, whenever there is a deficiency in the API, we use the CLI to collect some monitoring data. In order to do that, we need the ssh capability to log in to the controller and the cli capability to execute the command. A long time back, during Data ONTAP 7G, a colleague and I worked on this for a large NetApp customer. At that time we created KB 1011412. Though we titled it "ReadOnly", strictly speaking it is not read-only, as it has system-cli capabilities.

Regards,
adai
That's awesome, BABA Rick, very quick turnaround. BTW, does your code take care of the "unprotected" definition in my earlier post?

Regards,
adai