Hi Kem, We just changed our release model from FCS followed by GA to RC followed by GA, to align with the same release model as ONTAP. The RC version has undergone the same process that an FCS goes through. Also, there are some great features in 5.2 related to maintenance of the database, which will help you get stale data entries removed and thereby give you a performance improvement. I still strongly recommend you upgrade to 5.2 RC1. Regards adai
Here is a sample report: FilerName vFilerName SourcePathName Type. The source path will contain either a volume or a qtree name, distinguished using the Type column as volume or qtree. BTW, here is the definition of unprotected that this script uses. List all unprotected data: A volume is unprotected only if it is not VSM'ed and none of its qtrees are QSM'ed. A qtree is unprotected only if it is neither part of a volume that is VSM'ed nor QSM'ed itself. If this is the exact definition of unprotected you want, and the sample report is what you need, then I will share the script that generates it. Regards adai
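PS: Not the script itself, but a minimal Python sketch of the logic it applies, assuming you have already dumped the full volume/qtree inventory and the SnapMirror/SnapVault relationships into two text files (the file names and formats here are hypothetical):

# unprotected_sketch.py -- illustrative only; file names and formats are assumptions
# inventory.txt:     one path per line, "filer:/volume" or "filer:/volume/qtree"
# relationships.txt: one relationship per line, "source destination"

protected = set()
with open("relationships.txt") as f:
    for line in f:
        fields = line.split()
        if fields:
            protected.add(fields[0])   # source of a VSM (volume) or QSM/SV (qtree) relationship

with open("inventory.txt") as f:
    for line in f:
        path = line.strip()
        if not path:
            continue
        parts = path.split("/")        # "filer:/volume/qtree" -> ["filer:", "volume", "qtree"]
        volume = "/".join(parts[:2])
        if len(parts) >= 3:
            # a qtree is unprotected only if neither it nor its containing volume is a source
            if path not in protected and volume not in protected:
                print(path, "qtree")
        else:
            # a volume is unprotected only if it is not VSM'ed
            # and none of its qtrees are QSM'ed
            if volume not in protected and not any(
                    p.startswith(volume + "/") for p in protected):
                print(path, "volume")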
Hi Muhammad, Again, this is not available in the UI, but it can be obtained using an API. IIRC I have a working script using the API to get a list of volumes/qtrees that are not protected by PM. Let me dig around and get you something tomorrow. BTW, what version of DFM are you using? I suggest you upgrade to 5.2RC or at least 5.1.D1. Regards adai
Hi Muhammad, Unfortunately there is no canned report or catalog with these details. The easiest way to get this is via a script using the dfpm policy schedule and connection CLIs. Can you give me a template report showing what you want the report to look like? Is it based on each relationship? Each dataset? Each protection policy? If you can send us a simple report with the column names and values you are expecting, then we can quickly get you a script plugin to generate this report. The same can later be scheduled, emailed, and exported in various formats as well. Regards adai
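PS: While you put the template together, here is a bare-bones Python skeleton of such a script plugin. It just shells out to the two CLIs mentioned above and dumps their raw output; the exact arguments that dfpm policy schedule list and dfpm policy connection list accept should be verified on your build, and the policy-name handling is only an assumption:

# report_plugin_sketch.py -- illustrative skeleton, not the finished plugin
import subprocess
import sys

def run_cli(*args):
    # run a dfpm CLI on the DFM server and return its raw text output
    return subprocess.run(args, capture_output=True, text=True, check=True).stdout

policy = sys.argv[1]  # protection policy name passed on the command line
print("=== Schedules for policy:", policy, "===")
print(run_cli("dfpm", "policy", "schedule", "list", policy))
print("=== Connections for policy:", policy, "===")
print(run_cli("dfpm", "policy", "connection", "list", policy))

Once we know your exact columns, the raw output above would be parsed into one row per relationship/dataset/policy as needed.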
Hi Kem, You found the solution to your problem. Any release older than 4.0.2 does not recognize disks above 1TB. Refer to this Bugs Online entry: http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=429510 At a minimum, to solve this issue you will have to upgrade to 4.0.2, or to 4.0.2D12, the last release in the 4.0.x code line. I strongly recommend you upgrade to 5.1/5.2RC1. Regards adai
BTW, you are hitting this known issue described in Bugs Online. Can you please create a case for the same and add it to bug 670808? http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=670808 Regards adai
Hi Hansen, There are two ways to solve this issue. If you move to OCUM 5.1 or later, where we do Dynamic Secondary Sizing, there is no aggregate-sized volume creation: we will only create the VSM destination volume with the same size as the SnapVault destination, thereby solving the issue of aggregate-sized volume creation. I strongly recommend you upgrade to 5.1 or even 5.2RC1. If you still wish to stay on 5.0.1, then you will have to set the option as follows: dfm option set pmAutomaticSecondaryVolMaxSizeMb=47081062.4 But please note this option is global and will apply to all volume provisioning and resizing, so any volume creation or resize beyond 44.9 TB will fail. This is not required and will not happen if you move to 5.1/5.2RC. Please note, to turn the option OFF you will have to do the following: dfm option set pmAutomaticSecondaryVolMaxSizeMb=0 Regards adai
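PS: For reference, that magic number is just the 44.9 TB cap expressed in MB using binary units: 44.9 TB x 1024 GB/TB x 1024 MB/GB = 47,081,062.4 MB. If you ever need a different cap, compute it the same way; for example, a 40 TB cap would be 40 x 1024 x 1024 = 41,943,040 MB.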
Hi Stephen, A secondary/destination volume can't be a member of more than one dataset. The reason is that the retention and schedules of a relationship are managed at the dataset level, so if a secondary volume were part of more than one dataset, there would be a conflict in terms of retention settings and schedule times. In a way, you answered your own question. If you want different retentions and schedule times, you will have to have different datasets and therefore separate destination volumes. If the retention and schedule times are the same, then you don't need multiple datasets. Also note that having a larger number of relationships terminating in the same destination volume will lead to longer wait times for snapshot creation, as all relationships need to be in the idle state before SnapVault snapshot creation. Regards adai
Hi KK, Sorry, I missed one step. Once the relationships are relinquished, you must remove the members in the following order: primary members followed by secondary members. Then if you try the import, you should be able to do it. Regards adai
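PS: As a sketch, with a hypothetical dataset id (42), node names, and member paths (the syntax matches the dfpm examples later in this thread, but double-check it on your version):

Step 1 - relinquish every relationship in the dataset:
C:\>dfpm dataset relinquish filerB:/secvol/q1

Step 2 - remove the primary members first:
C:\>dfpm dataset remove -N "Primary data" 42 filerA:/privol

Step 3 - then remove the secondary members:
C:\>dfpm dataset remove -N Backup 42 filerB:/secvol

After that, the import should go through.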
Hi ART, SnapManager datasets behave a little differently, as they are mostly controlled by the respective SnapManager. If this had been a non-SnapManager (in NetApp terms, non-application) dataset, what you expect would be possible. Regards adai
Hi Todd, Thanks again! I am a bit confused as to why admin privileges would be needed in the cDOT product, given that it performs only read activities. So far, here is the list of things I cannot do in cDOT that might have required admin privileges (in addition to SSH access): *dfm run cmd

You are right. I did not say you would need admin privileges; all I said was that we haven't tested with a limited-capability user, and that there is no certified/minimum set of capabilities.

Anything else I am missing? My customer and I are both running OCUM 5.x with a read-only custom user/role and seem to be able to perform all necessary reporting/alerting. We are planning to use WFA custom workflows for all provisioning and data protection activities (since no such capability exists in OCUM 5.x for cDOT). We also cannot do configuration management, so this tool strikes us as a read-only interface not needing an admin privilege/role on the filer for ONTAPI or SNMP. I do see some possible use cases for admin privilege for dfm run cmd, but if this is not required, I really cannot see a reason to provide such elevated permissions now that OCUM has effectively become a view-only reporting tool with its current feature set. So, I will continue to steer my customer toward the supported configuration (full cluster admin rights), but at the moment I do not see what they will miss out on without them, and I agree that a read-only role seems sufficient based on my current lab investigation.

Can you share the read-only custom user and the capabilities that you are using?

Regarding polling, we used to be able to tune the polling/retention intervals for 7-Mode with the 'dfm perf data list' and 'dfm perf data modify' commands. 'dfm perf data list' still works, but 'dfm perf data modify' does not. Is there another easy way to increase the retention time and decrease the polling interval to something more WAN-friendly under cDOT? Here is an example of some settings I have used in the past on 7-Mode:

REM WAN Settings
for %%r in (netappdr, netappdr2) do (
    dfm perf data modify -G system -o %%r -s 30m -r 12week -f
    dfm perf data modify -G disk -o %%r -s 30m -r 12week -f
    dfm perf data modify -G aggregate -o %%r -s 30m -r 12week -f
    dfm perf data modify -G ifnet -o %%r -s 5m -r 12week -f
    dfm perf data modify -G nfsv3 -o %%r -s 30m -r 12week -f
    dfm perf data modify -G nfsv4 -o %%r -s 30m -r 12week -f
    dfm perf data modify -G prisched -o %%r -s 30m -r 12week -f
    dfm perf data modify -G target -o %%r -s 30m -r 12week -f
    dfm perf data modify -G lun -o %%r -s 30m -r 12week -f
    dfm perf data modify -G volume -o %%r -s 30m -r 12week -f
    dfm perf data modify -G cifs -o %%r -s 30m -r 52week -f
    dfm perf data modify -G fcp -o %%r -s 30m -r 12week -f
    dfm perf data modify -G iscsi -o %%r -s 30m -r 52week -f
    dfm perf data modify -G vfiler -o %%r -s 30m -r 12week -f
    dfm perf data modify -G processor -o %%r -s 5m -r 12week -f
    dfm perf data modify -G perf -o %%r -s 30m -r 52week -f
    dfm perf data modify -G priorityqueue -o %%r -s 30m -r 12week -f
    dfm perf data modify -G wafl -o %%r -s 30m -r 12week -f
    dfm perf data modify -G qtree -o %%r -s 30m -r 12week -f
)
dfm perf data list

They no longer work in cDOT. Any suggestions appreciated.

Unfortunately, the Perf Advisor capability in 5.1/5.2 for clustered Data ONTAP is limited. When you said customized polling, I thought of general monitoring using API/SNMP, like capacity, quota, etc. Please take a look at this GSS video on what capabilities are supported in Cluster-Mode of Performance Advisor:
Performance Advisor Features in OnCommand Unified Manager 5.1 for clustered Data ONTAP

Also take a look at this table on what is and is not supported for Cluster-Mode in 5.1: What's New in OnCommand Unified Manager 5.1 Release

Regards adai
Hi Jeff, The reason you get the event is that you didn't relinquish the relationship from your dataset. dfpm dataset remove is the same as removing the destination volume from NMC. Please use dfpm dataset relinquish first, then follow with NMC removal of the secondary volume or dfpm dataset remove; both are the same. BTW, use version 5.2: there is a feature that purges all events older than 180 days, or older than the value set in eventPurgeInterval. Regards adai
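PS: On 5.2, a quick way to check and tune that purge setting (the exact value format for eventPurgeInterval is an assumption on my part; verify it with dfm option list on your build):

C:\>dfm option list eventPurgeInterval
C:\>dfm option set eventPurgeInterval=180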
Hi Sean, You are correct, and I agree. If you go with limited capabilities, you will encounter problems with Performance Advisor or Protection Manager functionality. Also, OCUM uses SSH in some cases where there is a lack of API or SNMP support. BTW, if you wish, you can start by creating a role with all read capabilities and, based on trial and error, keep adding capabilities until you don't get any errors. But the next version of ONTAP may change some of these, and you will have to redo this exercise again in case there are ONTAP changes. Regards adai
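PS: On a 7-Mode controller, the trial-and-error starting point could look like the following (the role/group/user names are made up, and the capability list is only a first guess, not a certified set):

filer> useradmin role add dfm_ro -a login-http-admin,api-*
filer> useradmin group add dfm_ro_grp -r dfm_ro
filer> useradmin user add dfmmon -g dfm_ro_grp

Then point DFM at the dfmmon credentials and trim the api-* capability down by trial and error, re-testing after each change.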
Hi Syed, ONTAP 8.1.2 is not supported on 5.0.2. Please use OCUM 5.2 or 5.1, in which ONTAP 8.1.2 is supported. As mentioned by Niels, all versions of OCUM support monitoring of AV using Performance Advisor counters. Regards adai
Hi Todd, OCUM 5.2 is supported only with cluster admin users. There is no tested or certified user with least privileges. Also, the default polling intervals are already set in the product as shipped. Alerts are up to the customer to create, for whatever they would like to be alerted on or consider critical. Regards adai
Hi Jeff, dfbm primary dir relinquish removes the entry from the database, but the next SnapVault monitoring cycle will rediscover it again, whereas dfpm dataset relinquish makes the dataset relinquish its control over the relationship, such as scheduling it and doing conformance checks on it. Using dfbm will lead to the dataset showing a redundant-relationship error. dfpm dataset remove removes the secondary volume from the dataset. Then conformance kicks in and finds that there is a primary and no secondary, but as per the protection policy the primary needs one. So it goes ahead, provisions a new secondary volume, and does a re-baseline from the primary. Hope this helps. Regards adai
Hi Jeff, Here are the detailed steps on how to achieve this.

Dataset details:
++++++++++++
C:\>dfpm dataset list -x 462
Id: 462
Name: moreThan255
Protection Policy: Back up
Application Policy:
Description:
Owner:
Contact:
Volume Qtree Name Prefix:
Snapshot Name Format: %T
Primary Volume Name Format:
Secondary Volume Name Format:
Secondary Qtree Name Format:
DR Capable: No
Requires Non Disruptive Restore: No
Node details:
    Node Name: Primary data
    Resource Pools:
    Provisioning Policy:
    Time Zone:
    DR Capable: No
    vFiler:
    Node Name: Backup
    Resource Pools: mpovsim14Rp
    Provisioning Policy: dedupe
    Time Zone:
    DR Capable: No
    vFiler:

Primary and secondary volume details:
+++++++++++++++++++++++++++++++++++
C:\>dfpm dataset list -m 462
Id    Node Name      Dataset Id  Dataset Name  Member Type  Name
----  -------------  ----------  ------------  -----------  ---------------------
446   Primary data   462         moreThan255   volume       mpo-vsim13:/noVolLang
464   Backup         462         moreThan255   volume       mpo-vsim14:/noVolLang

Relationship details:
++++++++++++++++++
C:\>dfpm dataset list -R 462
Id   Name         Protection Policy  Provisioning Policy  Relationship Id  State        Status  Hours  Source                     Destination
---  -----------  -----------------  -------------------  ---------------  -----------  ------  -----  -------------------------  ------------------------------------------------------
462  moreThan255  Back up                                 468              snapvaulted  idle    0.1    mpo-vsim13:/noVolLang/two  mpo-vsim14:/noVolLang/two
462  moreThan255  Back up                                 470              snapvaulted  idle    0.1    mpo-vsim13:/noVolLang/one  mpo-vsim14:/noVolLang/one
462  moreThan255  Back up                                 472              snapvaulted  idle    0.1    mpo-vsim13:/noVolLang/-    mpo-vsim14:/noVolLang/moreThan255_mpo-vsim13_noVolLang

Now the steps to get a new secondary volume created:
+++++++++++++++++++++++++++++++++++++++++++++++
1. Relinquish the relationships using the dfpm dataset relinquish CLI, so that PM stops scheduling the relationships.
2. Remove the secondary volume using the dfpm dataset remove CLI. This way the old secondary volume, where we reached the 250-snapshot limit, is removed, and a new secondary volume is created with a re-baseline from the primary. You still keep the relationship between your primary and the old secondary (though this is not a requirement for PM to restore; also, PM will no longer update this relationship).

STEP 1:
C:\>dfpm dataset relinquish mpo-vsim14:/noVolLang/moreThan255_mpo-vsim13_noVolLang
Relinquished relationship (472) with destination moreThan255_mpo-vsim13_noVolLang (471).

C:\>dfpm dataset relinquish mpo-vsim14:/noVolLang/one
Relinquished relationship (470) with destination one (469).

C:\>dfpm dataset relinquish mpo-vsim14:/noVolLang/two
Relinquished relationship (468) with destination two (467).

As you can see, the dataset no longer shows these relationships:

C:\>dfpm dataset list -R 462
Id   Name         Protection Policy  Provisioning Policy  Relationship Id  State        Status  Hours  Source                     Destination
---  -----------  -----------------  -------------------  ---------------  -----------  ------  -----  -------------------------  ------------------------------------------------------
C:\>

STEP 2:
C:\>dfpm dataset remove -N Backup 462 mpo-vsim14:/noVolLang
Dataset dry run results
----------------------------------
Do:     Provision flexible volume (backup secondary) of size 26.4 MB
Effect: Provision a new flexible volume of 26.4 MB from aggregate 'mpo-vsim14:aggr1' (321).
Do:     Enable deduplication on flexible volume.
Effect: Enable deduplication on flexible volume 'VolToBeProvision:moreThan255' (29)
Do:     Create backup relationship(s) for dataset 'moreThan255' (462) on connection 1.
Effect: Create backup relationship(s) between 'mpo-vsim13:/noVolLang/two' and new volume to be provisioned from resource pool(s) 'mpovsim14Rp' (396).
        Create backup relationship(s) between 'mpo-vsim13:/noVolLang/one' and new volume to be provisioned from resource pool(s) 'mpovsim14Rp' (396).
        Create backup relationship(s) between 'mpo-vsim13:/noVolLang/-' and new volume to be provisioned from resource pool(s) 'mpovsim14Rp' (396).
Removed volume mpo-vsim14:/noVolLang (464) from dataset moreThan255 (462).
C:\>

If you wish to check what this CLI will do before actually running it, you can do a dry run with the -D switch:

C:\>dfpm dataset remove -D -N Backup 462 mpo-vsim14:/noVolLang
Dataset dry run results
----------------------------------
Do:     Provision flexible volume (backup secondary) of size 26.4 MB
Effect: Provision a new flexible volume of 26.4 MB from aggregate 'mpo-vsim14:aggr1' (321).
Do:     Enable deduplication on flexible volume.
Effect: Enable deduplication on flexible volume 'VolToBeProvision:moreThan255' (21)
Do:     Create backup relationship(s) for dataset 'moreThan255' (462) on connection 1.
Effect: Create backup relationship(s) between 'mpo-vsim13:/noVolLang/two' and new volume to be provisioned from resource pool(s) 'mpovsim14Rp' (396).
        Create backup relationship(s) between 'mpo-vsim13:/noVolLang/one' and new volume to be provisioned from resource pool(s) 'mpovsim14Rp' (396).
        Create backup relationship(s) between 'mpo-vsim13:/noVolLang/-' and new volume to be provisioned from resource pool(s) 'mpovsim14Rp' (396).

STEP 3:
Now you can see only the relationships of the new secondary volume.

C:\>dfpm dataset list -R 462
Id   Name         Protection Policy  Provisioning Policy  Relationship Id  State        Status  Hours  Source                     Destination
---  -----------  -----------------  -------------------  ---------------  -----------  ------  -----  -------------------------  --------------------------------------------------------
462  moreThan255  Back up                                 477              snapvaulted  idle    0.0    mpo-vsim13:/noVolLang/one  mpo-vsim14:/noVolLang_1/one
462  moreThan255  Back up                                 480              snapvaulted  idle    0.0    mpo-vsim13:/noVolLang/two  mpo-vsim14:/noVolLang_1/two
462  moreThan255  Back up                                 481              snapvaulted  idle    0.0    mpo-vsim13:/noVolLang/-    mpo-vsim14:/noVolLang_1/moreThan255_mpo-vsim13_noVolLang

You can see the relationships of the primary with the old secondary and the new secondary using the dfpm relationship list CLI:

C:\>dfpm relationship list
Relationship Id  Relationship Type  Dataset Id  Dataset Name  Source                     Destination                                               Deleted  Deleted By
---------------  -----------------  ----------  ------------  -------------------------  --------------------------------------------------------  -------  ----------
468              snapvault          0                         mpo-vsim13:/noVolLang/two  mpo-vsim14:/noVolLang/two                                 No
470              snapvault          0                         mpo-vsim13:/noVolLang/one  mpo-vsim14:/noVolLang/one                                 No
472              snapvault          0                         mpo-vsim13:/noVolLang/-    mpo-vsim14:/noVolLang/moreThan255_mpo-vsim13_noVolLang    No
477              snapvault          462         moreThan255   mpo-vsim13:/noVolLang/one  mpo-vsim14:/noVolLang_1/one                               No
480              snapvault          462         moreThan255   mpo-vsim13:/noVolLang/two  mpo-vsim14:/noVolLang_1/two                               No
481              snapvault          462         moreThan255   mpo-vsim13:/noVolLang/-    mpo-vsim14:/noVolLang_1/moreThan255_mpo-vsim13_noVolLang  No

You can also see all the backup versions of both the new and old secondary volumes, and use the Restore Wizard as well.

C:\>dfpm backup list 462
Backup Id  Backup Version        Retention Type  Retention Duration (in seconds)  Node Name     Description  Properties(Name=Value)
---------  --------------------  --------------  -------------------------------  ------------  -----------  ----------------------
278        10 May 2013 01:30:55  monthly         8467200                          Backup
275        10 May 2013 01:29:15  monthly                                          Primary data
276        10 May 2013 01:28:14  weekly          4838400                          Backup
274        10 May 2013 01:28:10  weekly          1209600                          Primary data
272        10 May 2013 01:11:09  daily           1209600                          Backup
271        10 May 2013 01:11:05  daily           604800                           Primary data
270        10 May 2013 01:05:08  daily           1209600                          Backup
269        10 May 2013 01:05:05  daily           604800                           Primary data
268        10 May 2013 01:01:19  unlimited                                        Backup

BTW, we use API restore in the case of application datasets, and it's better to leave the relationship with the old secondary volume. In most cases we use NDMP for restore. Hope this helps. Regards adai
Hi Jeff, Please upgrade to 5.0.2P1 or even 5.2. This is not tested or supported. Please see the below public report, if it helps. http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=524546 Regards adai
Sean, We don't have a certified user with the required capabilities enabled to do the ONTAP-side work. What I have seen in my experience is that many users create local users on the filer that belong to the admin group, like dfmuser (essentially with root capabilities), to log in to ONTAP via DFM. Regards adai
Hi, Firstly, I suggest you move to version 5.0.2P1 or 5.2. In 5.0 and later we have a view for events which can give you what you are looking for. Also, instead of using dfm event list, use the report CLI: dfm report view events-history <volume id> Regards adai
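PS: For example, if the volume's DFM object ID is 123 (a made-up ID for illustration):

C:\>dfm report view events-history 123

Like any other DFM report, this one can also be scheduled, emailed, or exported.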
Hi, Yes, the pmMaxSvRelsPerSecondaryVol option is global. Use fan-in to control the number of destination volumes; even that is global. Can you give an example of your primary layout, like how many LUNs per volume and qtree? There is no PM way of migrating a qtree. PM can only migrate a full volume. Regards adai
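PS: A minimal example of checking and changing that option (the value 10 is just an illustration; remember the change applies globally and immediately):

C:\>dfm option list pmMaxSvRelsPerSecondaryVol
C:\>dfm option set pmMaxSvRelsPerSecondaryVol=10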
Hi Ken, How did you migrate? Can you describe the steps you performed? What were the errors or problems that you encountered? Regards adai