Hi Brian, You should look at the dfm.mib file inside the following folder: <installationdirectory>/NTAPdfm/misc. It contains the trap ID for each of the events generated by DFM. This is used when DFM is configured as a trap sender to third-party tools like IBM Tivoli and HP OpenView. By the way, what are you trying to do with the OID? Can you please explain your requirement? Regards adai
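P.S. If it helps, this is how I usually look up a trap ID in that file; "volumeFull" here is just a placeholder for whichever event you care about, and the Linux path assumes a default install location:

On Windows:
  cd <installationdirectory>\NTAPdfm\misc
  findstr /I "volumeFull" dfm.mib

On Linux:
  grep -i "volumeFull" /opt/NTAPdfm/misc/dfm.mib

The matching trap definition in the MIB gives you the trap number/OID that DFM sends for that event.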
Based on our sizing and testing, we found Firefox to perform better than IE in most cases, though the difference is small (less than 0.5x or even below). Regards adai
Conformance is triggered on every dataset edit and commit. So if a relationship needs to be re-baselined, then yes, that will be triggered when the edit is committed. Can you elaborate on your question? What do you mean by an existing, running Protection Manager SnapVault dataset? Regards adai
Hi Shiva/Earls, I suspect a regression. As per the functionality of the reaper cleanup mode, this is what it is supposed to do when its value is Orphans: the default value of dpReaperCleanupMode is Orphans, which only allows the reaper process to delete orphaned, non-imported relationships that are no longer in a dataset. Imported relationships are not removed by the reaper cleanup mode except when its value is Automatic. Regards adai
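P.S. As a quick check on your server (the output layout below is approximate):

  dfm options list dpReaperCleanupMode
  Option              Value
  ------------------- ------------------------------
  dpReaperCleanupMode Orphans

and, if it has been changed, you can put it back to the default with:

  dfm options set dpReaperCleanupMode=Orphans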
Hi Simon,

"If I create a dataset and protect it with a policy, it is only possible to specify a volume on the FAS2020 as destination... Is it required to create a volume for every dataset/policy?"

Not required. A secondary volume can back up as many as 50 OSSV paths. For example, if you add a Windows box with 2 drives and 1 system state to the dataset, PM creates 3 qtrees inside the secondary volume. In this way one secondary volume accommodates up to 50 qtrees, that is 50 OSSV paths. When the 51st path comes into the same dataset, a second volume is created. If the paths belong to different datasets, then yes, each dataset will create its own secondary volume, since one destination volume cannot be a member of multiple datasets.

"If I study the command line commands from OSSV/SnapVault, it seems to be possible to set a qtree as a destination?"

True, the CLI allows it, but PM doesn't. If the same volume were shared across two datasets with different schedule times and different retention settings for backup, there would be a conflict; that is the reason PM doesn't allow you to use the same volume across multiple datasets.

Regards adai
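P.S. To illustrate the layout (all names below are made up): for a Windows OSSV host winhost1 with drives C:, D: and System State, PM would lay the relationships out on a single secondary volume roughly like this:

  fas2020:/ossv_secondary_1/winhost1_C
  fas2020:/ossv_secondary_1/winhost1_D
  fas2020:/ossv_secondary_1/winhost1_SystemState

You can see the actual members and destination qtrees PM created for your dataset with:

  dfpm dataset list -m <dataset name or id>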
Hi Reide, Can you get the output of the following commands?

  dfm options list | findstr /I dp
  dfpm relationship list -x

and highlight the deleted ones. Did you change any other options? Having the dfm diag output from the server will also help. Regards adai
Error: Can't connect to host (err=10061). This error usually means the call is failing at the DFM server. Can you get the output of:

1. dfm service list
2. dfm options list | findstr /I http

Regards adai
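P.S. As a quick sketch of what I would look for in that output (service and option names can vary a little between DFM versions, so treat these as pointers rather than exact output):

  dfm service list                      (all services, especially the web/http service, should show as running)
  dfm options list | findstr /I http    (confirm the HTTP/HTTPS port options match the port your client is connecting to)
  dfm service start                     (starts any stopped services)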
As all of us know, today none of the manageability tools provide setup and management of shares and exports in a vFiler. Neither FilerView nor SM 2.0, our basic device management tools, does this. The same is the case with Provisioning Manager, where one can set up CIFS in a vFiler but not create shares or exports unless they are part of a dataset. Also, modifying shares or exports is still not possible with Provisioning Manager even if they are part of a dataset. I am trying to see if WFA can solve this (or rather bridge this disconnect). Regards adai
This is a valid error we get when the volume runs out of space. See if there is any message like this in the filer EMS log:

[wafl.vol.full:notice]: file system on volume <volume name> is full

Regards adai
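P.S. A quick way to confirm it on the filer itself, with <volume name> as a placeholder:

  filer> df -h <volume name>
  filer> snap reserve <volume name>

If the volume, or its snapshot reserve, is at or close to 100%, free up space (or grow the volume / adjust the snap reserve) and the error should clear.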
Hi Muhammad, I am not clear on the scenario. Let me see if I understood it correctly. You have a dataset with a Mirror policy, as below:

C:\>dfpm dataset list -m 214
Id         Node Name            Dataset Id Dataset Name         Member Type  Name
---------- -------------------- ---------- -------------------- ------------ --------------------
215        Primary data         214        NfsOnly              volume       sim1:/SourceOnly
220        Mirror               214        NfsOnly              volume       sim2:/MirrorData
C:\>

Your primary has been deleted on the filer, and now you are trying to back up from the Mirror node. Is this correct? By the way, what is the output of dfpm dataset list -m <dsid> now that the primary is deleted? Regards adai
Hi Vivek, Please reach out to NGS for this, as it might require detailed analysis. By the way, try checking whether this option is enabled on the filer: httpd.admin.enable. Regards adai
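P.S. On a 7-mode filer you can check it, and enable it if needed, like this (the "off" shown is only an example of what you might see when it is disabled):

  filer> options httpd.admin.enable
  httpd.admin.enable           off
  filer> options httpd.admin.enable on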
Can you give us the full job details output? Are all the dfm services running? Is the connection between DFM and the filer fine? You can check the latter with dfm host diag <filername/ip/id>. Regards adai
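P.S. The commands I would start with on the DFM server (the dfpm job detail form is from memory and may differ slightly on your DFM version, so check the CLI help; <job-id> is whatever ID the failed job shows):

  dfm service list
  dfm host diag <filername/ip/id>
  dfpm job detail <job-id>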
Hi WH, This is what I did.

My SNMP settings on the filer:

sim1> snmp
contact:
location: lab-DC
authtrap: 0
init: 1
traphosts:
        192.168.98.10 (192.168.98.10) <192.168.98.10>
community:
        ro public
sim1>

My DFM settings:

C:\>dfm options list snmpTrapListenerEnabled
Option                  Value
----------------------- ------------------------------
snmpTrapListenerEnabled Yes
C:\>

Then I created a volume on the filer:

sim1> vol create auto_grow_test -s none aggr1 25m
Creation of volume 'auto_grow_test' with size 25m on containing aggregate 'aggr1' has completed.
sim1>

Turned on autosize; below are its details:

sim1> vol autosize auto_grow_test
Volume autosize is currently ON for volume 'auto_grow_test'. The volume is set to grow to a maximum of 40 MB, in increments of 5 MB.
sim1>

I forced DFM to discover that volume immediately, instead of waiting for the next cycle, by issuing the CLI below:

dfm host discover <filerid/name>

Then I kept writing to the volume so that it would autogrow. Below are the messages from the filer:

sim1*> df -h auto_grow_test
Filesystem                      total   used    avail   capacity  Mounted on
/vol/auto_grow_test/            26MB    21MB    4768KB  82%       /vol/auto_grow_test/
/vol/auto_grow_test/.snapshot   6756KB  0KB     6756KB  0%        /vol/auto_grow_test/.snapshot
sim1*>
sim1*> Tue Jul 26 13:23:07 GMT [wafl.vol.autoSize.done:info]: Automatic increase size of volume 'auto_grow_test' by 5120 kbytes done.
sim1*> Tue Jul 26 13:23:17 GMT [wafl.vol.autoSize.done:info]: Automatic increase size of volume 'auto_grow_test' by 2048 kbytes done.
sim1*> Tue Jul 26 13:23:30 GMT [wafl.vol.autoSize.fail:info]: Unable to grow volume 'auto_grow_test' to recover space: Volume cannot be grown beyond maximum growth limit
Tue Jul 26 13:24:00 GMT [monitor.globalStatus.nonCritical:warning]: /vol/auto_grow_test is full (using or reserving 100% of space and 9% of inodes, using 100% of reserve).

Below is the report in DFM where I see the autosize events:

C:\>dfm report view events-history 228
Severity    Event ID Event                                                           Triggered    Ack'ed By Ack'ed       Del
----------- -------- --------------------------------------------------------------- ------------ --------- ------------ ---
Error       325      Volume Full                                                     26 Jul 13:20
Information 324      Volume Autosized                                                26 Jul 13:20
Information 322      Volume Autosized                                                26 Jul 13:20
Information 319      Volume Autosized                                                26 Jul 13:18
Warning     317      Volume Almost Full                                              26 Jul 13:18
Information 315      Volume Autosized                                                26 Jul 13:18
Normal      313      No Schedule Conflict between snapshot and snapvault schedules   26 Jul 13:18
Normal      312      No Schedule Conflict between snapshot and SnapMirror schedules  26 Jul 13:18
Normal      311      Snapshots Age: Normal                                           26 Jul 13:18
Normal      310      Snapshots Count: Normal                                         26 Jul 13:18
Information 309      Volume Autosized                                                26 Jul 13:18
Information 307      Volume Autosized                                                26 Jul 13:18
Normal      304      Volume Space Reserve OK                                         26 Jul 13:17
Normal      303      Volume Next Snapshot Possible                                   26 Jul 13:17
Normal      302      Volume First Snapshot OK                                        26 Jul 13:17
Normal      301      Inodes Utilization Normal                                       26 Jul 13:17
Normal      300      Volume Space Normal                                             26 Jul 13:17
Normal      299      Scheduled Snapshots Enabled                                     26 Jul 13:17
Normal      298      Volume Online                                                   26 Jul 13:17
C:\>

Where 228 is the volume ID in DFM. Hope this helps.

Regards adai
Also check the output of the following command on the filer:

sim1> snmp
contact:
location: Lab-DC1
authtrap: 0
init: 1
traphosts:
        192.168.98.10 (192.168.98.10) <192.168.98.10>
community:
        ro public
sim1>

Make sure your DFM server is listed under traphosts and init is set to 1.
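If your DFM server is missing from that list, you can add it and turn on trap sending with the 7-mode snmp commands (replace the placeholder with your DFM server's hostname or IP):

  sim1> snmp traphost add <dfm-server-ip>
  sim1> snmp init 1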
Hi Scott, This behavior is expected, even though it is not consistent between the CLI and NMC. The NMC delete of a host was added in 4.0.1 by bug 255202: http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=255202. You are hitting bug 433740. If you would like consistent behavior, please add your case to this bug. Regards adai (Thanks to Kevin Ryan.)
Hi Scott, After you deleted the vFilers from NMC, are they still listed in the DFM UI or CLI? Does the output of dfm vfiler list still show them? Regards adai
Hi Xavier, The situation is still the same. Please follow my response dated Oct 15, 2009 12:15 AM to work with compression in Protection Manager. Regards adai
Hi Adrian, Here is what you need to do in Protection Manager. I have answered for the creator of this thread too; for your case, start directly at Step 2.

Step 1: Prevent PM's reaper from cleaning up any relationship. Set the following option as below before starting, and reset it back to Orphans once done.
dfm options set dpReaperCleanupMode=Never

Step 2: Relinquish the primary member and the secondary member using dfpm dataset relinquish. This marks the relationship as external, and PM will no longer manage (schedule jobs for) the relationship. Now remove the primary and the secondary from the dataset, either using the NMC Edit Dataset wizard or the dfpm dataset remove CLI. First remove the primary member, then remove the corresponding secondary member.

Step 3: Discover them as external relationships. You should now see the relationship in the External Relationships tab. If you don't see it, close NMC and log in again.

Step 4: Import into a new dataset. Create a new dataset with the required policy and schedule, or choose the dataset you want to import this relationship into. Use the Import wizard and import it.

Step 5: dfm options set dpReaperCleanupMode=Orphans

Points to take care of:
1. If an entire OSSV host was added as a primary member and is now moved to a new dataset, step 2 (relinquishing the primary member) needs to be done for each dir/mount path of the OSSV host. The same applies to a volume that was added as a primary member and is now moved to a new dataset.
2. After importing, the dynamic referencing of the OSSV host is lost, as we import each individual relationship. The same applies to volumes: you will now see individual qtrees as primary members instead of the volume.
3. So when a new dir/mount path is added to the OSSV host, the admin has to add it to the dataset manually. The same applies to new qtrees.
4. To restore from an old backup version, the user must go back to the old dataset, as the old backups are not moved over.

Note: When you relinquish the relationship, it may not show up in the "External Relationship Lag" box of the Dashboard view. However, if you go to Data -> External Relationships, it will be listed there. When you import the relationship into a new dataset, it will show an error status of "Baseline error". Simply run an on-demand backup job and it will clear this error.

Note: The backup job doesn't perform a re-baseline; it simply does a SnapVault update. Don't delete the old dataset even if it's empty. As adai stated, it has the backup history of the relationship before you moved it. So if you want to perform a restore from before the move, you need to restore from the old dataset. Once all the backups have expired from the old dataset, you can destroy it.

Regards adai
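P.S. A rough CLI-only sketch of the sequence above, with placeholder names; the exact argument forms of the relinquish/remove commands may vary slightly by DFM version, so check the dfpm CLI help before running them:

  dfm options set dpReaperCleanupMode=Never
  dfpm dataset relinquish <secondary-qtree>               (repeat per relationship / per OSSV path)
  dfpm dataset remove <old-dataset> <primary-member>
  dfpm dataset remove <old-dataset> <secondary-member>
  (create the new dataset in NMC, then import the external relationship via the Import wizard)
  dfm options set dpReaperCleanupMode=Orphans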
Mike, please use the same script that you referenced, or use Cygwin, as suggested by sinhaa, to run the one-liner in the same post you referenced. Gireesh, please don't re-invent the wheel; the script for ack and delete is already there. Regards adai
I used to do this for my test setups:

1. dfm service stop
2. Move or delete monitor.db and its log.
3. dfm services setup -l <use core license>
4. dfm service start

Regards adai
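P.S. A slightly more concrete version of that sequence on a Windows DFM server; the data subdirectory is an assumption based on a typical default install, and note this wipes all DFM history, so only do it on a throwaway test setup:

  dfm service stop
  cd <installationdirectory>\NTAPdfm\data      (assumed location of the monitor.db database files)
  move monitor.db monitor.db.bak
  move monitor.log monitor.log.bak
  dfm services setup -l <core license key>
  dfm service start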