Please use the CLI to create or modify the alarm with the script path. This is a known issue; please open a case with the NetApp Support Center and have it added to bug 666431. Regards, Adai
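As a rough sketch of the CLI workaround, the invocation might look like the lines below. The flag names and arguments are assumptions, not verified syntax, and the event name and script path are placeholders; confirm the real options with `dfm alarm help` on your server before running anything.

```shell
# ASSUMED syntax -- verify with `dfm alarm help` on your dfm server.
# Create an alarm that runs a script when the named event fires
# (event name and script path below are placeholders):
dfm alarm create -E volume-full -s /opt/scripts/notify.sh

# Modify an existing alarm (the trailing alarm ID is a placeholder)
# to point at a new script path:
dfm alarm modify -s /opt/scripts/notify_v2.sh 1
```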
Hi Dave, Are you saying the dfm host diag CLI is failing? The description of this option explains what the warning means, why it is raised, and how it can be resolved:

processHostIP: The value of this option determines how dfm handles a mismatch between the IP address of a host stored in the dfm database and the one returned by DNS. Possible values are off, warn (the default), update, and error. Update can be used only for OSSV hosts.

- If the value is off, dfm ignores the IP address mismatch.
- If the value is warn, a warning is generated about the IP address mismatch, should one exist.
- If the value is update, dfm updates its database with the IP address returned by DNS.
- If the value is error, dfm throws an error reporting the IP address mismatch, should one exist.

Hope this helps and solves your problem. Regards, Adai
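To make the option concrete, the commands below show how checking and changing processHostIP might look from the dfm CLI. This is a hedged sketch: the exact `dfm options` syntax should be confirmed with `dfm options help` on your installation, and `update` is only valid for OSSV hosts as noted above.

```shell
# ASSUMED syntax -- confirm with `dfm options help` on your server.
# Show the current setting (the default is "warn"):
dfm options list processHostIP

# For OSSV hosts, let dfm update its database with the
# IP address returned by DNS instead of just warning:
dfm options set processHostIP=update
```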
Hi Craig, I am a little confused and not clear on your question. The Windows CIFS client reports exactly what your df -h shows, as it doesn't know about dedupe space savings. Could you clearly state what difference you are looking for? Regards, Adai
Hi Martin, I have already explained how to do that. Please use ZEDI and the API eventclass-add-custom to create a new custom event. The ZEDI tool has the API, the documentation, and the means to run it against your dfm server. https://communities.netapp.com/community/interfaces_and_tools/developer/zedi Regards, Adai
Hi Martin, You will have to make an API call against your dfm server. Use the NMSDK; it has the API, the documentation, and also the tool to run it. Once the API is run, an entry is made in the embedded Sybase database. Regards, Adai
Hi Martin, As Beverly pointed out, there are two ways of creating custom events. One way is to create them using the NMSDK API eventclass-add-custom; the other is to use the script plug-in XML definition. The above two links should help you in creating the custom events. Please note that you will have to write the logic and trigger the custom event using the dfm event generate CLI; they won't be generated automatically. Regards, Adai
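For the triggering step, the line below is a sketch of what a dfm event generate call might look like. The argument order is an assumption, and both the event-class name and the source object are placeholders; check `dfm event generate help` on your server for the real syntax.

```shell
# ASSUMED syntax -- confirm with `dfm event generate help`.
# "my-custom-event" is a placeholder for a class created earlier with
# eventclass-add-custom; the second argument names the object the
# event is raised against (also a placeholder here):
dfm event generate my-custom-event filer1:/vol1
```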
Hi Craig, I think the reason you aren't getting qtree-full alerts is that your quota entry has no limits. By default the thresholds for qtree full and nearly full are 90% and 80% respectively, and qtree space-utilization alerts are only triggered if quotas are set on the qtree. Though in your case quotas are set, since there are no limits applied in the quota file, the 80% and 90% thresholds can't be applied; I think that's why you don't get your alert. Can you set a limit and run dfm host discover to see if alerts are generated for the qtree? Also, as you said, in the case of a qtree quota it's the amount of space written and doesn't include dedupe space savings. So it's the effective space used and not the actual (which would include dedupe space savings). Regards, Adai
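For reference, a tree quota entry with an actual disk limit might look like the fragment below (the volume/qtree names and the 100G limit are placeholders, and the comment header just labels the standard columns). After editing the quotas file on the controller, the quotas need to be re-read (e.g. with quota resize or quota off/on on that volume) before the limit takes effect.

```
## target           type   disk   files  thold  sdisk  sfile
/vol/vol1/qtree1    tree   100G
```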
Hi, OCUM 5.1 doesn't report an aggr as a Flash Pool even if it is one. The next version, OCUM 5.2 (currently available in beta via the OnCommand Unified Manager 5.2 Beta Program), reports an aggr's Flash Pool attribute, but even that doesn't collect any stats related to Flash Pools. BTW, what stats are you looking for? You should probably try Performance Advisor if you know the counter name. PA allows you to configure the collection frequency, the retention time, and which counters to collect. Many people today use PA to monitor PAM card stats. Regards, Adai
Hi Guys, See if these reports in PM help you.

[root@dfm-rhel ~]# dfm report | grep -i transfer
dp-transfer-backup-individual   DP Transfer Backup, Individual
dp-transfer-backup-daily        DP Transfer Backup, Daily
dp-transfer-backup-weekly       DP Transfer Backup, Weekly
dp-transfer-backup-monthly      DP Transfer Backup, Monthly
dp-transfer-backup-quarterly    DP Transfer Backup, Quarterly
dp-transfer-backup-yearly       DP Transfer Backup, Yearly
dp-transfer-mirror-individual   DP Transfer Mirror, Individual
dp-transfer-mirror-daily        DP Transfer Mirror, Daily
dp-transfer-mirror-weekly       DP Transfer Mirror, Weekly
dp-transfer-mirror-monthly      DP Transfer Mirror, Monthly
dp-transfer-mirror-quarterly    DP Transfer Mirror, Quarterly
dp-transfer-mirror-yearly       DP Transfer Mirror, Yearly
dp-transfer-dataset-daily       DP Transfer Dataset, Daily
dp-transfer-dataset-weekly      DP Transfer Dataset, Weekly
dp-transfer-dataset-monthly     DP Transfer Dataset, Monthly
dp-transfer-dataset-quarterly   DP Transfer Dataset, Quarterly
dp-transfer-dataset-yearly      DP Transfer Dataset, Yearly
[root@dfm-rhel ~]#

These are the fields of the report:

[root@dfm-rhel ~]# dfm report view dp-transfer-backup-daily help
DP Transfer Backup, Daily Report (dp-transfer-backup-daily)
DP Transfer Backup, Daily
Columns:
objId           Relationship ID
srcName         Source Name
dstName         Destination Name
timestamp       Timestamp
transKBs        Bytes Transferred
transRate       Transfer Rate /sec
transferCount   Transfers
failureCount    Transfer Failures
dataGrowthKBs   Data Growth
Default sort order is +objId.
[root@dfm-rhel ~]#

BTW, in the case of PM these are rolled up into the protection status and conformance status instead of success or failure. Regards, Adai
Hi Mark, Yes, because they are still competing actions that are possible via VSC and HP. I am filing a bug for this to get it updated. Regards, Adai
Hi Mark, I have already passed your feedback on to the engineering teams. Here is the response I got back from them. I am not saying your expectation is wrong; I am filing an RFE for the same so it gets addressed in a future release.

"This was intended as the preferred behavior for when DFM can no longer find once-discovered objects on the filer. Our mantra has always been not to destroy (data or relationships). There are many reasons why an object such as a qtree or volume might appear deleted to DFM (delayed monitoring, busy filers, volumes going offline temporarily, and others). We did NOT want to assume that the object was deleted and destroy it along with its existing relationship when it could very well have been alive and just not visible to DFM (we have had angry customers in the past who suddenly found their objects destroyed). The assumption behind the existing behavior is that if users want to delete objects and their relationships, they will do it directly from the filer, which gives them much better control. Even when primary members are removed from a dataset, we make the relationship external and do not destroy it."

Regards, Adai
Hi All, Firstly, running the dfm database (Sybase) on a NAS path is not supported. You can move your created .ndb files to an NFS export path or a CIFS share, but not the actual monitordb.db and monitordb.log. Use the dfm datastore setup CLI to move the db from one location to another; it takes care of moving everything required. Regards, Adai
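A rough sketch of the move, assuming `dfm datastore setup` takes the target directory as its argument (confirm with `dfm datastore setup help`; depending on the version it may stop and start the services for you). The target path is a placeholder and should be local disk, not a NAS path:

```shell
# ASSUMED syntax -- verify with `dfm datastore setup help`.
dfm service stop                 # quiesce dfm before moving the db
dfm datastore setup /opt/dfm/data   # placeholder local (non-NAS) path
dfm service start
```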
Hi Mark, To answer your question, to my knowledge it's not fixed in OCUM 5.2. The current way to handle it is not to use dynamic referencing if you keep deleting qtrees. I strongly recommend you raise a case for this behavior so that it can be fixed in an upcoming release. Regards, Adai
Hi Markus, If you enable the generation of core files for all users, the "rcore infinity" error message should go away. The link below should help you do that. http://www.akadia.com/services/ora_enable_core.html
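In short, on Linux that comes down to raising the core-file size limit. A minimal sketch (the limits.conf line shown follows the usual convention for making it apply to all users; adjust per your distro):

```shell
# Raise the core-file size limit for the current shell:
ulimit -c unlimited
ulimit -c    # prints "unlimited"

# To persist it for all users across logins, a line like this goes
# in /etc/security/limits.conf:
# *    soft    core    unlimited
```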