Yes. Starting with OCUM 5.0, the passwords are stored encrypted, using encryption keys that are generated during install. Regards adai
Hi Ben, As Kevin said, it is the Host Package, not the Core Package, that requires the .NET Framework. Can you give me more details on what you plan to use the Host Package for? Also keep in mind that the Host Package requires a license to be bought for backup and restore of virtual machines, though no key is required during install. It is licensed per controller. Regards adai
Hi, Can you give me more information on the following?
- What is the version of OCUM that you are using?
- What is the OS on which OCUM is installed?
- What is the version and language (English, German, etc.) of the OS from which the browser is launched?
- What is the type and version of the browser?
- Can you get the output of the following CLI? (An illustrative example follows this post.)
  dfm options list | grep -i http (if your OCUM is running on Linux)
  dfm options list | findstr /i http (if your OCUM is running on Windows)
Regards adai
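As a minimal sketch of what that looks like on Linux, the option names and values below (httpEnabled, httpPort, httpsEnabled, httpsPort) are only illustrative defaults and may differ on your server:

# Run on the OCUM/DFM server; output values are illustrative, not authoritative
$ dfm options list | grep -i http
httpEnabled      Yes
httpPort         8080
httpsEnabled     Yes
httpsPort        8443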
Hi Stephen, You are hitting a known issue; this behaviour is expected in all versions other than 5.2RC1. It is fixed in version 5.2RC1 and later. Please find the bug report below. http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=608634 Regards adai
Hi Arun, I am not clear on your question. The schema for DFM is the same irrespective of 7-Mode or Cluster-Mode, as a volume is a volume in both 7-Mode and C-Mode. But there are new constructs like Vserver and LIF that are not exposed via these views, at least for C-Mode. Regards adai
Hi Chris, The CLI in the backend only calls the API. Run the CLI and tail the audit.log file to see how the APIs are used. Regards adai
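A minimal sketch of how to watch the audit log while running a CLI command, assuming the usual default install locations (the paths below are assumptions and may differ on your server):

# Linux (assuming the default install directory /opt/NTAPdfm)
tail -f /opt/NTAPdfm/log/audit.log

# Windows (assuming the default install directory), using PowerShell
Get-Content "C:\Program Files\NetApp\DataFabric Manager\DFM\log\audit.log" -Tail 50 -Wait

Then run the dfm/dfpm CLI in another session and watch the corresponding API calls appear in the log.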
Hi Saran, What is the version and mode of OCUM that you are running? If it is 7-Mode, you can customize the data collection. To do so, please refer to section 14, "Data Collection", page 69 of TR-4090, Performance Advisor Features and Diagnosis: OnCommand Unified Manager 5.0/5.1 (7-Mode): http://media.netapp.com/documents/tr-4090.pdf If it is clustered ONTAP, you cannot customize the data collection interval, retention, or counters. Regards adai
Hi Peter, With OCUM 5.2RC1 and later you may not need this, as there is a built-in CLI to purge the events.

[root@vmlnx ~]# dfm purge help
NAME
    purge -- purge events and data protection job events from the database.
SYNOPSIS
    dfm purge event [-S <Severity>] [-s] <purge-interval>
    dfm purge dpevent [-J <jobId>] [-s] <purge-interval>
DESCRIPTION
    The purge command purges the events and data protection job events from the database.
    -S <Severity>  Purge the events whose severity is less than or equal to <Severity>.
    -s             Show the details about the events without cleaning them up from the DB.
    -J <JobId>     Purge the dpevents whose JobId is less than or equal to <JobId>.

[root@vmlnx ~]# dfm purge event help
NAME
    event -- purge events that are older than interval
SYNOPSIS
    dfm purge event [-S <Severity>] [-s] <purge-interval>
DESCRIPTION
    This command is used to purge history events that are older than the purge interval.
    -S <Severity>  Purge only the events whose severity is equal to or less than <Severity>.
    -s             Show the details of the events without cleaning them up from the DB.

Regards adai
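As a sketch of how this is typically used, based on the help text above. The severity value and the interval format (for example 26w for 26 weeks) are assumptions, so check dfm purge event help on your server for the exact syntax:

# Dry run: show which events older than 26 weeks would be purged
dfm purge event -s 26w

# Purge events of severity Warning or lower that are older than 26 weeks
dfm purge event -S Warning 26w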
Hi Ben, To really understand what is going on, can you give us the output of dfm diag and upload it, so we can understand how this server is being used? Also, can you check whether multiple virtual CPUs are causing any performance issues? http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1005362 You said 5 DFM servers; are they all talking to the same set of controllers? If so, you may have to tone down some of the monitoring intervals. Regards adai
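A minimal way to capture that output to a file (the file name is just a placeholder):

# Run on the DFM/OCUM server and attach the resulting file
dfm diag > dfm-diag.txt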
Hi Ben, We are sorry for the inconvenience, but as far as I know the OCUM installation doesn't require any specific .NET package. Is it a standard Windows 2008 R2? Can you give us the output of system info? By the way, please install 5.0.2P2 or 5.2RC1, the latest GA and RC releases respectively, and not the old FCS 5.0 version. Regards adai
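Assuming "system info" here refers to the Windows systeminfo utility (an assumption on my part), the output can be captured like this:

:: Run from a Command Prompt on the OCUM server and attach the file
systeminfo > systeminfo.txt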
Hi Markus, What is the version of OCUM that you are running? Run the dfpm relationship list command and look for redundant relationships, then use the Data ONTAP CLI to do a snapvault stop for the appropriate qtrees. The BPG has a topic that explains in detail how to retire (purge) a relationship out of a dataset: "5.12 GRACEFULLY RETIRING RELATIONSHIPS FROM ONCOMMAND PROTECTION TOOL" https://kb.netapp.com/support/index?page=content&id=1013426 Regards adai
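A minimal sketch of the two steps; the qtree path below is only a placeholder:

# On the DFM/OCUM server: list relationships and identify the redundant ones
dfpm relationship list

# On the SnapVault secondary controller: stop the redundant relationship
# (the volume/qtree path is a placeholder)
snapvault stop -f /vol/sv_secondary_vol/qtree1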
Hi Chris, The diskRole field gives the details of the disk role, such as data, parity, or spare. Unfortunately, an unowned disk is never discovered by OCUM, or for that matter by System Manager either. Only owned disks are reported in OCUM and SM, and an aggregate can only be created using owned disks. Regards adai
Hi Satish, Thanks for the details. Let me give you an overview of how you should do this upgrade.

You are currently running version 3.7.1, which is close to 5 years old. After that we have made the following versions: 3.8, 4.0, 5.0, 5.1, 5.2. The current release is 5.2RC1, which is soon going to become GA. I am writing this on the assumption that you will upgrade to the latest version, 5.2RC1.

The upgrade plan would be: 3.7.1 > 4.0.2D12 > 5.2RC1. This upgrade path is supported and seamless, but you should know and consider the following:
- Version 3.7.1 was running Sybase version 9.0.2.3396.
- In version 4.0 there are 2 major changes: the Sybase version is 10.0.1.3831, and there is a major change in the way Performance Advisor data is written/stored. Since you are upgrading, all the PA flat files need to be rewritten in the new format, which takes a considerable amount of time depending on the amount of PA data you have (we have seen PA flat file upgrades take up to 26 hours when the data was close to 250GB). The rule of thumb is that it takes approximately 1 hour for each GB of PA data. The upgrade also needs free space of at least 40% of the current size of the perf directory in order to complete without any issue.
- In 5.0 there is the introduction of editions of DFM, called Express and Standard.
- In 5.1 there is a split of modes in DFM, called Cluster-Mode and 7-Mode.
- In 5.2 there is a purge of the DFM database for the following types of data, which may again take a considerable amount of time:
  - Deletion of mark-deleted objects and their history.
  - Purge of data protection job progress events older than the value specified in dfbm options list jobpurgeolderthan. By default the value is 90 days.
  - Purge of events older than the value specified in dfm options list eventpurgeinterval. By default the value is 180 days.

Also, before upgrading, please find a 64-bit server with a supported OS by referring to the IMT. The current memory footprint is not sufficient; you would need at least 8GB of memory, and my recommendation would be 16GB.

Some of your existing monitoring intervals are more frequent than the default ones. Please reset them back to default. They are listed below:
ccTimestamp 8 hours 4 hours 13 Jun 08:03
cfTimestamp 1 minute 5 minutes 13 Jun 16:03 Normal 13 Jun 16:02
diskTimestamp 1.25 hours 4 hours 13 Jun 15:36 Normal 13 Jun 14:48
ifTimestamp 1 minute 15 minutes 13 Jun 16:03 Normal 13 Jun 16:02
licenseTimestamp 8 hours 4 hours 13 Jun 13:31 Normal 13 Jun 08:03
qtreeTimestamp 30 minutes 8 hours 13 Jun 15:33
userQuotaTimestamp 15 minutes 1 day 13 Jun 16:03 Normal 13 Jun 15:48
statusTimestamp 1 minute 10 minutes 13 Jun 16:03 Normal 13 Jun 16:02
sysInfoTimestamp 30 minutes 1 hour 13 Jun 15:57 Normal 13 Jun 15:33
svTimestamp 30 minutes 30 minutes 13 Jun 15:33
svMonTimestamp 8 hours 8 hours 13 Jun 08:03
xmlQtreeTimestamp 30 minutes 8 hours 13 Jun 16:03 Normal 13 Jun 15:33
vFilerTimestamp 1 minute 1 hour 13 Jun 16:03 Normal 13 Jun 16:02

Please take a backup using the CLI dfm backup create before you upgrade and keep it safe. I would also recommend you do a dry run to know exactly how much time the entire upgrade will take before you go for the live one (a sketch of the relevant commands follows this post):
1. Stand up a new 64-bit server and install 4.0.2D12.
2. Stop all services and start only the sql service.
3. Restore the backup taken from 3.7.1 using the dfm backup restore CLI, and time the entire upgrade process. This is where the PA flat files will be rewritten in the new format.
4. Create another backup in 4.0.2D12 after completion of step 3.
5. Now download 5.1 and upgrade on the same server; here you will make the following choices: Edition as Standard, Mode as 7-Mode.
6. After completion of step 5, install the dfmpurge tool (https://communities.netapp.com/videos/3134) and run it in report mode to estimate how much time the upgrade to 5.2 will take, as the 5.2 upgrade involves the following (it is recommended to reduce the values of the job purge and event purge options before upgrading to 5.2, so that the maximum amount of stale data is cleaned and the database is de-fragmented):
   - Database validation, if it is an upgrade (not in the case of a restore, as database backups are verified and validated by default); it takes close to 10 minutes for every GB of database size.
   - Purge of mark-deleted objects.
   - Purge of data protection job history.
   - Purge of all events older than the event purge interval.
7. Based on the dry-run estimate, plan a downtime and upgrade as per the plan of 3.7.1 to 4.0.2D12 to 5.2RC1.
Regards adai
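A minimal sketch of the backup and restore commands used in the dry run above; the backup file name is only a placeholder, so check the name produced by dfm backup create on your server:

# On the existing 3.7.1 server: create a backup and keep it safe
dfm backup create

# On the new 64-bit server, after installing 4.0.2D12:
dfm service stop
dfm service start sql

# Restore the 3.7.1 backup (file name is a placeholder) and time the run
dfm backup restore dfm-backup-3-7-1.ndb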
Hi, I suggest you post this in the Data ONTAP community instead of here. This community is dedicated to Operations Manager, aka DFM, aka OCUM. Regards adai
Hi Neil, I am happy that it is really helping you and that you find it useful. Let us know if you have any ideas or improvements needed in the script. Regards adai
Hi Muhammad & Niel, The output is stored in the following location if it is a default installation: C:\Program Files\NetApp\DataFabric Manager\DFM\script-plugins\PM_Extractor\ScriptOutput.csv To generalise, on both Windows and Linux you can find the output in the following location: <installdir>/DFM/script-plugins/PM_Extractor/ScriptOutput.csv Regards adai
Hi Marc, I don't see any correlation between this failure and the upgrade. Can you also paste the output of the job detail CLI for this job ID? dfpm job detail 24050 The error is basically coming from the storage system, which Protection Manager is relaying back. Regards adai
Isn't it for such reasons that we have Operations Manager? Given that it also comes along with your controller, what is the real reason for not using Ops Mgr to get this information, rather than trying to get it in a very complicated way? Is there a specific reason that I am missing? Regards adai
Hi Steve, The new BIRT report in the OnCommand Console gives what you are looking for. One thing to note is that it lists just the filer/vFiler under the column named Storage Server. The old reporting infrastructure does not allow reporting both filer and vFiler volumes in one report. Attached is a sample output for your reference. Also, please take a look at this course on how to create and customize BIRT reports: Working with Reports in OnCommand Unified Manager Regards adai
Hi, Have you added your resource pool to your storage service, both to the primary node and the mirror node? As per the error, can you make sure your NDMP credentials are set for both the source and destination controllers? You can do this by running the Diagnose wizard from NMC > Manage Data > Hosts > Storage Systems. By the way, what version of the OCUM server are you running? Is this a normal dataset, an application dataset, or a virtual dataset? Regards adai
Hi Richard, Is there any igroup or WWPN mapping done for the LUNs in these volumes? What is the exact error message you get when you run this migration wizard? I recommend you open a case with NetApp if you still face this problem. Regards adai