Nothing in that regard has changed in OnCommand 5.0. What you can do is select the object type you would like to ignore, such as a volume, qtree or aggregate, and click Delete. If it is the entire filer, you can delete the filer and stop monitoring it, so that no events related to the filer or its child objects are generated. You can either use the Web UI or run dfm volume delete <volume name or id>; the same applies to filer, vfiler, aggr, qtree and lun. Deleting an object this way does not lose the history that DFM has accumulated for it over time. You can get it back by re-adding the object with dfm volume add <volume name or id>, and again the same applies to filer, vfiler, aggr, qtree and lun. Hope this helps. Regards adai
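For example, a minimal sketch of the CLI workflow, using a hypothetical volume name (the same pattern works for filer, vfiler, aggr, qtree and lun):
$ dfm volume list | grep -i vol_archive     (find the name or id of the volume)
$ dfm volume delete vol_archive             (stop monitoring it; the accumulated history stays in the DFM database)
$ dfm volume add vol_archive                (re-add it later with its history intact)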
Sounds bizarre; generally dfm host diag fixes such problems, but I am not sure why NDMP alone takes so long. Can you run a diagnose in the NMC for that host and let us know?
This is a known issue. It happens because, even though the volume was deleted from the filer, the DFM database still has it. You can confirm this with the following CLI: dfm volume list -a | grep -i <volume name> If you remove the entry from the database, you can reuse the same name. There is also a bug for this; please add your customer case to bug # 448480. Regards adai
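As an illustration, assuming the deleted volume was called projvol (a hypothetical name):
$ dfm volume list -a | grep -i projvol
The -a flag lists volumes the DFM database still knows about, including ones marked deleted, which is why the name can show up here even after the volume is gone from the filer. The public report for the bug above describes how to clear such entries so the name can be reused.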
What kind of performance degradation is the customer seeing? Is the backup taking a long time? Can you get the output of dfm diag -a? Can you explain more about the degradation? Regards adai
I have always wanted one place from which I could access all the TRs related to DFM. I finally ended up creating a document for myself and thought it would be useful to others too, which is why I am posting it on communities.
This TR describes the best practices to follow when implementing, architecting and using DFM and its suite of products. The updated version for OnCommand 5.0 is available as a KB article at the link below.
How to deploy OnCommand Unified Manager – Best Practices Guide
The TR below is as of DFM 4.0:
TR 3710 - Operations Manager, Provisioning Manager, and Protection Manager Best Practices Guide
=================================================================================================================================
This TR provides the information that Operations Manager, Provisioning Manager, and Protection Manager administrators need to choose the correct system for hosting the DataFabric® Manager server, and to split the products based on functionality when reaching sizing limits. The updated version for OnCommand Unified Manager 5.x is available as a KB article at the link below.
OnCommand 5.0 Sizing Guide: How to select the correct system for hosting the DataFabric Manager server
TR 3440 - Operations Manager and Provisioning and Protection Manager Sizing Guide
=================================================================================================================================
To manage storage capacity efficiently, storage administrators need tools to view current utilization of resources, track changes in utilization over time, trend and forecast future utilization, charge users for the capacity they use, and alert administrators so they can identify and resolve imminent problems. This document describes the various tools provided by NetApp Operations Manager for capacity management.
Storage Capacity Management using OnCommand Operations Manager
=================================================================================================================================
This TR gives step-by-step details on how to configure high availability for DFM on Windows using Microsoft Cluster Server (MSCS) and on Linux using Veritas Cluster Server (VCS).
TR 3767 - High-Availability Support for DataFabric Manager Server
=================================================================================================================================
This TR describes how users can access the DFM database views and export both DFM and Performance Advisor data using BI tools such as Crystal Reports, BIRT and Cognos, or any JDBC or ODBC client.
TR 3690 - Access to DataFabric Manager and Performance Advisor Data Using Database Access and Data Export
=================================================================================================================================
The Performance Advisor document referenced in this article helps in understanding how to use Performance Advisor to monitor and generate alerts when there is performance degradation, and in identifying the key counters that must be monitored and the thresholds that must be set initially. Once familiar with Performance Advisor and its capabilities for troubleshooting performance issues, the user can fine-tune the threshold values to suit the requirements. However, a better way of finding the correct threshold values for a workload is to use the baselining feature in Performance Advisor; a methodology for doing this is also discussed in the document.
Performance Advisor Default Performance Thresholds for Application-Specific Workloads
=================================================================================================================================
This TR describes how to optimize the storage space used to store performance information.
TR 3751 - Managing Performance Advisor Data
=================================================================================================================================
This document outlines current challenges with large deployments of the DataFabric Manager server and proposes possible alternatives to single instances of DFM that become unresponsive or crash. DFM deployments can consist of one to many parts: Operations Manager, Performance Advisor, Provisioning Manager and Protection Manager.
Distributed DataFabric Manager Server Strategy
=================================================================================================================================
This TR explains how to set up the Disaster Recovery Support for DataFabric® Manager (DFM) Data feature without using Protection Manager.
TR 3655 - Disaster Recovery Support for DataFabric Manager Data Using SnapDrive
=================================================================================================================================
This TR helps storage administrators follow a simple and efficient approach to provisioning storage, and to managing provisioned storage, in SAN deployments on NetApp® storage using NetApp Provisioning Manager.
TR 3729 - Simplified SAN Provisioning and Improved Space Utilization Using NetApp Provisioning Manager
=================================================================================================================================
In this document we show how to provision storage using policies through our UI, and then show the same operations performed using our API. We also address how to automate provisioning of efficient multi-tenanted storage via an orchestration tool, which is one of the critical needs for cloud enablement.
Storage Provisioning Integration with Orchestration Software.docx
=================================================================================================================================
This technical report goes into detail on Performance Advisor features and diagnosis using OnCommand Unified Manager 5.0; it applies equally to OnCommand Unified Manager 5.1 operating in 7-Mode, since there are no changes to Performance Advisor functionality in OC UM 5.1 for 7-Mode. Performance Advisor provides an easy-to-use interface and the ability to set performance thresholds and alerts on the key performance metrics. The guide documents regular, routine storage performance monitoring and troubleshooting methodologies using Performance Advisor that can be used to track performance changes in a storage system and to take corrective action before changes affect end users.
TR 4090 - Performance Advisor Features and Diagnosis: OnCommand Unified Manager 5.0/5.1 Operating in 7-Mode
=================================================================================================================================
This technical report briefly explains the integration approaches for monitoring NetApp® storage using HP OpenView software. It primarily addresses the discovery and monitoring features provided by HP OpenView and how to integrate them with NetApp storage. Customers who have a DFM server/OnCommand can enable trap forwarding and forward SNMP traps from their storage systems to HP OpenView.
TR 3688 - NetApp Storage Monitoring Using HP OpenView
Regards adai
Hi, You are hitting the following bug: http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=508580 The public report has a workaround describing how to fix this. Regards adai
That's not a requirement for a normal dataset in Protection Manager. The specific cases of SME and SQL exist because non-disruptive LUN restore works only for qtrees, not volumes. When an entire volume is added to a dataset, PM replicates every qtree inside the volume, including the data that lives in the volume but not in any qtree. So if you have a volume vol1 with one qtree qt1, PM will create two qtree SnapVault or SnapMirror relationships, as follows:
/vol/vol1/-
/vol/vol1/qt1
Regards adai
Hi Earls,
Is there a way to set a default login/password for controllers in Operations Manager? The short answer is No.
This is annoying in shops where the customer has tens or hundreds of controllers that are being added to a new Operations Manager server. It appears you have to specify the login/password for each controller individually. But you don't have to specify it individually. Have you tried the "Storage Systems with empty credentials on the DataFabric Manager server" page? You can select more than one controller at a time and set the credentials. Similarly, this is available for vfilers too. Below is the link to the same page:
http://<dfm server name/IP>:<8080 or 8443>/dfm/edit/passwords?type=Filer
Replace the angle brackets with your DFM server information.
Regards adai
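If you prefer the CLI, a rough per-controller equivalent (host name and credentials below are only placeholders; you can loop over the output of dfm host list to cover many controllers) would look like:
$ dfm host set filer01 hostLogin=root hostPassword=YourPassword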
Hi Marlon,
You probably already know this: cpu_busy is collected in PA by default every 1 minute. The threshold interval prevents alerts on a spike; an event is generated only if the value stays past the threshold for the whole interval specified. So as per your threshold: alert when cpu_busy falls below 10% and stays there for 1 hour (i.e. for 60 samples at the default collection interval). What happens is that as soon as the value falls below 10% for the first time, a counter is started to see whether the value stays at or below 10% for the threshold interval specified. If even one sample in between rises above 10%, the counter is stopped. When the value falls below 10% again, the counter restarts and counts for 60 minutes; if the value stays below 10% for that whole period, an event is generated.
This sets an event status on the object on which the threshold is set, in this case the CPU. Until the status of that object changes (i.e. from error back to normal when cpu_busy rises above 10%), a new cpu_busy event will not be generated every 60 minutes. A new event is generated only after a normal event is raised, which resets the event status of the object; if the value then falls below 10% again, one more event is generated, otherwise not.
If your need is to see whether the filer has been idle for 8 hours, it is better to set the threshold interval to 8h instead of 60m.
Another thing you can do is run the following report to see how long the threshold was violated:
[root@oncommand ~]# dfm report view storage-system-performance-summary
Object ID  Type        Status    Storage System         Model      CPU Busy (%)  Total Ops/Sec  Net Throughput (MB/Sec)  Disk Throughput (KB/Sec)  Perf Threshold Violation Count  Perf Threshold Violation Period (Sec)
---------  ----------  --------  ---------------------  ---------  ------------  -------------  -----------------------  ------------------------  ------------------------------  -------------------------------------
91         Controller  Error     fas-sim-1.localdomain  Simulator  2.47          0.00           0.00                     85.73                     1                               900
90         Controller  Error     fas-sim-2.localdomain  Simulator  1.27          0.00           0.00                     66.03
92         Controller  Critical  fas-sim-3.localdomain  Simulator
[root@oncommand ~]#
You can also set repeat notification, which will keep sending you details; the event id stays the same but the cpu_busy value in the condition may differ. Say at the first repeat notification it was 8% and at the next it was 6%: the value is reflected in the condition, but the event id and the source of the event remain the same.
Hope this helps.
Regards adai
Hi Abhishek, Marlon is using Performance Advisor, whereas you are talking about Operations Manager CPU monitoring, which by default runs every 5 minutes. Regards adai
Hi Daron,
Can you have more than one DFM/Ops Mgr server manage/have access to a single storage system? I have one DFM 3.8 server and one DFM 4.0.2 server installed and want both to manage a single storage system if possible.
If it is just monitoring/alerting/performance management, then yes. But you load the storage system twice with monitoring, and you will also get twice the number of alerts such as volume full, quota full, etc. BTW, why do you want to do it? Is there a reason behind it? If it is active management such as configuration management, Protection Manager or Provisioning Manager, then I would not.
Can you input multiple servers in the Ops Mgr Access field in FilerView of the single storage system?
I doubt it allows that. I thought it is always one.
Regards adai
Hi Francois,
You are hitting the following bug; please raise a support case with NGS and get it fixed: Bug Id 489801. Alternatively, you can fix the issue yourself. The details are given at the following link, pasted here for everyone's convenience.
http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=508580
1. Set the deduplication limit for FAS3270 with ONTAP 8.0.1 in the following way:
$dfpm reslimit create "8.0.1" "FAS3270"
Created new resource limit (215).
2. Set the maximum deduplication size for FAS3270 and ONTAP version 8.0.1 (in GB):
$dfpm reslimit set 215 maxDedupeSizeInGB=16384
3. Verify the deduplication limit set for FAS3270 with ONTAP 8.0.1:
$dfpm reslimit get 215
Id                                                                                215
ONTAP Version                                                                     8.0.1
Product Model                                                                     FAS3270
Availability                                                                      None
Maximum number of FlexVols per storage controller
Maximum CPU utilization threshold of storage controller
Maximum Disk utilization threshold of an aggregate
Maximum Deduplication size of a storage system model and ONTAP version (in GB)    16384
4. Done. You are ready to go.
Regards adai
That's true: when importing an external relationship, we only check that the source and destination volume languages are the same, and nothing else with respect to size. That's why you are able to do it. The problem with this approach is that when the primary volumes are full, your backup volume will have no space to hold the data itself, nor space to accommodate the retention specification. Regards adai
Hi Ted,
Please find my responses inline.
We are working on upgrading our DFM infrastructure to v4.0.2. We want to ensure extremely fast response time for DFM and we would like to integrate into our existing reporting systems. Have some questions:
1. It appears DFM 4.0.2 is still 32-bit. Is there any advantage to giving the system more than 4GB of RAM if it is only a 32-bit database engine?
It is true that DFM 4.0.2 is a 32-bit application, but it runs multiple services, each of which can consume up to 1.8 GB:
[root@lnx ~]# dfm service list
sql: started
http: started
eventd: started
monitor: started
scheduler: started
server: started
watchdog: started
[root@lnx ~]#
It is generally the sql, server, monitor and eventd services that tend to consume up to 1.8 GB each. So have at least 12 GB for DFM and another 4 GB for the operating system. Also, the next release of DFM, called OnCommand 5.0, is a pure 64-bit application and scales with the provided hardware (both RAM and cores/CPUs).
2. What is the maximum value you can set for the dfm dbCacheSize option that would help with performance? The manual recommends 1024, but if you have 16GB of RAM, is there any performance value in setting this higher?
With 16 GB you can leave it at the default, as we take half of the 1.8 GB for dbCacheSize; that recommendation matters when you have less RAM. Either set it at 1 GB or leave it at the default.
3. We are running on RHEL 5.6 64-bit. Are there any kernel or cache settings that can help increase DFM performance?
If you are planning to run Protection/Provisioning Manager, please increase the number of semaphore arrays from 128 to 1024. On Red Hat Linux, the default limit of 128 semaphore arrays can be increased to 1024 by adding the line below to /etc/sysctl.conf:
kernel.sem=250 32000 32 1024
where
250 - max semaphores per array
32000 - max semaphores system wide
32 - max number of operations per semop call
1024 - number of semaphore arrays (this is the value that needs to be modified)
4. How can we connect to the DFM database to retrieve the information into our reporting systems? Is there an ODBC connector available? Is the data structure published?
Yes, the database schema is documented in your installation: open the Operations Manager Web UI -> Control Center -> Help -> General Help -> Database Schema. Below is the link to the doc on how to access the db views using third-party tools:
http://media.netapp.com/documents/tr-3690.pdf
5. Can we use snapshots to backup and replicate the database? What command can we run to quiesce the database for the snap?
Yes, DFM has support for DR, and all of this can be done. Below is the link for the same:
http://now.netapp.com/NOW/knowledge/docs/DFM_win/rel402/html/software/opsmgr/GUID-6D27D5B4-A75C-419F-9DC5-792C47B93023.html
Regards adai
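As a small sketch of applying the semaphore change from question 3 without a reboot (standard RHEL tooling; verify the values for your environment):
[root@lnx ~]# echo "kernel.sem=250 32000 32 1024" >> /etc/sysctl.conf
[root@lnx ~]# sysctl -p
[root@lnx ~]# cat /proc/sys/kernel/sem
250     32000   32      1024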
Discovery of snapshots (both newly created snapshots and deletion of already-discovered ones) is handled by the snapshot monitor. Any space-related information comes from the diskfreespace monitor. Discovery of aggregates, volumes and qtrees is done by the fs monitor, vfilers by the vfiler monitor, and LUNs by the lun monitor. Disk utilisation for all of these is gathered by the df monitor, and qtree utilisation by the quota monitor (if quotas are set). Regards adai
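If you want to see or change how often these monitors run, the DFM CLI exposes them as options; the grep pattern below is only a guess at the naming, and the exact option names vary by release:
[root@lnx ~]# dfm option list | grep -i interval
An individual interval can then be changed with dfm option set <optionName>=<value>.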
PM by default checks for 10 GB of free space in the destination aggregate for each source member in the OSSV dataset. So if you have three directories (or drives, in the case of Windows) as members of a dataset, such as /opt, /root and /usr, then PM checks for 30 GB of free space. Once it finds that much space, it creates a destination volume the size of the aggregate, with the volume guarantee set to none. If you have a secondary provisioning policy in place with dedupe enabled, then the size of the volume will be the maximum size supported for a dedupe volume on the respective ONTAP version and FAS/N-series model. The size of the secondary volume can be controlled by setting the following option: dfm option set pmAutomaticSecondaryVolMaxSizeMb=<value in MB>. This limits the size of all secondary volumes provisioned by PM. What you may do is set this option only during the OSSV secondary volume creation and reset it afterwards with dfm option set pmAutomaticSecondaryVolMaxSizeMb=0, as sketched below. Regards adai
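For example, a rough sketch of that workflow (the 500 GB cap is only an illustrative value):
[root@lnx ~]# dfm option set pmAutomaticSecondaryVolMaxSizeMb=512000    (cap secondary volumes PM creates at ~500 GB)
... add the OSSV members to the dataset and let PM provision the secondary volumes ...
[root@lnx ~]# dfm option set pmAutomaticSecondaryVolMaxSizeMb=0         (reset afterwards, as described above)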
Yes, you are correct: sizing is not done with many SnapManager datasets. BTW, how many SnapManagers do you have talking to your PM? Regards adai