Hi Magnus, It looks like you are hitting a known issue, bug 379483. Please raise a case with NetApp Global Support against the bug to get this resolved. Regards adai
Hi Davis, The value shown in OnCommand for CPU utilization is the value returned by SNMP, which includes all processors. If you would like the CPU utilization of individual processors, please take a look at Performance Advisor, which gives the breakdown at the individual-processor level. Regards adai
What is the version of ONTAP against which this custom view is running? Is the controller licensed for FCP and iSCSI? Can you get the output of the following CLI command to verify the same?

dfm report view storage-systems-protocols

Regards adai
Hi, Please let us know the next time you face this issue. There is also an option in OnCommand to keep the audit.log forever, without rotating it:

[root@ ~]# dfm options list | grep -i audit
auditLogEnabled   Enabled
auditLogForever   No
[root@ ~]#

Change this option to auditLogForever=Yes; by default it is No. Regards adai
Hi Gopinath, As you rightly mentioned, starting with ONTAP 8.1 there is no FilerView support. The replacement tool is OnCommand System Manager, but it is an off-box solution and needs to be installed on each desktop. Here is a tool, written by one of our Developer Conference participants, to manage qtree quotas. See if this helps: Qtree Quota Manager. Regards adai
Hi Magnus, Can you share the output of dfm diag, so that I can see how many jobs per day you are running, as well as the rest of your DFM server configuration, to check whether it was running out of resources? Regards adai
Hi Magnus, Unfortunately there is no CLI or UI way to cancel a job that is in the queue. Can you upgrade to 5.0.2, which is the latest GA release, with enhanced scalability due to its 64-bit architecture? In versions 5.0/5.0.2 there is a way to purge all jobs beyond a timestamp; see if that helps. BTW, how did you land in this situation?

[root@~]# dfpm job purge help
NAME
    purge -- remove old data protection and provisioning jobs from the database
SYNOPSIS
    dfpm job purge [ -a ] <time-period>
DESCRIPTION
    Purge old successful jobs whose completion time is older than (now - time-period).
    time-period is specified in a flexible time format. Only successful jobs are
    purged unless the -a option is specified.
    Examples of time period: 4m, "2.5 hours", "15 secs", etc.
[root@ ~]#

Regards adai
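As a rough illustration of the flexible time-period format in the examples above ("4m", "2.5 hours", "15 secs"), here is a small sketch of how such strings map to seconds. This is not dfpm's actual parser, and the unit table (including treating "m" as minutes) is an assumption for illustration only:

```python
import re

# Assumed unit table for this sketch; dfm's real parser may differ.
_UNITS = {
    "s": 1, "sec": 1, "secs": 1, "second": 1, "seconds": 1,
    "m": 60, "min": 60, "mins": 60, "minute": 60, "minutes": 60,
    "h": 3600, "hour": 3600, "hours": 3600,
    "d": 86400, "day": 86400, "days": 86400,
}

def period_to_seconds(period: str) -> float:
    """Convert a time-period string such as '2.5 hours' into seconds."""
    match = re.fullmatch(r"\s*([0-9.]+)\s*([A-Za-z]+)\s*", period)
    if not match:
        raise ValueError(f"unrecognized time period: {period!r}")
    value, unit = float(match.group(1)), match.group(2).lower()
    if unit not in _UNITS:
        raise ValueError(f"unknown unit in time period: {period!r}")
    return value * _UNITS[unit]

print(period_to_seconds("2.5 hours"))  # 9000.0
```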
Hi Calvin, We haven't heard of similar problems before. To find the actual root cause of this, we will have to enable some logging to find out what is happening in the provisioning job, and time each step of it. All of this is not possible via Communities, so can you please open a case with NetApp Support, who can do all of this, find the root cause, and then provide a fix? Regards adai
Hi Paul, The KB has now been merged into the BPG, which is available at the following location: How to deploy OnCommand Unified Manager – Best Practices Guide. Regards adai
Hi Chris, I had our engineering team look into it. They did a service restart and the data now shows up. Can you check and let us know if you still face this issue? Regards adai
SnapDrive support is for NetApp storage and OSSV is for non-NetApp storage; they are also totally different products. With SnapDrive, a NetApp Snapshot copy is still taken irrespective of ext3/ext4; that is not the case with OSSV. Regards adai
Hi Calvin, What is the version of ONTAP against which this provisioning job is running? The bottleneck in creating the volume can be in one of two places, or both: OnCommand Unified Manager/Provisioning Manager, or Data ONTAP itself. Can you do the same set of operations directly on the filer and let us know if you still face the same problem? Also, was there any specific operation going on on the filer other than normal data access, such as a mirror update, clone split, or snapshot creation? Are you facing this problem on all your controllers, or is it specific to only one controller? And are all these provisioning jobs running concurrently on the same controller? Regards adai
Hi Klaus, Please upgrade to 5.0.2, as it is the current GA release and fixes an important security vulnerability that existed in 5.0 and 5.0.1.

You asked about getting the qtree ID of a qtree on the filer: "We tried to get the qtree-id with the API element qtree-list-info-iter-next - qtree-id. It looks like this qtree-id is tied to the internal Operations Manager ID of the qtree, and has nothing to do with the qtree-id shown by quota report on the filer. We would need the ID on the filer to map quota alerts (SNMP traps from the filer contain the qtree-id, not the qtree name) in our external trapnotify script."

That's right: what you get from qtree-list-info-iter is the internal DFM Sybase database unique identifier. There is no way in DFM to get the ID that you see in a quota report. As you said, you can use either the API proxy or dfm run cmd to get it. Regards adai
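As a sketch of the mapping such a trapnotify script needs: build a lookup table of filer-side qtree-id to qtree name from the quota report output (fetched via dfm run cmd or the API proxy), then resolve the numeric ID carried in each trap. Parsing of the actual quota report text is omitted here, and all names below are illustrative, not part of any NetApp tool:

```python
def build_qtree_map(pairs):
    """Map filer-side qtree-id -> qtree name.

    `pairs` is assumed to be (id, name) tuples already extracted
    from `quota report` output on the filer.
    """
    return {qtree_id: name for qtree_id, name in pairs}

def resolve_trap_qtree(qtree_map, trap_qtree_id, default="<unknown qtree>"):
    """Translate the numeric qtree-id carried in a quota SNMP trap."""
    return qtree_map.get(trap_qtree_id, default)

# Hypothetical example data, for illustration only.
qmap = build_qtree_map([(101, "vol1/q_home"), (102, "vol1/q_proj")])
print(resolve_trap_qtree(qmap, 101))  # vol1/q_home
```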
Hi Maico, The max fan-in ratio applies only to SV and QSM relationships, which are qtree based. This option does not apply to VSM relationships, as they are volume based. What kind of relationship are you creating: VSM, QSM, or SV?

Here is a brief write-up on the fan-in ratio. Protection Manager supports volume fan-in, allowing multiple source volumes to go to a single secondary volume. If fan-in is desired, the NetApp best practice is to set the fan-in ratio to four. Fan-in applies only to QSM and SV relationships, not to VSM or OSSV relationships.

Pros of a fan-in value higher than 1:
- Fewer secondary controllers, as you won't hit the controller limit of 500 volumes. Take the example of 100 volumes each from 10 different primary storage systems that need to be backed up to a single secondary storage system. The secondary storage system would require 1000 volumes, yet the maximum supported volume count is 500 on FAS 3xxx or 6xxx series, with fewer on entry-level models. However, if capacity is the limiting factor rather than volume count, it may be preferable to keep fan-in at 1:1.
- More dedupe savings. Dedupe operates within the scope of a volume, so greater storage efficiency may be obtained by increasing the fan-in ratio when primary volumes share common data.
- If there is a plan to move to ONTAP 8.0 7-Mode and provision 64-bit aggregates on the secondary controllers, they can easily have more than 25 secondary volumes on their 64-bit aggregates, which could be a potential sweet spot for the fan-in use case.
- Fewer dedupe jobs, though it might increase the duration of each job.
- Simpler snapshot management: fewer Snapshot copies being created and deleted.

Cons of a fan-in value higher than 1:
- Long-running transfers. When 4 primary volumes are backed up to 1 secondary volume and one of the four takes longer to complete its update, the creation of the snapshot on the secondary volume is delayed, as it can only be taken once all incoming relationships to the volume have completed. The NDMP session and the replication stream on the storage system are held open by long-running transfers, and there is a chance of missing the SLA with frequent updates such as every couple of hours.
- Reduced flexibility when migrating secondary volumes for space management, as it becomes more difficult to find aggregates with sufficient free space.
- Backing up multiple primary volumes to a single secondary volume increases the risk of hitting dedupe volume-size limits. When the dpDynamicSecondarySizing option is enabled, Protection Manager won't be able to grow the secondary volume beyond the dedupe platform limit; instead it provides a warning message in the job log and continues with the transfer. Hence, if the secondary volume doesn't have enough space, the transfer will fail.

Regards adai
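To make the volume-count arithmetic above concrete, here is a small sketch. The 10 × 100 example and the 500-volume limit come from the write-up; the helper function itself is illustrative, not part of any NetApp tool:

```python
def secondary_volumes_needed(primary_volumes: int, fan_in: int) -> int:
    """Number of secondary volumes when up to fan_in primaries share one secondary."""
    # Ceiling division: each secondary volume absorbs at most fan_in primary volumes.
    return -(-primary_volumes // fan_in)

# Example from the post: 10 primary storage systems x 100 volumes each.
primaries = 10 * 100
print(secondary_volumes_needed(primaries, 1))  # 1000 -> exceeds the 500-volume limit
print(secondary_volumes_needed(primaries, 4))  # 250  -> fits within the limit
```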
Hi Morgan, On upgrade only the DB schema is changed to the appropriate version; no data in the DB tables is changed. Is your host configured for DHCP? OnCommand doesn't use DNS names; it always tries to reach a controller using the IP address with which it was added, which is what dfm host get shows. Can you confirm that there were no network changes on the controller side that caused the IP to change? Regards adai
The question of support arises when you have a problem: the support center and engineering will not provide fixes or triage for any issue on unsupported configurations. Regards adai
Hi Chris, Is this browser being launched from Windows 7? If so, you will hit this: browsers on Windows 7 are not supported. Also, can you check whether you are running a supported browser version, such as IE 8 or Firefox 3.x? The list of supported versions pops up when you launch the UI in an unsupported browser. Regards adai
Hi Mark, Unfortunately, there is no way today in OC to include the condition as a column in the events reporting. That is a fair ask, but there is a workaround. If you are running OC 5.0 or later, you can access the events database view and do exports and similar things. For more details on how to access the dfm DB views, refer to the following TR: TR 3690 - Access to DataFabric Manager and Performance Advisor Data Using Database Access and Data Export. To access some of the important docs/whitepapers/TRs related to OC UM, please refer to the following post: OnCommand(DFM) and its related Technical Reports.

But even there you may not be able to get the condition directly. Below is an actual event, which I am taking as an example:

[root@vmlnx ~]# dfm report view events-warning | grep -i 1743
Error  1743  Clock Skewed  18 Jul 14:03  130  f3240-208-145

The event condition is as below:

[root@vmlnx ~]# dfm event detail 1743 | grep -i condition
eventCondition  Clock on host f3240-208-145(130) is behind management station by 4368 seconds
[root@vmlnx~]#

This is how the condition is stored in the DB, using arguments, which the eventView exposes as well:

[root@vmlnx208-161 ~]# "SELECT eventArguments FROM eventView where eventId=1743" | more
"eventArguments"
"mgmtStationClock=1342600381&hostName=f3240-208-145&hostTimezone=GMT&hostClock=1342596013&hostId=130&hostClockLag=4368"
[root@vmlnx208-161 ~]#

The actual, pretty condition shown in the event detail is constructed dynamically in code. There is already a bug, 623749, requesting that this condition be exposed in a human-readable format so that the eventViews are more useful. Can you please add your customer's case to this bug?

BTW, the database schemas for the exposed views are documented in two places:
OnCommand Console help: Help > Contents > Reports > Database Schema
Operations Manager Console help: Control Center > Help > General Help > Database Schema

Under the latter you will find the following three sections:
Database schema for DataFabric Manager non-historic data
Database schema for DataFabric Manager historic data
Relationship among fields of various database views

Regards adai
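As a follow-up: the eventArguments value is a URL-query-style string, so a reporting script can decode it into individual fields with a few lines of Python. This is a sketch; the field names come straight from the example event above:

```python
from urllib.parse import parse_qs

# eventArguments string exactly as returned by the eventView example above.
raw = ("mgmtStationClock=1342600381&hostName=f3240-208-145&hostTimezone=GMT"
       "&hostClock=1342596013&hostId=130&hostClockLag=4368")

# parse_qs returns lists of values per key; each key appears once here.
args = {key: values[0] for key, values in parse_qs(raw).items()}
print(args["hostName"])      # f3240-208-145
print(args["hostClockLag"])  # 4368 (seconds behind the management station)
```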
BTW, here is the tool I mentioned, which does more than what OC does: http://support.netapp.com/NOW/download/tools/config_advisor/ Regards adai
Hi Bengt, I am not sure whether unassigned disks are reported in OnCommand Unified Manager/DFM. I definitely remember that we report on data, parity, spare, failed, and broken disks, but I am not sure about unassigned ones.

[root@vmlnx ~]# dfm report view disks help
All Disks Report (disks)
Shows all disks.
Columns:
    reportLineNumber        Line
    grandparentObjFullName  Controller
    diskName                Disk Name  <-- tells you whether a disk is data/parity/spare/broken/failed; check if you get unassigned here as well
    aggrObjName             Aggregate
    diskFirmwareRevision    Firmware Revision
    diskVendorName          Disk Vendor
    diskModel               Disk Model
    diskType                Disk Type
    diskShelf               Disk Shelf
    diskBay                 Disk Bay
    diskPlexId              Disk Plex Id
    diskTotalMB             Disk Size (MB)
Default sort order is +grandparentObjFullName.
[root@vmlnx ~]#

I don't have a setup handy to quickly try this out and confirm. Regards adai