VMware HA and VMotion are supported starting with DFM 4.0. Please refer to the IMT: http://now.netapp.com/matrix/configuration/showDetailsPage.do?configVersionId=52509&activateNotesTab=true Regards, adai
Hi Babar,

You can use the report schedule feature to schedule reports and email them, in any of the supported formats, to one or more people. The navigation is Control Center -> Reports -> Schedule. Note that the FSRM data is point-in-time rather than aggregated, unlike the dfm history for volumes and performance data. Below is an example of a report scheduled on a monthly basis:

# dfm report schedule list
ID     Name                 Report          Schedule                   Enabled
------ -------------------- --------------- -------------------------- --------
1191   Monthly Aggr         disks-aggr      Last day of the month      Yes

# dfm schedule list
ID     Schedule Name                       Schedule Description
------ ----------------------------------- ---------------------------------------
1190   Last day of the month               Monthly on day 30 at 0 hours 0 minutes.
#

Regards, adai
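If you want to check from a script whether a given schedule is enabled, the tabular output of `dfm report schedule list` can be parsed with standard tools. A minimal sketch, using a copy of the sample output above as a stand-in for the live command (the parsing itself is generic):

```shell
# Sample output, copied from `dfm report schedule list` above.
schedule_list='ID     Name                 Report          Schedule                   Enabled
------ -------------------- --------------- -------------------------- --------
1191   Monthly Aggr         disks-aggr      Last day of the month      Yes'

# Extract the Enabled column (last field) for schedule ID 1191.
enabled=$(printf '%s\n' "$schedule_list" | awk '$1 == "1191" { print $NF }')
echo "Schedule 1191 enabled: $enabled"
```

On a real DFM server you would pipe the command output directly into the same awk filter.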
You can use the CLI below to generate your custom event for the script in DFM:

# dfm event generate help
NAME
    generate -- generate a particular user-defined event
SYNOPSIS
    dfm event generate [ -t <timestamp> ] <event-name> <source> [ <event-message> ]
DESCRIPTION
    This command is used to generate custom events. Script plugins can use the
    command to generate built-in script status events such as
    script:warning-event. This command cannot be used to generate other
    built-in events.
    timestamp: specifies the event generation time in YYYY-MM-DD,HH:MM:SS
    format. If not specified, the time when the command is invoked is used.
    event-name: specifies the name of the event.
    source: specifies ID/name of source object of the event.
    event-message: A message specific to this event. This message will be
    displayed as part of event details.
#

Regards, adai
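A script plugin would typically build the timestamp in the exact YYYY-MM-DD,HH:MM:SS format the command expects. A minimal sketch; the event name `script:warning-event` comes from the help text above, while the source name and message are illustrative, and the command is echoed rather than executed:

```shell
# Build the timestamp in the YYYY-MM-DD,HH:MM:SS format dfm expects.
ts=$(date '+%Y-%m-%d,%H:%M:%S')

# Assemble the command a script plugin would run.
# "myfiler" and the message text are placeholder values for illustration.
cmd="dfm event generate -t $ts script:warning-event myfiler \"disk usage above threshold\""
echo "$cmd"
```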
Hi Jamey, Are you talking about the physical loop connectivity between the head and the disk shelf? If so, we don't discover those objects, so you won't be able to do it. Regards, adai
Hi Marshall, Can you give us the following information? What kind of relationships are they: SV/QSM/VSM? Is your SnapMirror monitor running fine? You can check this by running dfm host discover <filer-name-or-id>. Are you finding any errors in smmon.log under <installdir>/NTAP/DFM/log? Regards, adai
Hi,

You can get the graphs on a daily, weekly, monthly, 3-month and 1-year basis from the Operations Manager Web UI. Graphs beyond 1 year can be obtained from the dfm graph CLI; running it with help will show you all the options. There is no graph for NFS latency or IOPS, but there are reports aggregated daily, weekly, monthly, yearly, etc. Below are the report names:

dfm report | grep -i performance
storage-system-performance-summary       performance summary of storage system
storage-system-NAS-performance-summary   NAS performance summary of storage system
storage-system-SAN-performance-summary   SAN performance summary of storage systems
aggregates-performance-summary           performance summary of aggregate
volumes-performance-summary              performance summary of volume
volumes-NAS-performance-summary          NAS performance summary of volume
volumes-SAN-performance-summary          SAN performance summary of volume
qtrees-performance-summary               performance summary of Qtree
luns-performance-summary                 performance summary of LUN
disks-performance-summary                performance summary of Disks
array-luns-performance-summary           performance summary of array LUNs
vfiler-performance-summary               performance summary of vFilers

For latency and other counters in graphical form, you will have to use Performance Advisor. Download it from the Web UI: Control Center -> Setup -> Download Management Console.

Regards, adai
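From a script, several of the summary reports above can be pulled in one pass. A sketch that just echoes the `dfm report view` commands rather than running them (the report names are copied from the listing above; on a DFM server you would execute each command instead):

```shell
# A subset of the performance summary reports listed above.
reports="storage-system-performance-summary
aggregates-performance-summary
volumes-performance-summary"

# Echo the command for each report; replace echo with the real call on a DFM server.
count=0
for r in $reports; do
    echo "dfm report view $r"
    count=$((count + 1))
done
echo "queued $count reports"
```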
Upgrades of up to two major releases are supported in one shot. If it were from 3.6 to 4.0, you would have to upgrade to 3.7 or 3.8 first and then to 4.0, but that is not needed in your case. Regards, adai
There is no way to cap the size of the db, but there is a way to monitor the space, using the events below:

dfm eventtype list | grep -i "dfm.free.space"
management-station:enough-free-space                   Normal   dfm.free.space
management-station:filesystem-filesize-limit-reached   Error    dfm.free.space
management-station:not-enough-free-space               Error    dfm.free.space

Regards, adai
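Outside of those built-in events, the same kind of check can be scripted directly: compare the free space on the filesystem holding the db against a threshold. A generic, runnable sketch; the path and threshold are illustrative assumptions, not DFM defaults:

```shell
# Assumption: stand-in path for the filesystem holding the DFM database,
# and an illustrative warning threshold in MB.
db_dir="/tmp"
threshold_mb=100

# df -P gives POSIX-stable output; on line 2, field 4 is available 1K blocks.
avail_kb=$(df -P "$db_dir" | awk 'NR==2 { print $4 }')
avail_mb=$((avail_kb / 1024))

if [ "$avail_mb" -lt "$threshold_mb" ]; then
    echo "WARNING: only ${avail_mb} MB free under $db_dir"
else
    echo "OK: ${avail_mb} MB free under $db_dir"
fi
```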
Hi Emanuel, Can you get us a screenshot, the filer's ONTAP version and the dfm host diag output for the filer? Also the output of dfm report view disks for the filer in question. Regards, adai
Hi Emanuel,

The main use case of purge history is purging spikes in the history graphs so that your trending is fine. I was talking about the option below:

[root@lnx186-118 ~]# dfbm option list
Option                       Value
---------------------------- ------------------------------
backupDirMonInterval         8 hours
backupScriptRunAs
discoverNdmp                 Enabled
ndmpMonInterval              30 minutes
purgeJobsOlderThan           off
snapvaultLagErrorThreshold   2 days, 0:00
snapvaultLagWarningThreshold 1 day, 12:00
[root@lnx186-118 ~]#

This will purge all dfbm, dfdrm and dfpm jobs older than the days specified. It will reduce your db size only if Backup/Disaster Recovery/Protection Manager is used; otherwise it will be of no help.

Regards, adai
Hi Emanuel,

When you move your db you must move three things: the db (both monitor.db and monitor.log), the perf dir and the scripts, since a db backup (archive-based or Snapshot-based) contains these three. If all of these are not on a LUN or local drive, db backup will fail. Now, coming to your question: the syntax you used is incorrect. Here is how I moved it on RHEL:

[root@lnx~]# dfm datastore setup -d /adai/data -l /adai/data -p /adai/perf -s /adai/script
Creating the destination data directories.
Required space for data:789 MB, Available space:115 GB
Stopping all services and jobs...
SQL Anywhere Stop Server Utility Version 10.0.1.3960
Copying database files to /adai/data.
Copying perf data files to /adai/perf.
Copying script output files to /adai/script.
Changing database configuration settings ...
Changed dbDir to /adai/data.
Updated dbLogDir to /adai/data.
Starting sql service...
Using service restart timeout = 180 seconds.
Changed perfArchiveDir to /adai/perf.
Changed scriptDir to /adai/script.
Changed databaseBackupDir to /adai/data.
Starting services...
[root@lnx~]#

Regards, adai
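Before running `dfm datastore setup`, it can be worth confirming that the destination directories exist and are empty, since the command copies the data into them. A hedged pre-flight sketch using temporary directories as stand-ins for the real destinations (the paths are illustrative, not DFM requirements):

```shell
# Stand-ins for destination directories such as /adai/data and /adai/perf.
dest_data=$(mktemp -d)
dest_perf=$(mktemp -d)

# Check each destination: it must exist and be empty before the move.
preflight_ok=yes
for d in "$dest_data" "$dest_perf"; do
    if [ ! -d "$d" ]; then
        echo "missing directory: $d"; preflight_ok=no
    elif [ -n "$(ls -A "$d")" ]; then
        echo "not empty: $d"; preflight_ok=no
    fi
done
echo "preflight: $preflight_ok"
```

Only if the pre-flight passes would you go on to run the actual `dfm datastore setup` command shown above.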
Hi duranti,

Are you referring to this? http://communities.netapp.com/message/31422#31422 If it is something else, can you post a link to it? The SnapMirror lag and error thresholds have to be changed in the dfdrm policy and not in the dfm options. Here are the steps: go to the Ops-Mgr Web UI Disaster Recovery tab, select the view "Volume SnapMirror Relationship", and click on the Replication Policy column as shown in the attached picture. This will open the Edit Policy page, where you will find the options for the lag warning and error thresholds. By default they are 1.5 days and 2 days respectively.

Regards, adai
Hi Kd,

You can add your /vol/myvolume to a dataset with a Backup and Mirror policy as a primary member. PM will take care of creating the 250 SnapVault relationships on the backup filer and one SnapMirror relationship on the mirror node. Any further qtrees added to /vol/myvolume are automatically detected by PM and SV relationships are created for them. However, PM still does not support whole-volume SnapVault, i.e. backing up /vol/myvolume to a single qtree in the destination volume and thus creating a single SV relationship. Hope that clarifies your doubts.

Regards, adai
Hi Emanuel,

Please use the dfm datastore setup command to move the dfm db and its contents:

# dfm datastore setup help
NAME
    setup -- configure DataFabric Manager data on a different location
SYNOPSIS
    dfm datastore setup [ -n ] [ -f ] { dfm-data-dir | [ -d dbDir ]
        [ -l dbLogDir ] [ -p perfArchiveDir ] [ -s scriptDir ]
        [ -r reportsArchiveDir ] [ -P pluginsDir ] }
DESCRIPTION
    -n specifies that the data present at the target location will be used
       without copying original data.
    -f specifies that the data should be deleted from the target location if
       it is not empty.
    dfm-data-dir specifies the DataFabric Manager target root directory for data.
    -d specifies the new location for the database data file.
    -l specifies the new location for the database transaction log file.
    -p specifies the new location for perf data files.
    -s specifies the new location for script output data.
    -r specifies the new location for report archival data.
    -P specifies the new location for Storage System configuration plugins.
    Example:
        dfm datastore setup /opt/dfmdata/
        dfm datastore setup -d /opt/dfm/data/ -p /opt/dfm/perf/ -s /opt/dfm/script/
#

This will take care of stopping and starting the services as required.

Regards, adai
Hi Abhishek, TR 3505 says it is 3TB, which is correct, as the 2040 is a better platform than the 2050 based on TR 3505. This looks like a bug to me. If you have a 2040, you can test this with 7.3.2. Regards, adai
Hi,

To upgrade from 3.7.1 to 4.0, first create a backup on your 3.7.1 machine with the command below. This will take a few minutes to hours depending on your database and perf data size.

dfm backup create

Now install 4.0 on the Win2K8 machine; I would suggest you install 4.0D12 instead. Copy the backup created on 3.7.1 to the Win2K8 machine, either directly or by using a CIFS share or shared drive. Then on the 4.0D12 machine run the command below.

dfm backup restore <backup filename with the location>

This will do an upgrade of the dfm db to 4.0, which will take time as there are a lot of changes. Also, if you have NetCache devices being monitored by dfm, they will be removed; only then can the upgrade to 4.0D12 happen, as 3.8 removed support for NetCache. Otherwise you will have to stay on 3.7.1.

Also take care of the space requirements on your new 4.0D12 machine: you will require up to twice (less in practice, but to be on the safer side) the amount of storage you needed on 3.7.1, as 4.0D12 will create Performance Advisor trend files, upgrade some counter groups and widen some counter groups.

Also note that the db backup does not contain the following folders; copy them from the old to the new server if you have or had information in them. The following folders are not part of the archive-based backup:

Reports: This folder contains the output of scheduled reports. You can use the dfm options list reportsArchiveDir command to locate the reports folder.

Data: This folder contains the DataFabric Manager database backups and the monitordb.db and monitordb.log files. You can use the dfm options list databaseBackupDir command to locate the data folder. Note: you should not copy the monitordb.db and monitordb.log files to the DataFabric Manager server 4.0; this folder is needed only if you want to copy your old backups to the new server.

DataExport: This folder contains the output of the dfm data export command. You can use the dfm options list dataExportDir command to locate the export folder.

Regards, adai
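Copying the folders that the archive-based backup omits can itself be scripted. A sketch using temporary directories to stand in for the old and new server paths (the folder names follow the list above; the actual locations on your servers come from the dfm options list commands mentioned there):

```shell
# Stand-ins for the old and new DFM installation directories.
old_root=$(mktemp -d)
new_root=$(mktemp -d)

# Simulate the folders the archive-based backup does not include.
mkdir -p "$old_root/reports" "$old_root/dataExport"
echo "sample scheduled report" > "$old_root/reports/monthly.txt"

# Copy them across; on a real system the target would be the new server's
# corresponding dfm directories.
for d in reports dataExport; do
    cp -R "$old_root/$d" "$new_root/$d"
done

ls "$new_root/reports"
```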