Hi Owen,

Are your filers running DOT version 7.3.2? If so, you are hitting the bug below:
http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=383376

> For the life of me I can't find a way to change the setting so that it will display the alert, so that we can see there was an issue in the past that needs to be checked/acknowledged. E.g. I want the count to say there was 1 critical event (even though it has been fixed).

You will have to check the event history in the summary page of each object, under Events, for the same.

> Also, is there a way to have OpsManager use the Windows/AD credentials that you are already logged in with, instead of having to log into OpsManager every time you click an alert? E.g. pass-through authentication?

Yes, you can use a Windows AD account, but you will have to assign the GlobalFullControl role to this AD user to do all you want in DFM.
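Coming back to the first question: you can confirm the Data ONTAP release from each filer's console with the version command. The hostname prompt below is a placeholder, and the output is shortened for illustration:

filer1> version
NetApp Release 7.3.2

Regards
adai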
Hi Scott,

Just wanted to know the status of the datasets, since we did the necessary actions to remove the redundant relationships.

Regards
adai
Hi Andrew,

Why don't you try it and share your experience with us? These are some of the advantages of using Secondary Space Management:

- Automatic resource selection based on the provisioning policy attached to the secondary or backup node (if there is no provisioning policy, legacy provisioning of secondary volumes as per Protection Manager).
- Efficient provisioning of the secondary volume (the new destination volume) as per Protection Manager requirements.
- Re-baseline from the secondary and not from the primary, unlike earlier versions of DFM.
- Backups from primary to secondary are not suspended during the baseline of the new secondary volume (SnapMirroring the data from the old secondary volume to the new secondary volume); they are suspended only briefly during cutover from the old volume to the new secondary volume.
- Backup versions created in the old secondary volume are also moved to the new volume.
- The old backup volume can be deleted after migration; the user is given the following cleanup options: cleanup_after_update, cleanup_after_successful_migration, no_cleanup.
- Ops-Mgr history for the volume is copied over.

Regards
adai
Hi Emanuel,

> 3. On another DFM install, the monitor db is 5.1 GB in size but the backups are 17 GB in size (from the previous night).

A dfm backup contains three things:

- the dfm db (both monitor.db and monitor.log)
- the perfdata dir
- script-plugins (this contains the scripts and their output)

So your 5.1 GB might be the size of the dfm db, and the roughly 12 GB difference should be your perfdata.
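To see where the space is going, you can size each piece directly. A rough sketch, assuming a default Linux install under /opt/NTAPdfm; the subdirectory names (data, perfdata, script-plugins) are my assumptions based on the three components above, so adjust them to your layout:

# sizes of the three components that make up a dfm backup
du -sh /opt/NTAPdfm/data /opt/NTAPdfm/perfdata /opt/NTAPdfm/script-plugins

Regards
adai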
> Finally, from a volume details page, you can manually select two snapshots and compute the delta.

It's the Volume Snapshot Details page, and only when viewed at an individual volume level, not at filer/group levels.

> I don't know if that's available from the CLI.

You can use the dfpm dataset snapshot reclaimable CLI. It works even for volumes that are not part of a dataset. The only caveat is that you will need a Provisioning Manager license.

[root@lnx]# dfpm dataset snapshot reclaimable help
NAME
    reclaimable -- compute space reclaimable from deleting snapshots in a volume
SYNOPSIS
    dfpm dataset snapshot reclaimable <volume-name-or-id> <snapshot-name-or-id> [ <snapshot-name-or-id> ... ]
DESCRIPTION
    Compute space that can be reclaimed if the specified set of snapshots are deleted from the given volume.
[root@lnx]#
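A hypothetical invocation, just to show the shape of the call; the volume path and snapshot names below are made up:

[root@lnx]# dfpm dataset snapshot reclaimable filer1:/vol1 nightly.0 nightly.1

Regards
adai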
Hi,

> Is there a log file somewhere that will show the email was sent to the user?

Yes. You will find a log file named alert.log under /opt/NTAPdfm/log if installed in the default location, else under <installdir>/NTAPdfm/log. The contents will be like this:

May 12 14:23:38 [dfmeventd: INFO]: [3364:0x6ffabb0]: alarm 1, event 83148, Aggregate Full on aggr0: sending email alert to user@domain.com. took 2 seconds

where

- alarm 1 is the alarm ID
- event 83148 is the event ID
- Aggregate Full is the event name
- aggr0 is the object name
- user@domain.com is the email address set in the alarm

> If not, do I have a way of being notified X days after a threshold has gone unresolved?

Yes. Did you try setting up a repeat notification?

[root@lnx]# dfm alarm create help
NAME
    create -- create a new alarm, triggered by particular events specified by the options
SYNOPSIS
    dfm alarm create [ -E <email-to-addresses> ] [ -F <page-to-addresses> ]
        [ -C <event-class> ] [ -A <admin-login-name> ] [ -P <page-to-admin> ]
        [ -T <trap-to> ] [ -s <script> ] [ -u <alarmScriptRunAs> ]
        [ -g <group> ] [ -h <event-name> ] [ -l <time-from> ] [ -m <time-to> ]
        [ -v <event-severity> ] [ -r <repeat-notify> ] [ -i <repeat-interval> ]
        [ -b <disabled> ]
[root@lnx]#

Set -r <repeat-notify> to yes and -i <repeat-interval> to the time after which, if the event condition still persists, another alarm is sent. This can also be done from the WebUI by going to Control Center -> Setup -> Alarms (Advanced Version).
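Putting the two repeat flags together, a sketch of an alarm that re-notifies while an event stays unresolved. It uses only flags from the help above; the event name aggregate-full and the interval being in seconds (86400 = 1 day) are my assumptions, so verify them on your install:

# email user@domain.com on Aggregate Full events, and repeat daily while
# the condition persists (interval units assumed to be seconds)
[root@lnx]# dfm alarm create -E user@domain.com -h aggregate-full -r yes -i 86400

Regards
adai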
IIRC, data transfer jobs that complete in less than 3 minutes show bytes transferred as unknown, both on the NMC Jobs page and in the CLI dfpm job detail output.
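For completeness, a hypothetical way to pull this from the CLI; the job ID 1234 is made up, and the exact subcommand spelling may differ by release (check dfpm job help):

dfpm job detail 1234

Regards
adai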
> Case #2 does not apply here. I'm told by the customer though that Operations Manager crashed near the end of the initial dataset creation, so it is possible that something in that process did not finish correctly even though it appeared to complete.

After some logging and investigation, we found that this is due to the server crash. Provisioning Manager did not update itself to record that this volume was provisioned by it. So this case is due to the fact that Prov-Mgr thinks this volume is not managed by it.

Regards
adai
Hi Scott,

By definition, a redundant relationship is one that duplicates a given PM dataset connection whose needs are already taken care of by another existing relationship.

Can you get the output of the following?

dfpm dataset list -m <dsname>
dfpm dataset list -R <dsname>
dfpm dataset list -l <dsname>
dfpm relationship list -r
dfm options list | grep -i reaper

Regards
adai
Hi Scott,

What is the version of SDW you are using? See if the symptoms you are experiencing are the same as the ones in the FAQ:
http://now.netapp.com/NOW/knowledge/docs/DFM_win/rel40/html/faq/index.shtml#_9.16

Regards
adai
The report works for all QSM/SV/VSM relationships created by DFBM, DFDRM, Protection Manager, or the filer CLI.

> what I am hoping to do is list out the number of bytes transferred on each backup

For that you must look at the dp-transfer-backup-individual or dp-transfer-mirror-individual report, which gives how much data was transferred for each relationship. We don't have a report that gives how much data was transferred per schedule of a dataset, but that can be derived from the dp-transfer-mirror/backup-individual report by summing the data transferred for each relationship that belongs to the dataset.
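If you want to script that summing, a rough sketch: render the individual report and total one column with awk. The column position ($4 here) is an assumption about the report layout, so eyeball the real header row first:

# sum a numeric column of the per-relationship report (column number is a guess)
dfm report view dp-transfer-backup-individual | awk 'NR>1 {sum += $4} END {print sum}'

Regards
adai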
Yes, please look at the reports below.

# dfm report list | grep -i dp-transfer
dp-transfer-backup-individual    DP Transfer Backup, Individual
dp-transfer-backup-daily         DP Transfer Backup, Daily
dp-transfer-backup-weekly        DP Transfer Backup, Weekly
dp-transfer-backup-monthly       DP Transfer Backup, Monthly
dp-transfer-backup-quarterly     DP Transfer Backup, Quarterly
dp-transfer-backup-yearly        DP Transfer Backup, Yearly
dp-transfer-mirror-individual    DP Transfer Mirror, Individual
dp-transfer-mirror-daily         DP Transfer Mirror, Daily
dp-transfer-mirror-weekly        DP Transfer Mirror, Weekly
dp-transfer-mirror-monthly       DP Transfer Mirror, Monthly
dp-transfer-mirror-quarterly     DP Transfer Mirror, Quarterly
dp-transfer-mirror-yearly        DP Transfer Mirror, Yearly
dp-transfer-dataset-daily        DP Transfer Dataset, Daily
dp-transfer-dataset-weekly       DP Transfer Dataset, Weekly
dp-transfer-dataset-monthly      DP Transfer Dataset, Monthly
dp-transfer-dataset-quarterly    DP Transfer Dataset, Quarterly
dp-transfer-dataset-yearly       DP Transfer Dataset, Yearly
#

Regards
adai
Hi Bruno,

Basically, a throttle restricts jobs that start between the hours specified. Throttles are not dynamic; they do not change to unlimited if a job runs beyond the throttle hours.

> If a transfer started at 7PM, it will use the throttle. If at 9PM the transfer is not finished, will it use all the available bandwidth available at that time?

No. The throttle applies until the job is completed.

> Same question for the opposite: if a transfer started at 7AM which used all the available bandwidth and is not finished at 8AM, will it be limited at 8AM?

No. The throttle is fixed for a job.

Regards
adai
Hi Ken,

The option ndmpd.preferred_interface applies when you use NDMP extensions to create SV relationships. When you use the CLI (SnapVault), change the command like the one below:

snapvault start -S PNFR002-e0b:/vol/pnfr002_VMFS_intnr_24e/intnr_24e PNFR011-e0b:/vol/SV_intnr_24e/SV_intnr

Or replace the hostnames with IP addresses.
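For illustration, the same command with the hostnames replaced by addresses; the IPs below are made up, so substitute the real addresses of the e0b interfaces on each filer:

snapvault start -S 10.10.10.2:/vol/pnfr002_VMFS_intnr_24e/intnr_24e 10.10.10.11:/vol/SV_intnr_24e/SV_intnr

Regards
adai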
Can you get the output of the following command?

dfpm dataset get

> Is there some kind of workaround to get Prov Mgr to recognize these imported volumes such that they can be used for future provisioning? If not, there needs to be!

The answer is no today, but there are requests from customers to do so. Please add your customer to the already-existing RFE for the same.

Now coming back to your question: if it was not case 1, was it case 2, as I mentioned in my previous post?

Regards
adai
This could be due to one of the reasons mentioned below.

1. If this volume was imported into the dataset, then further provisioning will not use this volume, as it was not created by Provisioning Manager. But it will still apply all the provisioning policy settings of the dataset, like snap autodelete, volume autogrow, enabling dedupe, etc.

2. If there is a name conflict between the qtrees. For example, the first provisioning request was to create a qtree named qt1 in the dataset ds1; Provisioning Manager will create a volume named ds1 and create a qtree named qt1 inside the volume. If the second provisioning request is to create a qtree named qt1 again, then to overcome the ONTAP restriction of not allowing two qtrees with the same name in one volume, Provisioning Manager will create one more volume named ds1_1 and then create the qtree named qt1 there, to disambiguate it. See the sketch below.
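To make case 2 concrete, a sketch of the resulting layout (dataset, volume, and qtree names taken from the example above):

/vol/ds1/qt1      <- first request for a qtree named qt1
/vol/ds1_1/qt1    <- second request for qt1; the extra volume avoids the name clash

Regards
adai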