This is how the growth rate is calculated: the daily growth rate is the slope of the trend line multiplied by the number of seconds in a day. The trend line is calculated by performing a linear regression over up to 90 days of data. Hope this helps. Regards adai
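As a rough illustration (not the actual DFM code), the calculation above can be sketched with awk: a least-squares fit over (timestamp, space-used) samples gives the slope in units per second, and multiplying by 86400 seconds yields the daily growth rate. The sample data below is made up, constructed so the volume grows exactly 10 GB per day.

```shell
# Made-up (epoch-seconds, GB-used) samples, one day apart, growing 10 GB/day.
samples="0 100
86400 110
172800 120"

# Least-squares slope = (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2), then scale to per-day.
daily_growth=$(echo "$samples" | awk '
  { n++; sx += $1; sy += $2; sxy += $1*$2; sxx += $1*$1 }
  END { slope = (n*sxy - sx*sy) / (n*sxx - sx*sx); printf "%.2f", slope * 86400 }')
echo "Daily growth: $daily_growth GB/day"
```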
IIRC, it's going to be supported in the next upcoming release of OM, though I don't know what it's going to be called. DFM 4.1? Or UM 1.0? Regards adai
Hi Emanuel, I can see them in the GUI too, and the first post you made shows output from the CLI. Attached is a screenshot of the GUI. Regards adai
There isn't a document as of today for what you are looking for. But I would like to understand what you are trying to achieve, so that we can help you better. Are you trying to run some of the monitors more frequently than they currently run, for some specific events you are interested in, or is it something else? Regards adai
Please use the baseline feature of the thresholds: starting with DFM 4.0, PA has a new button called Suggest which will give you all of this, such as the mean, median, min, max, 95th percentile, etc. Regards adai
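For reference, the statistics the Suggest button reports can also be reproduced by hand from raw samples. A quick sketch with sort and awk (the sample values are made up, and the 95th percentile here uses the nearest-rank method, which may differ slightly from PA's exact interpolation):

```shell
# 20 made-up latency samples (a shuffle of 1..20)
values="12 7 3 19 5 14 8 1 16 10 2 18 6 13 9 4 17 11 20 15"

# Sort numerically, then compute min, max, mean, median, and 95th percentile.
stats=$(echo $values | tr ' ' '\n' | sort -n | awk '
  { v[NR] = $1; sum += $1 }
  END {
    n = NR
    mean = sum / n
    median = (n % 2) ? v[(n+1)/2] : (v[n/2] + v[n/2+1]) / 2
    p95 = v[int(0.95 * n + 0.999999)]   # nearest-rank: ceil(0.95*n)
    printf "min=%s max=%s mean=%s median=%s p95=%s", v[1], v[n], mean, median, p95
  }')
echo "$stats"
```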
It doesn't take any wildcard, but you can use something like this if yours is a Linux DFM server:
for i in `dfm alarm list -q | awk '{ print $1 }'`; do dfm alarm delete $i; done
for i in `dfm event list -q | awk '{ print $1 }'`; do dfm event ack $i; dfm event delete $i; done
Regards adai
1) Performance Advisor data collection stopped: run these two APIs to stop collecting PA data: perf-disable-object-update and perf-disable-data-collection.
2) The DFM database (Sybase) quiesced.
3) The DFM database unquiesced.
For steps 2 and 3, you should refer to the Sybase guide or documentation.
4) Performance Advisor data collection restarted: run perf-enable-object-update and perf-enable-data-collection.
But will you still be able to take a consistent snapshot without SnapDrive? Regards adai
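The four steps above could be scripted roughly as follows. Note that `run_dfm_api` is a hypothetical stub standing in for however you invoke DFM APIs in your environment (e.g. through the NetApp Manageability SDK), and the Sybase quiesce/unquiesce commands are left as placeholders to fill in from the Sybase documentation.

```shell
# Hypothetical wrapper: replace the echo with your real DFM API invocation.
run_dfm_api() { echo "calling DFM API: $1 (replace with real invocation)"; }

# 1) Stop PA data collection
run_dfm_api perf-disable-object-update
run_dfm_api perf-disable-data-collection

# 2) Quiesce the Sybase database (see Sybase documentation for the command)
#    ... take the storage-level snapshot here ...
# 3) Unquiesce the database (again, per Sybase documentation)

# 4) Restart PA data collection
run_dfm_api perf-enable-object-update
run_dfm_api perf-enable-data-collection
```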
Some useful reports that you can create to help you.

Volume and snapshot reserve usage in one report:
Create the following custom report:
dfm report create -R SnapReserve -f Volume.Aggregate.Filer=StorageSystem,Volume.Aggregate=ContainingAggr,Volume.Name=VolName,Volume.Total=VolTotal,Volume.Used=VolUsed,Volume.Available=VolAvail,Name=SnapName,Total=SnapResTotal,Used=SnapResUsed -L "Volume And Snapshot Capacity Usage" volume-and-snapshot-usage

When should I buy new disks?
You can look at the "Raw Capacity Used vs Total" graph for each filer, or at a group level, to check the physical capacity usage. To check the actual usage, see "Volume Capacity Used vs Total". Raw Capacity Total is the sum of all disk sizes (data, parity, dual parity, and spare disks). Raw Capacity Used is the sum of all in-use disk sizes (data, parity, and dual parity).

How do I know if aggregate space is completely committed, but the volumes on that aggregate still have space left?
You can create the following custom report:
dfm report create -R Aggregate -f FullName=AggrName,Status=AggrStatus,Used=AggrCapUsed,UsedPct="AggrCapUsed%",BytesCommitted=AggrBytesCommitted,BytesCommittedPct="AggrBytesCommitted%",SpaceAvailable=AggrCapAvail,AvailablePct="AggrCapAvail%",TotalSpace="AggrCapTotal" -L "Aggregate Used vs Committed Space" aggr-used-committed
Bytes Committed - amount of aggregate space committed to flexible volumes.
Bytes Committed (%) - percentage of aggregate space committed to flexible volumes.
Used Capacity - amount of space in the aggregate used by flexible volumes.
Bytes Used (%) - percentage of aggregate space used by flexible volumes.
Total Space - total capacity of this aggregate.

How can I find the disk sizes after right-sizing?
We show the marketing size of the disks, not the right-sized capacity, in reports. We collect right-size information for disks through SNMP (but it does not exactly match the value reported by the "sysconfig -r" command). This is available through custom reports via the "UsedSpace" field in the "Disk" catalog. Note: right-size information is not available for spare disks, because it is not available through SNMP.
dfm report create -R disk -f "Name,Filer,UsedSpace:MB,Size:MB" -L "Right Sized Disk" disk-size-report
Regards adai
PS: Thanks to Shekar Raja for all the reports, which he documented.
Also, are there any scripts available for automated posting of these types of reports to a website, so that emailing and unzipping is not needed?

By default, these scheduled reports are archived by DFM under the reports archival directory, /opt/NTAPdfm/reports. If you wish to make these reports accessible via the web, you can change this default location to a directory served by Apache or a similar web server, so that they can be accessed via the web. The option to change the location is below.

[root@lnx ~]# dfm options list reportsArchiveDir
Option            Value
----------------- ------------------------------
reportsArchiveDir /opt/NTAPdfm/reports/
[root@lnx ~]#

Regards adai
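To change the location, set the same option (the target path here is only an example; use whatever directory your web server actually exposes):

```
[root@lnx ~]# dfm option set reportsArchiveDir=/var/www/html/dfm-reports
```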
First, the reason these counters aren't available in the "Data Collection Configuration Wizard" is that they are derived counters (calculated statistics, not part of the default counter groups), while the wizard is only for configuring counter-group counters. For the view part, I will have to check. Regards adai
Have you taken a look at the DB views exposed in DFM? These expose exactly what you are looking for. Using the SQL views, you can make your own calculations. The DFM DB schema is documented in the General Help section, which can be accessed from the web UI. There is also a TR which helps in accessing the same: Access to DataFabric Manager and Performance Advisor Data Using Database Access and Data Export. Regards adai
The below TR outlines how to access the exposed DFM and PA data: Access to DataFabric Manager and Performance Advisor Data Using Database Access and Data Export. Regards adai
Starting with Ops-Mgr 4.0, VMware VMotion and HA are supported. The text below is from the Interoperability Matrix Tool: DataFabric Manager Server 4.0 and above supports the VMware VMotion and VMware High Availability features for VMware Infrastructure 3 version 3.5 and VMware vSphere 4. Regards adai
As of today, there is no tool that gives automated advice on migration. But as you said, all this data is available in DFM, and the DFM SDK is available, using which customers can build their own migration suggestor as per their needs. Also, DFM DB access is provided via SQL views, which can again be used in making this decision. Even the Performance Advisor data can be exported to do I/O profiling. Have you taken a look at the Performance Advisor view in the NMC, which has in-depth performance information about the controller and its objects? Some people create custom views like the ones below to help them make the decision. The outputs below are from the CLI "dfm perf view describe" for the custom views created.

The Custom System Summary View is very similar to the default system summary view, except that it includes some extra counters to show read/write information for network, ops, and latency.

View Name: Custom System Summary View
Applies To: Object type (filer)
Chart Details:
Chart Name: Network Throughput
Chart Type: simple chart
Counters in this Chart:
Counter: system:net_data_recv
Counter: system:net_data_sent
Chart Name: Average Latency per Protocol
Chart Type: simple chart
Counters in this Chart:
Counter: nfsv3:nfsv3_read_latency
Counter: nfsv3:nfsv3_write_latency
Counter: cifs:cifs_latency
Counter: nfsv3:nfsv3_avg_op_latency
Chart Name: All Protocol Ops
Chart Type: simple chart
Counters in this Chart:
Counter: system:nfs_ops
Counter: system:cifs_ops
Counter: nfsv3:nfsv3_write_ops
Counter: nfsv3:nfsv3_read_ops
Chart Name: CPU Utilization
Chart Type: simple chart
Counters in this Chart:
Counter: system:cpu_busy

The All Volumes Summary View is a bar chart that summarizes throughput, ops, and latency for all volumes on a physical storage system. Set the perfMaxObjectInstancesInBarChart option to 500 to make sure all volumes are included. These bar charts can be converted to line graphs so that you can see historically which volumes on a given physical storage system are driving the most I/O over time.

View Name: All Volumes Summary View
Applies To: Object type (filer)
Chart Details:
Chart Name: IOPs
Chart Type: bar
Number of object instances: All
Top or Bottom Instances: Top
Counters in this Chart:
Counter: volume:total_ops
Chart Name: Throughput
Chart Type: bar
Number of object instances: All
Top or Bottom Instances: Top
Counters in this Chart:
Counter: volume:throughput
Chart Name: Latency
Chart Type: bar
Number of object instances: All
Top or Bottom Instances: Top
Counters in this Chart:
Counter: volume:avg_latency

The All Aggregates Summary View is similar to the volume summary, but at the aggregate level. The idea here is to compare which aggregates are the busiest, both in terms of total transfers and disk-busy percentage. This helps identify whether disk utilization is a potential bottleneck on the system.

View Name: All Aggregates Summary View
Applies To: Object type (filer)
Chart Details:
Chart Name: Transfers
Chart Type: bar
Number of object instances: All
Top or Bottom Instances: Top
Counters in this Chart:
Counter: aggregate:total_transfers
Chart Name: Avg Disk Busy
Chart Type: bar
Number of object instances: All
Top or Bottom Instances: Top
Counters in this Chart:
Counter: aggregate:pa_avg_disk_busy

Hope this helps; nevertheless, it is something you would have to build yourself. Regards adai
There are a few things to do.

First, stop the agent monitoring:
[root@lnx]# dfm options list | grep -i agentMonInterval
agentMonInterval 2 minutes
[root@lnx]# dfm option set agentMonInterval=off
Changed agent monitoring interval to Off.
[root@lnx]# dfm options list | grep -i agentMonInterval
agentMonInterval Off
[root@lnx]#

If any walks are already in progress, stop them. To find the list of walks in progress, use:
dfm srm path list
From this, stop the ones that are in progress using the CLI below:
[root@lnx]# dfm srm walk stop help
NAME
stop -- Stop SRM Walk
SYNOPSIS
dfm srm walk stop { all | <object> ... }
[root@lnx]#

To stop any scheduled kick-off of a walk, unschedule the schedules from the path.

Regards adai
Yes, Shiva is correct. When you delete an object, say a filer, all aggregates, volumes, qtrees, and LUNs (basically its child objects) are no longer monitored until they are un-deleted (re-added). If your intention is only to avoid a specific event, then go with changing the threshold setting for the individual objects. If your intention is to not monitor the entire volume for any event, then go with delete. Also keep in mind that when we mark an object as deleted, all of its child objects are automatically marked deleted and are not monitored anymore. For example, if a volume is marked deleted, its qtrees and LUNs are also no longer monitored; similarly, for an aggregate, all of its volumes, qtrees, and LUNs are no longer monitored. Regards adai
Hi Emanuel, some answers and my thoughts on your questions. > If I change the schedule in the Data Set, then the Data Set stays the same and there should be no interruption of mirroring activities. Yes, changing the schedule does not affect an in-flight mirror job. It takes effect once this job completes. Once a job is fired, it is on its own until it completes or the user decides to kill it. > If I remove the source volume from the data set and re-add it to another dataset, am I at risk of a re-baseline? You mean remove the source and destination volumes and import them back into another dataset? If so, no rebaseline. But do keep in mind that the old backup versions of this volume still stay in the old dataset and will expire on their own. Also keep in mind that only one schedule can be associated with a connection of a dataset. So if you create a mirror dataset with 20 volumes, you can have only one schedule, not a different one for each volume. In fact, this is the paradigm of the dataset, where you group objects of a similar type requiring the same treatment and manage them as one single entity (the dataset), as opposed to 20 or 200 volumes. Regards adai
Another way to stop getting events for specific objects is to delete the object from DFM using the delete commands. For example, if you know a controller is down for a tech refresh for, say, a week, you can stop getting alerts from this controller by just doing: dfm filer delete <filername-or-id-or-ip>. That way it is only marked deleted in the DFM database, not completely removed. After one week, if you would like to monitor it again: dfm filer add <filername-or-ip>. This can be done for any object, such as a volume, qtree, aggregate, LUN, etc. Regards adai
Hi Fletch, what was the version of the tool that you used? That will help in narrowing down the problem. Judging by the popup message, I think you used version 2.1, which gives the choice of HTTP/HTTPS to connect to the storage system. Are the following options enabled on your source storage system? options httpd.admin.enable on options httpd.admin.ssl.enable on Regards adai
The negative number could be because the volume size is more than 2TB and the int declaration in the tool might be 32-bit. Thanks for the volume size hint; I will check with the tool and get back to you. With respect to the popup you are getting, can you send the version command output from your filer and a screenshot of the popup, so that we can replicate it in-house to see what's happening? Regards adai
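To illustrate the suspicion (the volume size below is made up): a volume larger than 2TB has a byte count above the signed 32-bit maximum of 2,147,483,647, so a tool that stores it in a 32-bit int wraps it to a negative value. The wrap can be simulated in shell:

```shell
vol_bytes=3100000000000              # made-up ~2.8 TiB volume size in bytes
int32_max=2147483647

# Keep only the low 32 bits, then reinterpret them as a signed 32-bit value.
wrapped=$(( vol_bytes & 0xFFFFFFFF ))
if [ "$wrapped" -gt "$int32_max" ]; then
    wrapped=$(( wrapped - 4294967296 ))
fi
echo "stored as: $wrapped"
```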
Hi, there is a tool available which will estimate the time to completion of a given SnapMirror transfer. Take a look at the link below. http://communities.netapp.com/message/41930#41930 Regards adai