As Earls already said, it is not possible to do this within PA. But DFM as a whole offers an automation framework called alarm scripts, which are executed when an event condition is breached. The script is given a set of environment variables and can be scripted to do whatever you like. The FAQ below gives a simple example of the environment variables and their details. https://library.netapp.com/ecmdocs/ECMM1278650/html/faq/index.shtml#_7.5 Note this is how the script must be listed in the alarm; it should have the full path of the script interpreter.

lnx~ # dfm alarm list
Alarm                1
Group                Global
Event Severity       All
Event Name           All
Time From            00:00
Time To              23:59
Email Addresses
Script Name          /usr/bin/perl /opt/script.pl
User Script Runs As  root

You can also generate a custom event of your own from the script using dfm event generate. Regards adai
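As a rough sketch, an alarm script simply reads the environment variables DFM sets and acts on them. The DFM_* variable names below are illustrative assumptions, not the official list; check the FAQ link above for the actual variables your DFM version exports.

```python
#!/usr/bin/env python
# Minimal alarm-script sketch. The DFM_* names are illustrative
# assumptions; the FAQ above lists the real variables.
import os

def format_alarm(env):
    """Build a one-line summary from the alarm environment variables."""
    return "event=%s source=%s severity=%s" % (
        env.get("DFM_EVENT_NAME", "unknown"),
        env.get("DFM_SOURCE_NAME", "unknown"),
        env.get("DFM_SEVERITY", "unknown"),
    )

if __name__ == "__main__":
    # When DFM runs the script, these are set in the process environment.
    print(format_alarm(os.environ))
```

In the alarm definition you would list it with the interpreter path, e.g. `/usr/bin/python /opt/alarm.py`, just like the Perl example above.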
Hi Alex, I recommend upgrading to 5.0.2 to take advantage of the 64-bit architecture and free licensing. Your case could get this prioritized for upcoming releases. Regards adai
Hi Alex, There is no way to do this in the product today. What version of DFM are you using? Can you please create a case and attach it to RFE 300668? Regards adai
Can you give more details about your environment, such as the DFM version, Host Package version, and vSphere version, along with the exact error message? Regards adai
Hi Martin, There is no way to edit the alarm email that is sent out by DFM. But generally this is how the event email link looks: https://<dfm-server-name>:8080/#event-details:event-id=10 What version of DFM are you using? Regards adai
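If you need that link in your own scripts or alarm notifications, it can be assembled from the pattern above. A small sketch (the host name and event id are placeholders; the port and anchor follow the example link above):

```python
def event_details_url(dfm_host, event_id, port=8080):
    """Build an event-details link in the format shown in DFM alarm emails."""
    return "https://%s:%d/#event-details:event-id=%d" % (dfm_host, port, event_id)

# Placeholder host name, as in the example link above.
print(event_details_url("dfm-server", 10))
# → https://dfm-server:8080/#event-details:event-id=10
```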
Hi, Let me clarify what you want to do. Are you looking for the NMC way of exporting individual counters, as you stated: "It's working if I use the graphical interface: File => Export => Select Counters"? dfm perf data retrieve retrieves data from a performance view and doesn't allow individual counters. dfm perf export counter add exports the list of counters for a defined period to third-party tools or a database. You can find the TR named "Access to DataFabric Manager and Performance Advisor Data Using Database Access and Data Export" and other DFM-related TRs at this link: OnCommand(DFM) and its related Technical Reports. Regards adai
Hi, Please use the dfm perf data retrieve CLI to get the data for the view.

NAME
    retrieve -- This command allows you to extract the counter data with supported statistical calculations on them.

SYNOPSIS
    dfm perf data retrieve { [ -o object-name-or-id ... ] [ -C perf-counter ... ] [ -V view-name ] }
        [ -d duration ] [ -b start-time ] [ -e end-time ]
        [ -M month ... ] [ -D weekday ... ] [ -T time-range ... ]
        [ -m statistical-method [ -P percentile-value ] [ -S data-advance-method ] ]
        [ -s sample-rate ] [ -x output-format ] [ -R ]

DESCRIPTION
    This command retrieves performance data for the specified counters and object instances. These counters and instances can be specified explicitly using the -C and -o options, or implicitly by specifying the view name with the -V option. When the specified view is associated with object types, the object instance has to be specified explicitly using the -o option. When the view is associated with object instances, this option is not mandatory; if -o is specified in this case, the specified object instances are used and the instances in the view are ignored. Likewise, if counters are explicitly specified along with a view, the counters in the view are ignored and the explicitly specified counters are used instead. For finer control over time, a filter with months, days, and a time range within a day can be specified; only those timestamps that satisfy the filter are shown in the output. On the resulting counter data, statistical computations such as minimum, maximum, mean, and value_at_percentile can be performed.

Description of options:
    -o  Object instance for which data is to be retrieved. Multiple object instances can be specified. Specifying a parent retrieves data for all its children.
    -C  A counter of the form object-name:counter-name (e.g. system:cpu_busy). Multiple counters can be specified.
    -V  View name. A view name can be specified instead of specifying the counters and instances explicitly.
    -b  Start time. Format: "yyyy-mm-dd hh:mm:ss". If not specified, the start time is the time of the oldest record.
    -e  End time. Format: "yyyy-mm-dd hh:mm:ss". If not specified, the end time is the time of the newest record.
    -d  Duration for which data is to be retrieved (in seconds), calculated going backwards from the current time. When this option is specified, start-time and/or end-time are ignored.
    -s  Sample rate (in seconds). This interval is used to consolidate the output data: the available data is split into regions as specified by the sample rate, and the last sample in each region is displayed. Also used for window calculation for metrics.
    -x  Output format. Possible values are Legacy and TimeIndexed. Default is Legacy.
    -R  If specified, data is rolled up to the nearest minute. Applicable only when the output format is TimeIndexed.
    -M  Filter based on month, e.g. Jan, Feb. Comma-separated multiple values can be specified.
    -D  Filter based on day of week, e.g. Mon, Tue. Comma-separated multiple values can be specified.
    -T  Filter based on time range within a day, e.g. 16.00-21.55. Comma-separated multiple values can be specified.
    -m  The statistical computation. Valid values are min, max, mean, and value_at_percentile.
    -P  For value_at_percentile, the percentile value.
    -S  For computing on fixed-size data, the method used to advance the chunks of data. Valid values are simple, step, and rolling; default is simple. Valid only when a statistical method is specified. simple and step are available for all statistical methods; rolling is valid only for mean.

Regards adai
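When driving this command from a script, it helps to assemble the argv programmatically rather than splicing strings (quoting the timestamp arguments is easy to get wrong). A sketch using only the options documented above; the object and counter names in the example are hypothetical:

```python
def build_retrieve_cmd(view=None, objects=(), counters=(),
                       start=None, end=None, sample_rate=None,
                       output_format=None):
    """Assemble a `dfm perf data retrieve` argv from the documented options."""
    cmd = ["dfm", "perf", "data", "retrieve"]
    if view:
        cmd += ["-V", view]
    for obj in objects:
        cmd += ["-o", obj]          # object instance(s)
    for counter in counters:
        cmd += ["-C", counter]      # object-name:counter-name
    if start:
        cmd += ["-b", start]        # "yyyy-mm-dd hh:mm:ss"
    if end:
        cmd += ["-e", end]
    if sample_rate is not None:
        cmd += ["-s", str(sample_rate)]
    if output_format:
        cmd += ["-x", output_format]  # Legacy or TimeIndexed
    return cmd

# Hypothetical example: cpu_busy for one system over a day, time-indexed.
print(" ".join(build_retrieve_cmd(
    objects=["filer1"],
    counters=["system:cpu_busy"],
    start="2012-01-01 00:00:00",
    end="2012-01-02 00:00:00",
    output_format="TimeIndexed",
)))
```

The resulting list can be handed to subprocess.call (or wrapped in shlex quoting for a shell) on the DFM server.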
Hi Mark, I recommend upgrading to 5.0.2, which has some critical security-related bug fixes, and opening a case with NetApp Support on the same. Regards adai
There is no specific intent on this. The only constraint today is that in PM we cannot create a custom topology other than what it provides out of the box. The only other way of getting this custom topology is to construct it using the existing policies. As I said earlier you can have:

Option 1: DatasetA created using the DR Mirror protection policy for the primary. DatasetB created using the Backup then Mirror protection policy for the same primary.

Option 2: DatasetA created using the DR Mirror and Backup protection policy for the primary. DatasetB created using the Mirror policy from the SnapVault destination.

"Is the design intent of two separate policies to decouple the DR policy from any backups?" Can you elaborate on what you mean by "any backups"? I am not quite clear on this.

"Does this enable a specific behavior that I should show my customer?" There is no difference I can see, as irrespective of how we build the topology we would still have a VSM base and an SV base snapshot on the primary. Regards adai
Hi Cecil, As of today OnCommand Unified Manager doesn't do this. Can you explain a little more about it, e.g. why it's better and why it's needed? Regards adai
Hi Mauro, What is the value of the dpReaperCleanupMode option? From the output I can see that this relationship is not being managed in a dataset, so dpReaperCleanupMode doesn't have anything to do here either. To me it looks more like a bug. Can you see what is being logged in the svmon.log files under the log directory of your DFM installation? On a side note, I strongly recommend upgrading to 5.0.2. Regards adai
Hi Stephen, All you have to do to import an existing Volume SnapMirror relationship is the following:
1. Create a dataset.
2. Apply the Mirror protection policy.
3. Go to the External Relationships page and select the VSM relationships you want to import.
4. Click Import and select the dataset created above, which has the Mirror policy.
If the relationships are QSM/SV, you will have to use a dataset with the Backup protection policy and not the Mirror protection policy. Regards adai
Thanks Niels, I wasn't aware that we have AV-related counters under diag mode in our counter manager, using which we could monitor them. I was always thinking of it in terms of Ops-Mgr. I will add this to my arsenal. Regards adai
Hi Marcinal, This option only purges events that no longer affect the status of any object being monitored in DFM. So if an event affects the status of an object such as a volume or qtree, it is not purged even if it is older than the specified purge interval. Regards adai
Hi Earls, Theoretically we don't have a hard limit. But the number of datasets increases the load on conformance checking, protection status, and job scheduling. As you already know, there is a limit to the number of concurrent scheduled jobs. On Linux, the number of jobs is also bounded by the number of semaphore kernel threads available in the host OS. A higher number of datasets also slows down UI response times when NMC is used, due to the associated status loading while listing the datasets. Is there a reason why you want to create more than 400 datasets? How many relationships are you trying to manage? Regards adai
Hi Vladimir, All of the operations that can be done via NMC can also be done from the CLI. dfm perf view create is the CLI to create custom views.

[root@vmlnx ~]# dfm perf view create help
NAME
    create -- create a new view
SYNOPSIS
    dfm perf view create [ -o <perf-object-type> ] [ -S ] <view-name>
DESCRIPTION
    Create a new performance view. A performance object type can be associated with the view by specifying the '-o <perf-object-type>' option. Use -S to create a view with an events block.

[root@vmlnx ~]# dfm perf help will give you the list of all CLIs associated with Performance Advisor. Regards adai
Hi, Is there a reason why you installed 4.0.2 now, when 5.0.2 is already available and is far better than 4.0.2 in terms of features and scalability? In 5.0.2 your core license enables all features, such as protection and provisioning, for up to 250 nodes. If for some reason you still have to stay on 4.0.2, please use the master license key to get all these features. http://support.netapp.com/NOW/knowledge/docs/olio/guides/master_lickey/ Regards adai
What version of DFM are you running? Are you on 5.0? If so, you won't see that, as starting with OnCommand 5.0 there is only one license, the core license, and it enables all features, such as protection and provisioning, for up to 250 nodes. Regards adai
Hi Richard, As you already figured out, Protection Manager only supports three kinds of topology:
Simple Backup/Mirror -- two-node topology.
Backup then Mirror / Mirror then Backup / Chain of Mirrors -- three-node topology, or cascade.
Backup and Mirror -- three-node topology, or fan-out.
There is no single policy matching the topology you have depicted, but it is a popular topology that many customers implement. Some implement the DR mirror using MetroCluster and use the Protection Manager Backup then Mirror policy to achieve it. The easiest way to achieve this topology is using datasets with two of the supported protection policies in PM: DatasetA created using the DR Mirror protection policy for the primary, and DatasetB created using the Backup then Mirror protection policy for the same primary. Regards adai