Can you get the event details and the output of the following command for the dataset on which this event was generated?

dfpm dataset list -R <dataset-name-or-id>

Regards adai
Hi Reid,

The option pmOSSVDirSecondaryVolSizeMb is used by PM only to estimate how much space would be required for the OSSV backup, so that PM can check whether the RP contains that much free space. The dpDynamicSecondarySizing option is not applicable to datasets containing OSSV. But if you apply a secondary provisioning policy to the dataset, the volume is provisioned to the dedupe limits of the platform and ONTAP version, not to the size of the containing aggregate, unless the dedupe limit is larger than the containing aggregate's size. Below is the conformance message for a dedupe case.

Conformance Results
=== SEVERITY ===
Information: Provision a new flexible volume of 20.0 GB from aggregate 'f2020:aggr0' (6868).
=== ACTION ===
Provision flexible volume (backup secondary) of size 20.0 GB
=== SEVERITY ===
Information: Enable deduplication on flexible volume 'VolToBeProvision:test_lnx' (11)
=== ACTION ===
Enable deduplication on flexible volume.
=== SEVERITY ===
Information: Create backup relationship(s) between 'lnx:/apitest' and new volume to be provisioned from resource pool(s) 'pri_103' (7098).
=== ACTION ===
Create backup relationship(s) for dataset 'test_lnx' (19135) on connection 1.

Below is my aggr size.

Aggregate          total    used    avail    capacity
aggr0              454GB    20GB    433GB    5%
aggr0/.snapshot    23GB     0GB     23GB     0%
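If you want to check or adjust the estimate PM uses, the option can be read and set from the dfm CLI. A minimal sketch (the value 20480 MB, i.e. 20 GB, is only an illustration; pick what is right for your environment):

dfm option list pmOSSVDirSecondaryVolSizeMb
dfm option set pmOSSVDirSecondaryVolSizeMb=20480

Regards adai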
Can you just paste the fields of your custom report below so that we can see what it is? Use the below command to list the fields in your custom report.

dfm report list -C
ID    Report Name    Description
Hi Avi,

dfm only identifies the file types that are listed by the below CLI.

dfm srm filetype list

Go to your servers and do an ls -l in a Linux/Solaris environment or a dir in a Windows environment to find out the file types present. Now do a diff between the dir or ls -l output and the output of dfm srm filetype list, and add each file type that exists on the server where you are doing the SRM file walk to dfm using the below CLI.

dfm srm filetype add
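For example, to add a file type dfm does not yet recognize (.mp4 here is purely an illustration, not a default type):

dfm srm filetype add .mp4

Regards adai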
Hi Rich,

You can also use the CLI, which does the same thing as the UI that Ehrhart mentioned.

dfm host password set help

NAME
    set -- modifies the password of a local user on a storage system/vFiler.

SYNOPSIS
    dfm host password set [ -u ... ]

DESCRIPTION
    -n: if specified, the password will be modified only on direct members of the group.
    -t: type of appliances whose password needs to be changed. The valid values for the -t type option are 'filer' and 'vFiler'.
    -R: maximum number of retries.
    -u: local user name on the host. Applicable only for storage systems and vFilers.
    -o: old password of the local user on the host. Applicable only for storage systems and vFilers. This option is mandatory in the following cases:
        1) For storage systems running Data ONTAP versions less than 7.0.
        2) For all vFilers.
        3) For storage systems without login set.
        4) For users not having the DFM.Console.Execute capability.
        When specified, the old password is always considered.
    -p: specifies the new password.

dfm host password save help

NAME
    save -- update the user name and password for one or more appliances in the DataFabric Manager server.

SYNOPSIS
    dfm host password save -u ...

DESCRIPTION
    -n: if specified, the password will be modified only on direct members of the group.
    -t: type of appliances whose password needs to be changed. The valid values for the -t type option are 'filer' and 'vFiler'.
    -u: specifies the user name.
    -p: specifies the new password.
#
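A usage sketch (filer1 and the password values are placeholders, and the positional arguments are truncated in the synopsis above, so verify the exact form against the man page):

dfm host password set -u root -o OldPass123 -p NewPass456 filer1

Regards adai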
Hi Reid,

Use the CLI command below.

dfbm primary host delete help

NAME
    delete -- delete a managed primary host

SYNOPSIS
    dfbm primary host delete ...
#
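For example (the host name is hypothetical):

dfbm primary host delete oldprimary.example.com

Regards adai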
Hi Bipul,

I think his question is: is there a way to delete the data already collected for volume TEST1 so that PA starts collecting for volume test1?

Regards adai
leroy wrote: Thank you for the reply. The responses helped put me in the right direction, but I would like some clarification on some of your points.

"Yes. Operations Manager supports creation of reports that have performance fields whose data come from Perf Advisor. Below is the list of some canned reports that are available."

I found the ability to export the canned graphs, but can Operations Manager (or Performance Advisor) export the graph image outside of the canned views? The graphs can be saved as images via the NMC, but the customer would like to export multiple graphs on a weekly basis via the command line.

There is no way in the CLI to save a view/graph image from NMC.

"Yes, use the below CLI to do the same.

dfm perf view retrieve [ -a <appliance-name-or-id> | -g <group-name-or-id> ] [ -o <perf-object> ] [ -i <perf-instance> ] <view-name> <starttime> <endtime>"

Will this allow me to export the image file (similar to dfm graph) for data collected via Performance Advisor?

No. This will help you retrieve the data presented in the view, but not as an image file.
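For reference, a hedged example invocation of the retrieve command (the appliance name, view name, and timestamp format below are placeholders; check the dfm perf man page for the exact time syntax):

dfm perf view retrieve -a filer1 "Volume Latency" "2010-01-01 00:00:00" "2010-01-08 00:00:00"

Regards adai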
Hi Mike,

I am not clear on this question. Can you give an example? Are you saying you want to make the default views into custom views? If so, why? What is it that you are trying to achieve?

Regards adai
What application are you running? Is it over iSCSI or FCP? What is your disk type and RPM: FC or SATA, 10K or 15K? With PAM cards or without? Are your application users experiencing delayed responses? What are the IOPS?

Regards adai
1) Is there a way to export graphs on counters not already graphed in Operations Manager (specifically NFSv3 latency) over a one-week period?

Yes. Operations Manager supports creation of reports that have performance fields whose data come from Perf Advisor. Below is the list of some canned reports that are available.

# dfm report list | grep -i perf
storage-system-performance-summary        performance summary of storage system
storage-system-NAS-performance-summary    NAS performance summary of storage system
storage-system-SAN-performance-summary    SAN performance summary of storage systems
aggregates-performance-summary            performance summary of aggregate
volumes-performance-summary               performance summary of volume
volumes-NAS-performance-summary           NAS performance summary of volume
volumes-SAN-performance-summary           SAN performance summary of volume
qtrees-performance-summary                performance summary of Qtree
luns-performance-summary                  performance summary of LUN
disks-performance-summary                 performance summary of Disks
array-luns-performance-summary            performance summary of array LUNs
events-perf                               show current performance events
events-history-perf                       show all historic performance events
vfiler-performance-summary                performance summary of vFilers
#

These can be viewed aggregated by day, week, month, etc. To get a week's view, use the CLI as below.

dfm report view -P 1w storage-system-performance-summary

The custom reports have catalogs specific to performance, which can be viewed using the dfm report catalog list command. For example, dfm report catalog list -A volume will show all fields, including performance ones, while dfm report catalog list -P volume will show only performance fields.

2) Is there a way to export graphs in Performance Advisor? I would like to export the graph rather than the raw data.

Yes, use the below CLI to do the same.

dfm perf view retrieve [ -a <appliance-name-or-id> | -g <group-name-or-id> ] [ -o <perf-object> ] [ -i <perf-instance> ] <view-name> <starttime> <endtime>

3) What export options are available in the Performance Advisor CLI (dfm perf) to export the raw data so I can run a post-process in Perl to provide the customer an end-to-end solution?

You can either use the below CLI to export the data (see the example after this post):

dfm perf data export [ -s <start-time> ] [ -e <end-time> ] [ -m <maximum-file-size> ] [ <object>[=<export-file-directory>] ... ]

or take a look at the below TR, which gives details about exporting PA data: Access to DataFabric Manager and Performance Advisor Data Using Database Access and Data Export.

As an additional question: with earlier versions of DFM there was an "Advanced Guide" that provided examples of the CLI functionality. Does this exist with DFM 4.0?

Use the man pages, which give complete details about each CLI and its options. The man pages can be accessed from the Web UI of Operations Manager: Control Center -> Help -> General Help -> Man Pages.
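A hedged example of the raw-data export mentioned under question 3 (the timestamps, object name, and directory are placeholders; verify the exact time syntax with the man page):

dfm perf data export -s "2010-01-01 00:00:00" -e "2010-01-08 00:00:00" filer1=/tmp/pa-export

Regards adai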
If these volumes are part of any dataset, you can use the secondary space management feature of DFM 4.0 to migrate a SnapMirror destination volume to a new filer or another aggregate. This process is automated, and the incoming and outgoing relationships of the migrated volumes are taken care of.

Regards adai
Looks like your aggr1 doesn't have enough space to guarantee 10g to your volume. Either add more disks to your simulator, and in turn to your aggr, or create a thin-provisioned volume:

vol create flexvol1 -s none aggr1 10g
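To confirm the free space first, and to grow the aggregate if you go the extra-disk route, something like the below should work on the filer (the disk count of 2 is just an illustration):

df -A aggr1
aggr add aggr1 2

Regards adai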
Hi,

I don't fully understand your problem, but this is how you can get to the role:

dfm role operation list DFM.Database.Read
Operation Name    Synopsis
Hi Chris,

The inodeFull and inodeNearlyFull events are generated using the volFull and volNearlyFull threshold values; they don't have threshold values of their own. Please add your customer or case to burt 226960.
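Since the inode events piggyback on the volume thresholds, tuning them means adjusting the volume thresholds themselves. A sketch, assuming the global option names are volFullThreshold and volNearlyFullThreshold (verify with dfm option list):

dfm option list volFullThreshold
dfm option set volFullThreshold=90
dfm option set volNearlyFullThreshold=85

Regards adai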
Hi Chris,

Looks like DFM did not receive the notification from the filer on the job's completion. Was there any disruption in the network or on the filer while this job was running? Can you run ndmpd status on the filer and see if anything is still active?

Regards adai
Hi Avi,

The following CLI gives the list of file types that are recognized by dfm:

dfm srm filetype list

Do an ls -l or a dir to find the list of file types in your environment, and find out the diff. If there are any new file types available in your environment but not listed by the above CLI, you can add those file types using the below CLI. Below is an example of the same, where I have added my name as a file type.

dfm srm filetype add .adai
The FileType: .adai has been added successfully.
#

Regards adai
Hi,

I think your host HQFiler1 is ignored. Go to the NetApp Management Console (NMC) -> Data -> Unprotected Data -> Resources, then select the column "Ignored", click on the down arrow, and select the option "All". See if your host is ignored as shown in the attached screenshot; if so, unignore it and run conformance to confirm that you don't get the error you mentioned.

Regards adai
Hi Mike,

As Harish said, there is no CLI, but this can be done through the API using the DFM SDK. Below is the API for doing the same.

<perf-set-default-view>
    <object-name-or-id>784</object-name-or-id>
    <view-name>Top Aggregates</view-name>
</perf-set-default-view>

Let me know if this helps or if you need more help.
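If you want to try the API quickly, the apitest utility that ships with the NetApp Manageability SDK can invoke it from the command line. This is only a sketch: the server name and credentials are placeholders, and the exact apitest flags vary by SDK version, so check the SDK documentation:

apitest -t dfm dfmserver admin password perf-set-default-view object-name-or-id 784 view-name "Top Aggregates"

Regards adai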
Go to the DFM server and execute the following CLI for your filers:

dfm host diag <filername-or-id>

At the end of the output you will find something like this:

Performance Advisor Checklist
perfAdvisorEnabled      Passed
hostType                Passed
hostRevision            Passed
hostLogin               Passed
perfAdvisorTransport    Failed (perfAdvisorTransport set to httpOnly, but host uses https)

Go to NMC -> Setup -> Host -> Edit and change the "perfAdvisorTransport" to whatever the host uses, in this case https. After that you will see the status in the data collection column going green; it takes about 15 minutes to start showing data when collecting for the first time.
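The same setting can likely also be changed from the dfm CLI; this is a sketch, assuming perfAdvisorTransport is settable as a host property and that httpsOnly is a valid value (verify with dfm host get <filername-or-id>):

dfm host set <filername-or-id> perfAdvisorTransport=httpsOnly

Regards adai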
The database schema for the views and their relationships is documented in the online help, which can be accessed from the Web UI as follows: Control Center -> Help -> General Help.

Regards adai