BTW, I strongly recommend using OC 5.0.1, as it is the GA candidate of OC 5.0. Regards adai
Note: this does not solve your problem of SnapLock being unsupported. Also, my initial hunch of the filer being overloaded is not the cause here; it is more the SnapLock and dedupe combination than load. Regards adai
Hi Shingo-san, If you notice, in DFM/OC the capacity utilization of volumes/aggregates is collected once every 30 minutes by default by the dfmon monitor. But our history graphs have a consolidation mechanism for each history table (week, month, etc.). This section of the doc explains how history data is accumulated in the database for monitors whose interval is more than 15 minutes (the category we fall under here): CASE 3: Monitoring interval more than 15 mins. You can confirm the current interval from the CLI, as in the sketch below.
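A quick check (a minimal sketch; I am assuming the option is named dfMonInterval on your version, so verify against the full option list, and use findstr instead of grep on a Windows DFM server):

dfm option list | grep -i MonInterval
dfm option list dfMonInterval

The first command lists all monitor intervals; the second shows just the capacity (df) monitor interval, which is 30 minutes by default.
Regards adai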
SnapVault of SnapLock-enabled volumes is not supported in Protection Manager. You are trying to run an unsupported configuration. Regards adai
Hi Pedro, First let me tell you how to make this work, then explain it. Set this option to OFF: interface.blocked.mgmt_data_traffic off. This means we are allowing data traffic through e0m. That way DFM is happy and the NDMP status will be shown as up and good. Since DFM always talks to one interface, it checks for the NDMP service on that same interface, so to keep it happy we set the option to OFF, which means allow data traffic. But in the background we specify options ndmpd.preferred_interface vif_nas-48, which sits on an interface other than e0m. This way the data traffic does not go through e0m but through the vif. Note that for SV you will have to use ndmpd.preferred_interface on the controller. For QSM and VSM you will have to use dfm host set <hostid> hostPreferredAddr1=<IP address of interface other than e0m> on both your source and destination filer. PM will create a connection entry for VSM and QSM in the snapmirror.conf of the filer and route the traffic through those interfaces instead of e0m. A consolidated sketch is below. Hope this helps and solves your issue too.
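Putting the steps together (a sketch based on the commands above; vif_nas-48 and the placeholders in angle brackets are examples, so substitute your own interface names and host IDs):

On the controller:
options interface.blocked.mgmt_data_traffic off
options ndmpd.preferred_interface vif_nas-48

On the DFM/OC server, for QSM and VSM, against both the source and destination filer entries:
dfm host set <source-filer-id> hostPreferredAddr1=<data-interface-IP>
dfm host set <destination-filer-id> hostPreferredAddr1=<data-interface-IP>

Regards adai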
I think the culprit here is SnapLock. Protection Manager does not support SnapVault of volumes which are SnapLock enabled. Is this happening on other volumes which are dedupe enabled but not SnapLock enabled as well? If so, we will have to dig deeper. Regards adai
This basically means that your controller is overloaded. Pls take a look at the public report below. http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=420257 There is a workaround in the NOTES section, which I am copying here. DFM 4.0D9 provides a workaround for ONTAP burt 354566. When the default of 3 retries and 60 seconds between retries in the current DFM code is not enough, use these two hidden options in DFM 4.0D9 to set the number of retries and the time between retries allowed for getting the creation time of the snapshot. Set the hidden options from the DFM command line: dfm option set dpMaxSnapshotListingRetries=<retries> dfm option set dpSnapshotListingRetryInterval=<time period between retries> But there is no guarantee that this will solve the problem; it will only make DFM/Protection Manager wait longer before it times out with the error you posted. A worked example is below.
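For example (illustrative values only; given the defaults of 3 retries and 60 seconds, this doubles both):

dfm option set dpMaxSnapshotListingRetries=6
dfm option set dpSnapshotListingRetryInterval=120

Regards adai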
Yes, there is a known problem in some ONTAP versions with NDMP kernel thread leaks. Below is the link to the NOW public report. http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=354933 I would like to know your ONTAP version before I can confirm this bug applies.
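To check quickly on the controller (standard 7-mode commands; the second is just to see whether NDMP sessions are accumulating):

version
ndmpd status

Regards adai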
Hi Shingo, Pls take a look at this doc, which explains how history data is stored. To answer your question, the sampling is fixed for each history table and can't be changed. Storage Capacity Management using OnCommand Operations Manager. Below is the snippet of info which is of interest to you.
Purging of Older Samples from History Tables: To keep the database size under control, samples from each of the history tables are purged when they get old. A maximum of 150 samples are kept in each sample history table for one storage object, which translates into:
• 37.5 hours in the daily sample table
• 12.5 days in the weekly sample table
• 50 days in the monthly sample table
• 5 months in the quarterly sample table
• Samples in the yearly sample table are never purged.
(Dividing those spans by 150 samples works out to roughly one consolidated sample every 15 minutes, 2 hours, 8 hours, and 1 day respectively.)
The Operations Manager UI does not provide graphs that span longer than a year; the "dfm graph" CLI can be used to get older data from the yearly sample table, as in the sketch below.
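For example (a sketch; the graph name volume-usage-vs-total-1y is an assumption on my part, so list the graph names available on your version with dfm help graph):

dfm graph volume-usage-vs-total-1y <volume-name-or-id>

Regards adai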
Snapshot space can only be used for taking snapshots; it cannot be used for user data writes or for a SnapVault update. You will need free data space in your volume for the SnapVault update to succeed.
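To check and, if appropriate, free up data space on the volume (standard 7-mode commands; the volume name and the 10% reserve are only examples, so pick values that fit your snapshot retention):

df -h <volume-name>
snap reserve <volume-name>
snap reserve <volume-name> 10

The first command shows data and .snapshot usage separately, the second shows the current snapshot reserve, and the third lowers the reserve to leave more room for data writes. Growing the volume with vol size is of course the other option.
Regards adai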
Hi Markus, Did you try the default roles? Below is the global provisioning role.

[root@vmlnx ~]# dfm role list -x GlobalProvisioning
Role-name: GlobalProvisioning
Role-id: 27
Description: Provisioning of Datasets
Inherited Roles: GlobalRead GlobalDataSet GlobalResourceControl
Capabilities:
Res Id  Resource Name  Operation
------- -------------- ------------------------------------
0       Global         DFM.Database.Read
0       Global         DFM.BackupManager.Read
0       Global         DFM.Mirror.Read
0       Global         DFM.Event.Read
0       Global         DFM.ConfigManagement.Read
0       Global         DFM.Policy.Read
0       Global         DFM.Core.AccessCheck
0       Global         DFM.Schedule.Read
0       Global         DFM.Report.Read
0       Global         DFM.Alarm.Read
0       Global         DFM.PerfThreshTemplate.Read
0       Global         DFM.StorageService.Read
0       Global         DFM.ApplicationPolicy.Read
0       Global         DFM.DataSet.Write
0       Global         DFM.DataSet.Create
0       Global         DFM.DataSet.Delete
0       Global         DFM.Resource.Control
0       Global         DFM.ResourcePool.Provision
[root@vmlnx ~]#

Global Restore.

[root@vmlnx ~]# dfm role list -x GlobalRestore
Role-name: GlobalRestore
Role-id: 7
Description: Perform restore operations from backups
Inherited Roles: None
Capabilities:
Res Id  Resource Name  Operation
------- -------------- ------------------------------------
0       Global         DFM.BackupManager.Restore
0       Global         DFM.BackupManager.RestoreFromSecondary
0       Global         DFM.BackupManager.Read
[root@vmlnx ~]#
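If these defaults cover what you need, they can be assigned to an administrator directly (a hedged sketch: I am assuming the dfm user role add subcommand here and markus is just a placeholder account name, so check dfm help user for the exact syntax on your version):

dfm user role add markus GlobalProvisioning
dfm user role add markus GlobalRestore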
Hi Nd, Pls take a look at this thread: How to Remove TimeZone in SnapShot Name. The timestamp attribute in the snapshot name is mandatory. Regards adai
Hi Joyce, It looks like there is no way in the current product to make them listen on fixed ports. For a detailed list of ports used by DFM, pls take a look at the FAQ link below. https://library.netapp.com/ecmdocs/ECMM1278650/html/faq/index.shtml#_3.14 Regards adai
I thought NDMP goes over Ethernet. If you just want to know the HBA port throughput, use the following graph: fc-bytes (bytes read from and written to the FC network per second).
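For example, from the DFM server (a sketch; fc-bytes is the graph name quoted above, and <filer-name-or-id> is the controller you are interested in):

dfm graph fc-bytes <filer-name-or-id>

Regards adai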
Hi Mauro, What version of ONTAP is the controller running? This error message is coming directly from the controller. It looks like your controller has run out of NDMP kernel threads. A reboot of the controller should clear this issue.
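Before rebooting, you can check how many NDMP sessions are open and terminate them (standard 7-mode commands; note this may not free leaked kernel threads, in which case the reboot is still needed):

ndmpd status
ndmpd killall

Regards adai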
Hi Thomas, I tried the same and I am able to do it without this error in the case of vFiler volumes and snapshots. Below is my snapshot listing.

fas-sim-1*> vfiler run vFiler_Src snap list vfilerSnapshotName
===== vFiler_Src
Volume vfilerSnapshotName
working...
  %/used       %/total      date          name
----------  ----------  ------------  --------
 1% ( 1%)    0% ( 0%)   Jun 03 14:02  dfpm_base(vfilerSnapshotName.474)conn1.0 (snapvault,acs)
 2% ( 1%)    0% ( 0%)   Jun 03 14:02  2012-06-09_0309+0530_daily_vfilerSnapshotName_vFiler_Src_vfilerSnapshotName_.-.wfa_treeq
 3% ( 1%)    1% ( 0%)   Jun 03 13:53  2012-06-09_0300+0530_hourly_vfilerSnapshotName_vFiler_Src_vfilerSnapshotName_.-.wfa_treeq
fas-sim-1*>

What's interesting in your snapshot name is that it doesn't have a '+': 2012-06-06_2000 0200_daily. I tried removing the '+' from my snapshot name in the same way (2012-06-09_0309 0530_daily_vfilerSnapshotName_vFiler_Src_vfilerSnapshotName_.-.wfa_treeq) and was able to hit the error you mentioned. Regards adai
I think in your case you have DFM installed in a non-default location, i.e. not "C:/Program Files/NetApp/DataFabric Manager/DFM".
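You can confirm the actual install location from the CLI; dfm about prints the version along with the installation and data directories, among other details:

dfm about

Regards adai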
Hi Thomas, Can you paste a screenshot? I am a little confused about what you are referring to as Operations Manager. Is it the web UI, or the Java thick client NMC? Regards adai
Hi Pedro, You will have to do this: options interface.blocked.mgmt_data_traffic YES. This way the NDMP status will be up, but data traffic will not be routed over e0m, since we set the preferred interface for SV/QSM/VSM. For SV relationships, set the following option on the filer: options ndmpd.preferred_interface <interface_name>. For QSM and VSM, set hostPreferredAddr1 on the source and destination filer to the data mgmt IP address, using the following CLI: dfm host set <filer-id> hostPreferredAddr1=<data_mgmt_ip_address>. BTW, for QSM and VSM we will create a connection entry with this IP address on the source and destination, so that DFM/OC uses e0m for all SNMP and API traffic for monitoring, and uses the data mgmt IP address you specified to run the backup/mirror; an illustration of such an entry is below.
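As an illustration of the connection entry PM writes into /etc/snapmirror.conf (a sketch of the 7-mode connection syntax; the name conn1, the addresses, and the volume names are made up, and PM generates and maintains these lines itself, so there is no need to edit them by hand):

# connection entry: source data IP, destination data IP
conn1=multi(10.10.20.11,10.10.20.12)
# the relationship line then references the connection name instead of a hostname
conn1:src_vol dst_filer:dst_vol - - - - -

Regards adai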
Hi Rick, I don't see an obvious correlation between the things you are describing. Are you talking about trap alerts or normal event alerts? Regards adai
Hi, In a true sense it's not protecting any data: if the aggr hosting the source and destination volumes fails, all data is lost. It also gives the customer a false impression, so we prevent SV relationships within the same aggr to avoid suggesting that their data is protected from an aggr failure. But one can argue as follows: "True from a physical perspective, but a separate copy also protects from logical corruption, so there is value in this, albeit not optimal. The customer's view is that they have limited resources and would rather have some protection than none at all; also, they are able to set this up from the CLI, so they would like our management tools not to place restrictions on how they use our products." BTW, why do you want to do this? Regards adai
Hi Keith, In the case of DFM/OnCommand, historical capacity data is stored in the database and displayed via graphs. More details on historical data and graphs are available in the link below: Storage Capacity Management using OnCommand Operations Manager. Regards adai
Hi Chris, Following are the details on each field.
Space Saving: Actual space saved due to dedupe; displays the savings achieved through deduplication.
Physical Used: Displays the active file system data of all the deduplication-enabled volumes in the aggregate.
Effective Used: Displays the active file system data of all the deduplicated volumes in the aggregate without deduplication space savings.
Total Deduped Space: Not documented at all. Looking at the example, this is what I assume: the sum of the total capacity of all aggregates where there is at least one volume with dedupe enabled.
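Tying the first three together (a worked example with made-up numbers, just applying the definitions above): if the dedupe-enabled volumes in an aggregate hold 800 GB of active file system data after deduplication and deduplication has saved 200 GB, then Physical Used = 800 GB, Space Saving = 200 GB, and Effective Used = 800 GB + 200 GB = 1000 GB, i.e. what the same data would occupy without dedupe.
Regards adai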