Are you trying to do SnapMirror or SnapVault? BTW, what do you mean by DS14? Mirroring or vaulting between volumes within the same controller or across controllers is supported. Regards adai
Did you see the protection status as "Baseline Failure" after import? If yes, then this is a known issue. When you import the relationship into a new dataset, it will show an error status of "Baseline Failure". Simply run an on-demand backup job and it will clear this error. Note: The backup job doesn't perform a re-baseline. It simply does a SnapVault/SnapMirror update, as the case may be. Regards adai
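PS: For example, the on-demand backup can be kicked off from the CLI (a minimal sketch; the dataset name is hypothetical):

# Run an on-demand backup of the imported dataset to clear the status
dfpm backup start mydataset
# Check that the job completed
dfpm job list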
If the volumes are empty, it's better to size the mirror destination volumes smaller, as most of the resources are spent in scanning. It's better to size them appropriately when the volume's used size is small, say 1/10 of the aggr size. Regards adai
Hi Earls, here is what you need to do in Protection Manager.

Step 1: Prevent PM's reaper from cleaning up any relationship.
Set the following option before doing the steps below, and reset it back to orphans once done.
dfm options set dpReaperCleanupMode=Never

Step 2: Relinquish the primary member and the secondary member.
· Use dfpm dataset relinquish or the NMC Edit Dataset wizard.

Step 3: Discover them as external relationships.
· You must see the relationship as external in the External Relationships tab. If you don't see it, close and re-login to NMC.

Step 4: Import into a new dataset.
· Create a new dataset with the required policy and schedule, or choose the dataset into which you want to import this relationship.
· Use the Import wizard to import them.

Step 5: Reset the reaper option.
dfm options set dpReaperCleanupMode=orphans

Points to take care of:
1. If an entire OSSV host was added as a primary member and is now moved to a new dataset, step 2 (relinquishing the primary member) needs to be done for each dir/mount path of the OSSV host.
2. After importing, the dynamic referencing of the OSSV host is lost, since we import each individual relationship.
3. So when a new dir/mount path is added to the OSSV host, the admin has to manually add it to the dataset.
4. To restore from old backup versions, the user must go back to the old dataset, as they are not moved over.

A CLI sketch of the whole sequence follows below. Regards adai
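PS: Putting the CLI pieces of the steps above together (a sketch; the dataset and member names are examples, and the relinquish arguments may differ slightly by DFM version):

# Step 1: stop the reaper from cleaning up relinquished relationships
dfm options set dpReaperCleanupMode=Never
# Step 2: relinquish the primary and secondary members (names are examples)
dfpm dataset relinquish olddataset filer1:/vol/srcvol/qtree1
# Steps 3 and 4: confirm the relationship shows up as external in NMC,
# then import it into the new dataset with the Import wizard
# Step 5: restore the reaper behavior
dfm options set dpReaperCleanupMode=orphans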
Abishek, I have to totally contradict you. The CPU stats are collected in OM every 5 mins by default.

[root@lnx ~]# dfm options list | grep -i cpu | grep -v clien
cpuBusyThresholdInterval   15 minutes
cpuMonInterval             5 minutes
cpuTooBusyThreshold        95
[root@lnx ~]#

What I have shown are the global options; they can also be customized per filer using the dfm CLI (except for the monitoring interval).

[root@lnx ~]# dfm host set -q
Valid options are:
(output stripped for the sake of brevity)
cpuTooBusyThreshold        Host CPU Too Busy Threshold (%)
cpuBusyThresholdInterval   Host CPU Busy Threshold Interval

So OM collects stats every 5 mins, but the event for cpuTooBusyThreshold is generated only if the value stays above the threshold for the time interval specified in cpuBusyThresholdInterval. For example, with cpuTooBusyThreshold=95 and cpuMonInterval=5 min (the default), if at a sampling time the CPU crosses 95, the event is generated only if it stays there for 15 mins (the next two samples, since a new sample is collected every 5 mins); otherwise it is not. So cpuBusyThresholdInterval basically eliminates alerts generated on a spike; an event fires only when the condition persists for a longer time.

BTW, none of the options are hardcoded. All are customizable using the dfm CLI, both at the global and host level, except for cpuMonInterval, which applies only at the global level. Hope this helps.

The reason for the flattening is the consolidation, which is explained in my other post. Regards adai
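PS: For instance, to override the threshold on a single controller (the hostname and value here are examples):

# Host-level override
dfm host set filer1 cpuTooBusyThreshold=90
# Global default, for comparison
dfm options set cpuTooBusyThreshold=95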
Can you give the output of the entire job list, and of dfpm job detail <jobid> for the job that threw this error? Also, what version of OSSV are you using? Was the relationship created using PM, or was it imported? Regards adai
The PA counters match the ASUP counters because they both get the data from the ONTAP Counter Manager. Also, those are per individual CPU, whereas the DFM CPU data is for all CPUs on the controller. The CPU graphs in OM are also consolidated, depending upon the graph you are looking at. For each database table, the Operations Manager server saves sample values for periods of the following duration:
Each daily history sample covers 15 minutes.
Each weekly history sample covers two hours.
Each monthly history sample covers eight hours.
Each quarterly history sample covers one day.
Each yearly history sample covers four days.
Regards adai
As you rightly said, the global system status comes from the OIDs you mentioned, directly from the filer. DFM thresholds apply to the disk usage values collected by DFM and stored in the DFM database (Sybase). So there is no connection between the two; they are independent values. BTW, the global system-status is not just volumes going over 98% usage; there is more to it. Regards adai
Hi Kishore, Can you explain it with examples? I feel it's a bug. Say I have ops/sec on the Y-axis, with values varying from 0 to 1000. By default auto-scale is enabled, which shows the Y scale from 0 to 1000. What will happen if I uncheck it? Regards adai
You will have to move the relationships out of the dataset, then do the steps you mentioned on the filer using the CLI. Once all the relationships are done (including the restart of the SV relationships), import them from the External Relationships tab. To date there is no way in the product to move primary volumes that are managed in PM in a seamless way without modifying the dataset. Regards adai
You must add the aggr to a resource pool and attach it to the node of the dataset to which the volume you plan to migrate belongs. For example, if you have a dataset with a backup policy, you must add the aggr to a resource pool and attach it to the backup node of the dataset. Secondary Space Management migrates individual volumes that are part of a dataset. The conditions that need to be met for a volume to be migration capable are the following. SSM doesn't migrate:
· The root volume of a filer or vfiler (not migration capable).
· Volumes with client-facing protocols like CIFS, NFS, iSCSI, FCP.
· Volumes which are parents of FlexClones.
· Volumes which have unmanaged relationships.
If there are client-facing protocols, you must remove them; then you will be able to migrate using SSM, but they stay removed for the entire duration of the initial baseline from the old destination to the new destination. After the initial baseline SSM will modify the relationships, so you don't have to rebaseline the downstream relationships and also won't lose any already registered backups. A CLI sketch of the resource pool setup is below. Regards adai
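PS: A rough CLI sequence for the resource pool setup (a sketch only; the pool, filer, aggr, and dataset names are examples, and the exact dfpm respool / dfpm dataset arguments vary by DFM version, so verify them against the built-in help):

# Create a resource pool and add the aggregate to it (names are examples)
dfpm respool create rp_backup
dfpm respool add rp_backup filer1:aggr1
# Attach the pool to the backup node of the dataset
# (this can also be done in the NMC Edit Dataset wizard)
dfpm dataset respool add -N Backup mydataset rp_backup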
Thanks Hari, for confirming on the views part. I tried, but it returned no data, whereas the reports that I pasted do have the data for the deleted ones. Regards adai
BTW, is there a reason for looking at deleted aggr capacity? I created an aggr report with the below fields, and I see the capacity of the deleted objects.

[root@lnx ~]# dfm report 41 help
Warning: Use of this command for listing and viewing reports has been deprecated by 'dfm report list' and 'dfm report view' commands respectively.
Deleted Aggr Report (Deleted Aggr)
Catalog Name: Aggregate
Display Tab: Aggregates

Catalog Field              Field Name    Format
-------------------------- ------------- -------------
Aggregate.DeletedWhen      Deleted Date  DD/MM/YY 24H
Aggregate.Name             AggrName
Aggregate.SpaceAvailable   AggreAvail    Auto-Scaled
Aggregate.TotalSpace       AggrTotal     Auto-Scaled
Aggregate.Used             AggrUsed      Auto-Scaled

Default sort order is Aggregate.DeletedWhen.
[root@lnx ~]#

Deleted Date (DD/MM/YY 24H)  AggrName                                      AggreAvail  AggrTotal  AggrUsed
---------------------------  --------------------------------------------  ----------  ---------  --------
29/03/11 16:20:58            aggr0-2011-03-29 16:20:58.000-1               41.8 GB     114 GB     71.7 GB
29/03/11 17:28:32            aggr1                                         99.3 GB     114 GB     14.2 GB
05/04/11 08:36:20            AAutoSM_aggr0_edit-2011-04-05 14:54:23.000-1  0 bytes     0 bytes    0 bytes
05/04/11 16:28:50            AAutoSM_aggr0_edit                            0 bytes     0 bytes    0 bytes

Regards adai
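PS: A report like this can also be created from the CLI (a sketch; the report name is an example, and the -R/-f syntax should be verified against dfm report help in your version):

dfm report create -R Aggregate -f Aggregate.DeletedWhen,Aggregate.Name,Aggregate.SpaceAvailable,Aggregate.TotalSpace,Aggregate.Used deleted-aggr
dfm report view deleted-aggr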
Hi Erik, Today the product doesn't allow creation of volumes for NAS with exports. It always creates qtrees, with or without exports. One way I can think of to achieve this is by using a SAN policy (volume creation) and adding the exports or shares for the volume using post-provisioning scripts. But the volume properties will be optimized for a SAN container. Otherwise, set the max qtrees per volume to 1 and create a qtree as large as the required volume size. Later, in the post-provisioning script, delete the qtree, its quota, and its share/export, and then add the export or share for the volume. I know it's a roundabout way, but this is what I can think of to achieve this. Just curious, why do you want NAS volume provisioning instead of qtrees? Regards adai
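PS: A minimal sketch of the post-provisioning idea, assuming 7-Mode CLI access over ssh from the DFM server (the controller, volume, and share names are all examples):

#!/bin/sh
# Post-provisioning script (sketch): promote the export from qtree level to volume level
# (deleting the qtree, its quota, and its share/export would come first, as described above)
FILER=filer1      # example controller
VOL=nasvol1       # example provisioned volume
# Add an NFS export for the whole volume (7-Mode exportfs)
ssh "$FILER" exportfs -p rw /vol/$VOL
# Or add a CIFS share for the volume instead
ssh "$FILER" cifs shares -add ${VOL}_share /vol/$VOL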
As I suspected, it's due to the overcommitment. The error:

Aggregate 'UNPIOX80PN:aggr019'(3423):
Used space: 5.82 TB
Total capacity: 12.2 TB
Committed size: 12.0 TB

The committed size on the aggr is beyond the Aggregate Nearly Overcommitted Threshold (%): 95. Committed/total is 12.0 TB / 12.2 TB, roughly 98%, which exceeds the 95% threshold. Either increase the overcommitment threshold or add disks to the aggr. Regards adai
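PS: Either change can be made from the dfm CLI. A sketch (the value is an example; verify the option name with dfm host set -q):

# Raise the nearly-overcommitted threshold for this host only
dfm host set UNPIOX80PN aggrNearlyOvercommittedThreshold=99
# Or change the default globally
dfm options set aggrNearlyOvercommittedThreshold=99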
Did you try using the deleted fields in the custom reports? Like the ones below.

[root@lnx ~]# dfm report catalog list -R aggregate | grep -i delete
DeletedWhen          Aggregate Deleted When         DD MMM 24H
DeletedBy            Aggregate Deleted By
SnapshotAutoDelete   Aggregate Snapshot Autodelete
[root@lnx ~]# dfm report catalog list -R volume | grep -i delete
DeletedWhen          Volume Time of Deletion        DD MMM 24H
DeletedBy            Volume Deleted By
[root@lnx ~]#

Or, when you schedule a report, you can include the field called "show deleted objects". Attached is a screenshot of the same. Regards adai
The Migration Capable flag is for the vFiler. Are you trying to migrate the entire vFiler, or just the secondary volumes? If secondary volumes, use Hosts -> Aggregates -> Secondary Space Management to move the backup or mirror volumes. If they are primary and belong to vFilers, use online/offline vFiler migration. Regards adai
Also, vFiler DataMotion of a secondary or DR vFiler unit is not supported. Can you get the output of the following: dfm aggr get 3243? Regards adai