Are you trying to migrate the vFiler using DataMotion, or migrate volumes using the secondary space management feature of DFM 4.0? From the error it looks like DataMotion for vFilers. Can you get the version of DFM you are running and the OS on which it is running, along with the output of dfm options list? Regards, adai
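For reference, here is one way to gather that information from the DFM server (a quick illustrative sketch; the Linux prompt is just an example, the same commands work on Windows):
[root@lnx ~]# dfm version
[root@lnx ~]# dfm about
[root@lnx ~]# dfm options list
dfm about reports the version and the OS details of the server in one shot.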
You have two options. Run dfm run cmd <filer id/name> snap list <volume name> from OM to get the exact output as you get from the CLI on the filer. Or run the CLI dfpm dataset snapshot list <volume name/id>; it is not necessary that the volume be part of any dataset. The output generated by that CLI looks like this:

[root@lnx ~]# dfpm dataset snapshot list 117
Id       Name      Unique Id  Volume       Timestamp            Versioned Dependencies % of Total Blocks
-------- --------- ---------- ------------ -------------------- --------- ------------ -----------------
11019504 hourly.5  1301567446 f2020-:/vol0 31 Mar 2011 16:00:46 No        None         1% (0%)
11019503 hourly.4  1301581831 f2020:/vol0  31 Mar 2011 20:00:31 No        None         1% (0%)
11100778 nightly.1 1301596216 f2020-:/vol0 01 Apr 2011 00:00:16 No        None         1% (0%)
11019501 hourly.3  1301625001 f2020:/vol0  01 Apr 2011 08:00:01 No        None         0% (0%)
11019500 hourly.2  1301639446 f2020:/vol0  01 Apr 2011 12:00:46 No        None         0% (0%)
11019499 hourly.1  1301653846 f2020:/vol0  01 Apr 2011 16:00:46 No        None         0% (0%)
11019498 hourly.0  1301668231 f2020:/vol0  01 Apr 2011 20:00:31 No        None         0% (0%)
11100773 nightly.0 1301682617 f2020:/vol0  02 Apr 2011 00:00:17 No        None         0% (0%)
[root@lnx ~]#

Regards, adai
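As a concrete (purely illustrative) example of the first option, using the filer and volume names from the output above:
[root@lnx ~]# dfm run cmd f2020 snap list vol0
This simply relays the ONTAP snap list command to the filer and returns its output unchanged.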
Hi Earls, if you are interested I can share a couple more scripts that we generated using the script plug-in and the custom comment field. Regards, adai
This is because on your old DFM server you had set it to a specific name. By default the option is empty and picks up the hostname; if you set it to a specific value, then that value is used. Below is an example.

[root@lnx ~]# dfm options list | grep -i localHostName
localHostName
[root@lnx ~]#
[root@lnx ~]# dfm options list localHostName
Option          Value
--------------- ------------------------------
localHostName
[root@lnx ~]#

This wouldn't have been empty in your case. Hope it explains why. Regards, adai
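If you want the new server to keep the old behaviour, the option can be set explicitly; a quick sketch (the hostname below is just a placeholder):
[root@lnx ~]# dfm options set localHostName=dfmserver.example.com
[root@lnx ~]# dfm options list localHostName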
Do upgrade to 4.0.1D2 or later if you wish to stay on the 4.0.1 codeline, or to 4.0D23 or later if you wish to stay on the 4.0 codeline. My personal suggestion, though, is to move to the 4.0.1 codeline. Regards, adai
Provisioning Manager would do that check for you and disallow it if it is not applicable. AFAIK FAS to FAS and V-Series to V-Series are supported; I don't see a reason why FAS to V-Series shouldn't be. Regards, adai
Hi Earls, DFM today does not have a report that gives the efficiency gained from the usage of clones. Below is a report that generates the same; read the readme.doc first to understand what it is and how to install it. Regards, adai
Is the size of the disk 2 TB or more? If so, you may be a victim of bug 429510. Refer to the NOW public report below. http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=429510 Regards, adai
Did you look through the DB schema for the views that are exposed? They are documented in the man pages, which can be accessed from Help -> General Help in the WebUI. Regards, adai
Most of the CLIs in Ops-Mgr support this, namely:

dfm   -- DataFabric Manager
dfpm  -- DataFabric Protection/Provisioning Manager
dfbm  -- DataFabric Backup Manager
dfdrm -- DataFabric Disaster Recovery Manager

Use the CLI below to get the output in table form:

dfbm report -F text primary-hosts-open-system

Most of these support the following output formats.

OUTPUT FORMATS
The list and report commands generate output in one of several formats based on the -F option:

text        For display on a terminal screen in a tabulated format. For example,

            Primary Directory        Secondary Volume     Lag      Status
            ------------------------ -------------------- -------- ------
            filer1:/vol1/qt3         bigdog:/vol1         1.2 h    Idle
            vfiler1:/vol1/qt4        bigdog:/vol1         10 h     Idle

            If -q is specified, the column headings are omitted.

paragraph   For display on a terminal screen in a paragraph style. For example,

            Secondary Volume: bigdog:/vol1
              Schedule: 4 times a day
              Retention: Legal data
              Primary Directory: filer1:/vol1/qt3
              Primary Directory: filer2:/vol1/qt4
              Primary Directory: filer4:/vol1/qt2
              Primary Directory: vfiler1:/vol1/qt6

            Secondary Volume: bigdog:/backups
              Schedule: 6 times a day
              Retention: Sensitive data
              Primary Directory: filer2:/vol2/qt3

html        For display in a web page.
perl        For processing with a perl script.
xml         For processing with XML-capable software.
xls         For viewing in a spreadsheet, such as Microsoft Excel.

Regards, adai
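As a further illustrative example, combining the -F option with two of the dfbm reports listed later in this thread, and redirecting the machine-readable format to a file:
dfbm report -F xml backups-by-primary > backups.xml
dfbm report -F paragraph backups-by-secondary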
As long as the CLI output of dfm volume list -a and dfm qtree list -a shows such volume or qtree names, even though they no longer exist on the filer, Provisioning Manager will not reap the names and reuse the missing suffixes; instead it always increments the suffix number. As Lovik said, you can make it reuse the missing ones only if you delete them from the DFM db. Regards, adai
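A minimal sketch of that cleanup, assuming your DFM version provides dfm volume delete and dfm qtree delete (the object names below are placeholders; verify the exact subcommands with dfm help volume and dfm help qtree before running anything):
dfm volume list -a
dfm volume delete filer1:/old_vol_001
dfm qtree delete filer1:/old_vol_001/old_qtree_001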
Use the dfbm CLI as below, and run the report as shown.

[root@lnx ~]# dfbm report
Available reports are
    backups-by-primary                backup relationships keyed by primary directory
    backups-by-secondary              backup relationships keyed by secondary volume
    events                            all current backup events
    events-error                      current backup error or worse events
    events-unack                      unacknowledged events
    events-warning                    current backup warning or worse events
    jobs                              all backup jobs
    jobs-1d                           backup jobs started today
    jobs-30d                          backup jobs started this month
    jobs-7d                           backup jobs started this week
    jobs-aborted                      aborted backup jobs
    jobs-aborting                     backup jobs being aborted
    jobs-completed                    completed backup jobs
    jobs-failed                       failed backup jobs
    jobs-running                      Displays a list of all in progress backup jobs.
    primary-dirs                      all backup primary directories
    primary-dirs-discovered           all backup primary directories discovered
    primary-dirs-qtrees-discovered    Displays a list of all unprotected qtrees.
    primary-hosts                     all primary storage systems
    primary-hosts-filers              Primary storage systems running Data ONTAP
    primary-hosts-open-system         OSSV primary storage systems
    schedules                         all backup schedules
    secondary-hosts                   all secondary storage systems
    secondary-volumes                 all backup secondary volumes
    summary-completed                 all completed backups
    summary-failed                    all failed backup jobs
    summary-inprogress                all in progress backup jobs
    summary-no-status                 backups with no status
    unauthenticated-systems           storage systems with no NDMP credentials
    unavailable-agents                storage systems with NDMP unavailable
[root@lnx ~]# dfbm report primary-hosts-open-system
There are no backup primary storage systems.
[root@lnx ~]#

Regards, adai
Is the event getting generated? Can you get the events list using the CLI dfm report view events-perf? Are you seeing any errors in alert.log, in the log folder under the DFM install dir? Regards, adai
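A quick way to check from the CLI (the log path assumes a default Linux install, so adjust for Windows, and dfm alarm test is an assumption about your DFM version, so verify it with dfm help alarm first):
dfm report view events-perf
tail -50 /opt/NTAPdfm/log/alert.log
dfm alarm test <alarm-id>
If the events show up in the report but no alert goes out, testing the alarm itself usually narrows it down.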
Did you try all these provisioning jobs within an hour? DFM takes at least one hour to mark the qtrees as deleted, even though they may no longer be on the filer. Regards, adai
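If you don't want to wait, you can force a rediscovery of the host; a short sketch (the filer name is a placeholder, and dfm host discover is the same command mentioned later in this thread):
dfm host discover filer1
Alternatively, do a refresh from the WebUI.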
Did you take a look at the FAQ below on which ports need to be opened? http://now.netapp.com/NOW/knowledge/docs/DFM_win/rel40/html/faq/index.shtml#_3.14 Regards, adai
Do you have a resource pool, with aggregates in it, attached to the secondary node of the dataset? Make sure you add the aggregate to which you are planning to move. Also, use the CLI dfpm migrate volume to get a more detailed message if you hit any issues. Regards, adai
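A minimal sketch of checking and fixing the resource pool from the CLI, assuming the dfpm respool subcommands are available in your version (the pool, filer and aggregate names are placeholders; confirm the exact syntax with dfpm help respool):
dfpm respool list
dfpm respool add SecondaryPool filer2:aggr1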
Let me try to answer all of the below, for the convenience of all. (To view or change any of these from the CLI, see the example after this post.)

autoClientStatEnabled
Enables automatic collection of per-client statistics when a set of thresholds is breached. This option was introduced in DFM 4.0 as part of Performance Advisor's rogue client detection for the CIFS and NFS protocols.

clientStatMinTotalOpsRate
Specifies the minimum number of total operations performed on a storage system per second for per-client statistics to be automatically collected from it. This option was also introduced in DFM 4.0 as part of Performance Advisor's rogue client detection for the CIFS and NFS protocols.

clusterMonInterval
Time interval defining how often the monitor tries to discover new clusters (ONTAP 8 C-Mode).

currentEventsCacheSize
The cache size for current events in DFM.

databaseBackupDbengWaitTime
AFAIK this is the time for which DFM waits for Sybase to start before it times out starting the DFM SQL service. By default this value is 600 seconds.

growthRateSensitivity
Indicates the sensitivity of the volume to growth rate changes. The valid range is (0-5]. A value of 5 means not very sensitive to growth rate changes: only very large deviations generate the 'abnormal' event. Low values mean very sensitive to growth rate changes: small deviations also generate the 'abnormal' event. The default value is 2. This option is used in generating the volume growth rate abnormal event.

hostEnclosureDiscoveryEvents
Suppresses two events from being generated which are, most of the time, false events:
    enclosures-dissapeared    Warning    env.encl.dissapeared
    enclosures-found          Normal     env.encl.found
It can be set to 3 values:
    Enabled: Enclosure discovery events are enabled for all hosts.
    DisabledForCluster: Enclosure discovery events are disabled for clustered filers. They are enabled for all other hosts. This is the default in 3.2.
    Disabled: Enclosure discovery events are disabled for all hosts. As of today, in 4.0.1, this is the default.

processHostPrimaryAddress
Action to take when the primary IP address of the storage system in DataFabric Manager's database is different from the IP address given by the DNS. The options are off, warn, and error. If the option is off, do not check for an IP address mismatch. If the option is warn, check for an IP address mismatch and give a warning if the addresses do not match. If the option is error, give an error if the IP addresses do not match. The default value of the option is warn. This is used in Protection Manager during backup and restore.

processOSSVPrimaryAddress
Action to take when the primary IP address of an OSSV host in DataFabric Manager's database is different from the IP address given by the DNS. The options are off, warn, error, and update. If the option is off, do not check for an IP address mismatch. If the option is warn, check for an IP address mismatch and give a warning if the addresses do not match. If the option is error, give an error if the IP addresses do not match. If the option is update, update the hostPrimaryAddress if it is different from the IP address that we get from the DNS. The default value of the option is warn.

qtreeAutoDiscovery
This option applies to the discovery of qtrees on systems running Data ONTAP 6.3 or earlier only. If this is enabled the qtrees are auto-discovered; otherwise they need to be discovered forcefully using the dfm host discover command or a refresh from the WebUI.

shareMonInterval
Time interval defining how often we collect CIFS share and NFS export information from the managed hosts.

snapvaultMonInterval
Time interval defining how often we collect SnapVault status information from the managed hosts.

vserverMonInterval
Time interval defining how often the monitor tries to discover and monitor new and existing Vservers (ONTAP 8 C-Mode).

vFilerRootVolumeSizeMb
Determines the default root volume size of a vFiler created by Provisioning Manager.

Regards, adai
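A quick illustrative example of viewing or changing any of the options above from the CLI (dfm options list and dfm options set are the standard DFM commands for this; the values shown are just the documented choices):
[root@lnx ~]# dfm options list hostEnclosureDiscoveryEvents
[root@lnx ~]# dfm options set hostEnclosureDiscoveryEvents=DisabledForCluster
[root@lnx ~]# dfm options set growthRateSensitivity=2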
As of today PM only supports async SnapMirror. To know when PM will support synchronous SnapMirror, talk to your SE, who can talk on your behalf to the Product Manager. There is a component called Disaster Recovery Manager in the DFM WebUI which supports creating all kinds of SnapMirror relationships (Sync/Semi-Sync/Async). Regards, adai
Hi Morse, the detailed schema for both historic and non-historic data is documented in the general help. Go to the Operations Manager WebUI: Control Center -> Help -> General Help -> Contents -> Database Schema. Under this you can find the following:

Database schema for DataFabric Manager nonhistoric data
Database schema for DataFabric Manager historic data
Relationship among fields of various database views

So "Relationship among fields of various database views" gives the relationship between the various views. For convenience here is the URL; replace it with your DFM server IP or hostname:

http://<dfmserverIPorHostname>:8080/help/dfm.htm#%3E%3Epan=2

Regards, adai
Filer1: Failed to create SnapMirror relationship between Filer1:/dataset_006_backup_4 and Filer2:/dataset_006_backup_4. Reason: Unable to update the SnapMirror relationship between 'Filer1:/dataset_006_backup_4'(33610) and 'Filer2:/dataset_006_backup_4'(55299). Reason: Snapmirror error: transfer from source not possible; snapmirror may be misconfigured, the source volume may be busy or unavailable

These are error messages coming directly from the filers. I will have to take a closer look.

The other 2 volumes are still in a SnapMirror state and lagging about 20 hours... I'm not sure what happened to dataset_006_backup_4, so I started another migration of just this volume and manually updated the SnapMirror for the other 2 volumes. As it stands right now, the other 2 volumes show the Job Progress state 'Waiting for job <ID> to finish.' I have to assume this is waiting to complete the quiesce/break of the SnapMirror.

IIRC, at any point in time there can be only one job running for a given node of a dataset. The backup jobs will wait for the currently running job, depending upon the state the previous job is in. BTW, are the jobs that are waiting volume migration jobs or update jobs?

Regards, adai
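If it helps to see exactly what the waiting jobs are doing, their status can also be pulled from the Protection Manager CLI; a hedged sketch, assuming the dfpm job subcommands are available in your version (the job ID is a placeholder):
dfpm job list
dfpm job detail 12345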
As long as DFM is able to talk to it, it should be able to manage it. Set the hostPrimaryAddress to the IP address of e0P using the following CLI: dfm host set <filer id/name> hostPrimaryAddress=<IP address of e0P>. Regards, adai
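For example (the filer name and address here are purely illustrative):
dfm host set filer1 hostPrimaryAddress=192.168.100.50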