Use the Secondary Space Management feature of DFM 4.0. It will help you move any kind of relationship, but the granularity is at the volume level: only the entire volume can be moved, not individual qtrees. Regards adai
How to create users/groups/roles on multiple hosts? Use Host Users under Control Center -> Management in the OM web UI to create, modify and push groups, users & roles to multiple hosts. Push and monitor /etc/rc and /etc/hosts files? We thought that those (or some of those) files specify parameters which are not common across hosts, so Configuration Management doesn't do that. If you still need to do it today, use dfm run cmd against a group; a rough example follows below. Regards adai
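A rough sketch of that (the group name is a placeholder, and the argument order may differ slightly by release; check dfm help run for the exact syntax):
dfm run cmd MyFilerGroup "rdfile /etc/hosts"
dfm run cmd MyFilerGroup "wrfile -a /etc/hosts 10.0.0.50 newhost"
The command is sent to every storage system in the group; rdfile and wrfile are the ONTAP commands that read and append to the flat files.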
As far as I know, they are not, as only volume options are copied as part of Volume SnapMirror. CIFS shares and NFS exports are stored in the registry and in flat files: the /etc/exports file for NFS; I don't know the name of the registry entry or file for CIFS. So they have to be re-created on the destination (example below). Regards adai
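For NFS, re-creating an export on the destination after a break or failover would look something like this (the clients and path are placeholders; -p persists the rule to /etc/exports):
exportfs -p rw=client1,root=client1 /vol/dst_vol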
Yes, all the info is stored as per the time zone of the DFM server, not the time zone of the filer from which it is collected. Regards adai
Hi, Can you retry the whole procedure of creating a user quota full event and capture the following: the output of dfm event detail <event id> for the user quota full event. Also send us the following log files from <installdir>/NTAPdfm/log: dfmeventd*.log and alert*.log, along with a dfm host diag for the filer. The rough command sequence is below. Regards adai
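Roughly (the event ID and filer name are placeholders; on Windows use findstr instead of grep):
dfm event list | grep -i quota
dfm event detail <event-id>
dfm host diag <filer-name-or-ip>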
Reporting: the following are the reports of interest for a V-Series.
[root@lnx ~]# dfm report | grep -i array
array-luns                         List of array LUNs
spare-array-luns                   List of spare array LUNs
array-luns-aggr                    Summary of array LUNs attached to aggregates
array-luns-aggr-storage-io-load    Summary of storage I/O load on the aggregate serving the array LUNs
array-lun-config                   Summary of array LUN back-end configuration
storage-arrays                     Summary of storage arrays attached to storage systems
storage-array-ports                Summary of storage array ports connected to storage systems
storage-array-config               Summary of storage array back-end configuration
array-luns-performance-summary     Performance summary of array LUNs
[root@lnx ~]#
To view any of them, see the example below. Regards adai
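For example, to run one of them, pass the report name to dfm report view:
[root@lnx ~]# dfm report view array-luns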
Hi, Can you check the following options?
See if userEnableAlerts is yes; it is enabled by default.
[root@lnx ~]# dfm options list userEnableAlerts
Option           Value
---------------- ------------------------------
userEnableAlerts yes
DFM sends the username for which the quota was exceeded; it is the mail server that should resolve the domain name. If the mail server's domain name is different, set the option below so that DFM sends the username with the domain appended. By default it is blank.
[root@lnx ~]# dfm options list userEmailDefaultDomain
Option                 Value
---------------------- ------------------------------
userEmailDefaultDomain
Similarly, see whether the value set as the SMTP server resolves. Ping it, and check whether your SMTP server is aliased to "mail" by running ping mail on the command line; otherwise change the option to an appropriate value.
[root@lnx ~]# dfm options list SMTPServerName
Option          Value
--------------- ------------------------------
SMTPServerName  mail
[root@lnx ~]#
If the events are generated, then there is no reason other than those listed above why the alert is not being sent to the user. Any of these options can be changed with dfm options set (example below). Regards adai
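If any of them need changing, the values below are only placeholders for your environment:
dfm options set userEnableAlerts=yes
dfm options set userEmailDefaultDomain=yourdomain.com
dfm options set SMTPServerName=smtp.yourdomain.com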
Either add the existing user to the admin group or give it more privileges, or change the run-as user to one with sufficient privileges. Regards adai
Shiva, I have to contradict you: the inode usage information is gathered by the disk free monitor, which runs every 30 minutes by default, and not by the fs monitor. As you said, it is better to have a considerable difference between the nearly-full and full thresholds so that the alert comes at the appropriate time, especially in the case of inodes, as the user might have consumed only 1 more inode yet have used 10% or more of the volume size. The current intervals and thresholds can be checked as shown below. Regards adai
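A quick way to see what your server is using (exact option names vary a little between DFM releases; on Windows use findstr instead of grep):
dfm options list | grep -i interval
dfm options list | grep -i threshold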
Below is the complete description of the pre- and post-backup script. The same is available in the man pages, which can be accessed from Control Center -> Help -> General Help -> Man pages (at the bottom) for Windows installations. For Linux the same can be accessed as follows: man -M /opt/NTAPdfm/man dfpm, and search for the string PRE- AND POST-PROCESSING SCRIPTS FOR DATA TRANSFERS.
PRE- AND POST-PROCESSING SCRIPTS FOR DATA TRANSFERS
Users can install a pre- and post-processing script for data transfers. This script will be called before and after the data transfers. The script should return 0 on success and 1-255 on error. An exit code of greater than 255 is not supported by the DataFabric Manager server. If a script returns a non-zero exit code, no data transfer for the dataset will occur. A log entry will be added to the job indicating that the script invocation has failed. If for any reason the data transfer fails after the first script invocation, the script is called as soon as the failure (or abort) occurs.
The script can be installed using the "dfpm policy node set policy-name-or-id node-name backupScriptPath=backup-script-path" command. The user as which the script should be invoked can be set using the "dfpm policy node set policy-name-or-id node-name backupScriptRunAs=backup-script-run-as" command.
Parameters and status codes are passed to the script as environment variables. The names for these strings begin with DP_; e.g. DP_DATASET_ID. The following perl script fragment prints all of the environment variables passed to the script.
foreach $key (sort(keys %ENV)) {
    if ($key =~ /^DP_/) {
        print "$key: $ENV{$key}\n";
    }
}
For backup connections (SnapVault and QSM), an installed pre- and post-processing script will be called up to four times during the course of the backup. It is called once right before taking snapshots on the primaries. It is called again before starting data transfer for backup relationships. It is called a third time after data transfer is completed. It is called a fourth time after the job has registered a backup in the Protection Manager. The script can tell whether it is the first or the second etc. invocation by examining the "DP_BACKUP_STATUS" environment variable. (See the Environment Variables section for details.)
The time when the script should quiesce the application using the primary data depends on whether the snapshot capability is available on the primary host. On storage systems where snapshots are available, the script should quiesce the application when it is invoked before taking primary snapshots. It can resume normal activities of the application when it is invoked for the second time, i.e. before transferring data. On Open Systems agents where snapshot capability is not available, the script should quiesce the application when it is invoked for the second time, i.e. before transferring data. It should resume normal operation of the application when it is invoked for the third time, i.e. after transferring data.
For mirror connections (VSM), an installed pre- and post-processing script will be called twice, once before the mirror transfer and once after the mirror transfer. On the first invocation, the script should quiesce the application, and when it is invoked for the second time, it can resume the normal activities of the application.
For local snapshot creation, an installed pre- and post-processing script will be called twice, once before creating the snapshot and once after creating the snapshot.
On the first invocation, the script should quiesce the application, and when it is invoked for the second time, it can resume the normal activities of the application.
The following environment variables are passed into the scripts:
DP_JOB_ID - The data protection job ID.
DP_DATASET_ID - ID of the dataset that is being backed up.
DP_DATASET_NAME - Name of the dataset that is being backed up.
DP_POLICY_ID - ID of the policy associated with the dataset that is being backed up.
DP_POLICY_NAME - Name of the policy associated with the dataset that is being backed up.
DP_CONNECTION_ID - ID of the policy connection associated with the backup job. This environment variable is not defined for local snapshot creation.
DP_FROM_NODE_NAME - Name of the policy node from which data is to be transferred. This environment variable is not defined for local snapshot creation.
DP_TO_NODE_NAME - Name of the policy node to which data is to be transferred. This environment variable is not defined for local snapshot creation.
DP_BACKUP_STATUS - State of the data transfer. Valid values for a backup connection are DP_BEFORE_PRIMARY_SNAPSHOTS, DP_BEFORE_TRANSFERS, DP_AFTER_TRANSFERS and DP_AFTER_BACKUP_REGISTRATION. Valid values for a mirror connection are DP_BEFORE_MIRROR_TRANSFERS and DP_AFTER_MIRROR_TRANSFERS. Valid values for local snapshot creation are DP_BEFORE_SNAPSHOTS and DP_AFTER_SNAPSHOTS.
DP_BACKUP_RESULT - Result of the backup. Valid values are DP_SUCCEEDED, DP_FAILED and DP_ABORTED. The result is undefined when the script is invoked before data transfers.
DP_RETENTION_TYPE - Type of retention used for the backup. It is also defined for mirror connections but is only intended for pre/post backup scripts. Valid values are DP_HOURLY, DP_DAILY, DP_WEEKLY, DP_MONTHLY and DP_UNLIMITED.
DP_SERIAL_NUMBER - Serial number of the DataFabric Manager installation.
A rough example of such a script is below. Regards adai
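A minimal sketch of such a script for the snapshot-capable (storage system) primary case, assuming only the DP_* variables documented above; quiesce_app and resume_app are hypothetical placeholders for your application-specific commands:
#!/bin/sh
# Pre/post-processing script sketch: quiesce before primary snapshots, resume before transfers.
case "$DP_BACKUP_STATUS" in
    DP_BEFORE_PRIMARY_SNAPSHOTS)
        quiesce_app        # quiesce the application before the primary snapshots are taken
        ;;
    DP_BEFORE_TRANSFERS)
        resume_app         # resume normal activity once the primary snapshots exist
        ;;
    *)
        :                  # nothing to do for the other invocations
        ;;
esac
exit 0                     # returning 1-255 would make the server skip the data transfer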
Report schedules work only for future dates, but from the history tables you can go back in time. Those data are not point-in-time, though, as they are consolidated, as explained in the FAQ below. https://now.netapp.com/NOW/knowledge/docs/DFM_win/rel40/html/faq/index.shtml#_6.7 Regards adai
Starting with DFM 4.0 we stopped re-baselining from the primary when the destination/secondary volume is out of space or out of inodes. The only way forward then is to migrate the destination volume using Secondary Space Management, which uses VSM to copy the data from the old volume to a new volume. Since ONTAP does not support VSM between 32-bit and 64-bit aggregates, SSM throws an error that the destination aggregate is not suitable for migration. But SV, QSM and NDMPcopy do support copying data between 32-bit and 64-bit aggregates, so a re-baseline from the primary is the way to go. The procedure is:
1. Remove the 32-bit RP (resource pool) from the dataset node.
2. Set the following option: dfm options set dpReaperCleanupMode=Orphans if they are not imported relationships, or dfm options set dpReaperCleanupMode=Automatic if they are imported relationships.
3. Remove the secondary volume from the dataset.
4. Add the RP containing the 64-bit aggregate to the dataset secondary node.
This will do a re-baseline from the primary to a new volume created in the 64-bit aggregate, using SV or QSM as per the protection policy. The reaper process will clean up the relationship within 2 to 3 hours, so new updates don't go to the old volume, but the backup versions registered with the dataset remain available for restore. A further point: if you need a longer retention you might be limited by the 255-snapshot limit, and using SSM will not help there either, as it copies the entire volume, so the same procedure is required.
Another way is, instead of step 3, to do the following: remove the primary from the old dataset, create a new dataset, add the primary to it, and add the 64-bit RP to the destination node of the new dataset. This achieves the same result, but you will have to retain the old dataset so that the backup versions expire on the old secondary volume. (This helps because once all backup versions expire you know which secondary volume to delete.) If you go with the first approach, you will have to make a note of all the destination volume names. Another downside of the second approach is that conformance will unnecessarily run on the first dataset (we increase the number of datasets and impact performance when there are many datasets). A couple of CLI sanity checks are sketched below. Regards adai
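Two quick checks from the CLI (names are placeholders):
dfm options list dpReaperCleanupMode
dfpm dataset list
The first confirms the reaper mode you just set; the second lets you note the dataset name/ID before removing the secondary volume and watch it re-conform afterwards.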
There are events which will let you know when the job completes:
dataset-backup:aborted      Warning    dataset-backup
dataset-backup:completed    Normal     dataset-backup
dataset-backup:failed       Error      dataset-backup
Regards adai
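To confirm the exact event names and severities on your release (assuming your version has the eventtype subcommand; use findstr instead of grep on Windows):
dfm eventtype list | grep -i dataset-backup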
This is the link in the web UI to do the same (change the values inside the angle brackets as per your setup): http://<dfmserver-name-or-IP>:8080/dfm/edit/host?object=<filerid>&group=0 The navigation is: Control Center -> Storage Controller Details page -> Edit Settings. Attached is a screen shot of the same. Regards adai
Hi Erikn, I wrote a doc on how to integrate application datasets with PM using scripts. I have been digging through my email and couldn't find it yet; the closest I could get is this.
The first step is to create an app dataset by calling the following ZAPI:
dataset-create
{
    "dataset-name"                    => "",
    "dataset-owner"                   => "Adaikkappan",
    "dataset-description"             => "My first app dataset",
    "dataset-contact"                 => 'adaikkap@netapp.com',
    "is-application-data"             => "true",
    "requires-non-disruptive-restore" => "true",
    "application-info" => {
        "application-name"        => "SnapManager for Oracle",
        "application-version"     => "2.1",
        "application-server-name" => "foobar.lab.netapp.com",
        "is-application-responsible-for-primary-backup" => "true",
    },
};
The next step is to get the UTC timestamp of the snapshots taken by SnapCreator. Then create a backup version by calling the following ZAPI:
dp-backup-version-create
{ # begin backup no 1
    "dataset-name-or-id" => "test",    # dfm app dataset name or object id
    "backup-description" => "this is for testing the SV protocol restore",
    "is-for-propagation" => "true",    # true/false -- making this true makes the backup version available at the downstream node when the remote backup job runs; otherwise it is available only at the primary
    "retention-type"     => "hourly",  # hourly, weekly, daily, monthly, unlimited -- since the protection policy bases backup retention count and duration on the setting in the policy, you need to specify the type of retention
    "version-timestamp"  => time,      # be sure to add +1 to all subsequent backup versions; otherwise do not edit
    "version-members" => {
        "version-member-info" => [
            { # snapshot number 1
                "volume-id"          => "140",         # dfm object id for the volume
                "snapshot-name"      => "first",       # name of this snapshot
                "snapshot-unique-id" => "1202211537",  # use the get_ss_access_time.pl script to obtain this
                "snapshot-contents" => {
                    "snapshot-member-info" => [ # list of qtrees associated with this snapshot
                        { "primary-id" => "143" },  # dfm object id for qtree no 1
                        { "primary-id" => "4355" }, # dfm object id for qtree no 2
                        { "primary-id" => "4356" }  # dfm object id for qtree no 3
                    ]
                }
            },
        ]
    }
}, # end backup no 1
The above ZAPI needs to be filled in for a single snapshot. If you wish to add multiple snapshots to the same backup version, repeat only this block with the other snapshot:
    "version-members" => {
        "version-member-info" => [
            { # snapshot number 1
                "volume-id"          => "140",         # dfm object id for the volume
                "snapshot-name"      => "first",       # name of this snapshot
                "snapshot-unique-id" => "1202211537",  # use the get_ss_access_time.pl script to obtain this
                "snapshot-contents" => {
                    "snapshot-member-info" => [ # list of qtrees associated with this snapshot
                        { "primary-id" => "143" },  # dfm object id for qtree no 1
                        { "primary-id" => "4355" }, # dfm object id for qtree no 2
                        { "primary-id" => "4356" }  # dfm object id for qtree no 3
                    ]
                }
            },
        ]
    }
Once the call succeeds, you can verify the backup version from the CLI (example below). Regards adai
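A quick way to confirm the backup version got registered against the dataset (assuming the standard dfpm CLI; the dataset name is a placeholder):
dfpm backup list <app-dataset-name-or-id>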