Data Backup and Recovery
Dear all,
I have Snap Creator version 3.4 and want to run Snap Creator for Oracle. After I finished configuring the profile, I ran a test:
./snapcreator --config TMAS_QMAS --profile Container --action quiesce
but the database then shows an "end backup" being run (and failing), as in the alert log below:
bash-3.00$ tail -f alert_TMAS.log
ALTER SYSTEM ARCHIVE LOG
Fri Jun 01 15:45:00 2012
Thread 1 cannot allocate new log, sequence 444
Private strand flush not complete
Current log# 2 seq# 443 mem# 0: /ou52/oradata/TMAS/redo02a.log
Current log# 2 seq# 443 mem# 1: /ou53/oradata/TMAS/redo02b.log
Thread 1 advanced to log sequence 444 (LGWR switch)
Current log# 3 seq# 444 mem# 0: /ou52/oradata/TMAS/redo03a.log
Current log# 3 seq# 444 mem# 1: /ou53/oradata/TMAS/redo03b.log
Archived Log entry 439 added for thread 1 sequence 443 ID 0x188e2f25 dest 1:
Fri Jun 01 16:08:29 2012
alter database end backup
WARNING: datafile #1 was not in online backup mode
WARNING: datafile #2 was not in online backup mode
WARNING: datafile #3 was not in online backup mode
WARNING: datafile #4 was not in online backup mode
WARNING: datafile #5 was not in online backup mode
WARNING: datafile #6 was not in online backup mode
WARNING: datafile #7 was not in online backup mode
WARNING: datafile #8 was not in online backup mode
ORA-1142 signalled during: alter database end backup...
ALTER SYSTEM ARCHIVE LOG
Fri Jun 01 16:08:29 2012
Thread 1 cannot allocate new log, sequence 445
Private strand flush not complete
Current log# 3 seq# 444 mem# 0: /ou52/oradata/TMAS/redo03a.log
Current log# 3 seq# 444 mem# 1: /ou53/oradata/TMAS/redo03b.log
Thread 1 advanced to log sequence 445 (LGWR switch)
Current log# 1 seq# 445 mem# 0: /ou52/oradata/TMAS/redo01a.log
Current log# 1 seq# 445 mem# 1: /ou53/oradata/TMAS/redo01b.log
Archived Log entry 440 added for thread 1 sequence 444 ID 0x188e2f25 dest 1:
I do not know how to fix this problem; please help me.
Regards,
Pinyapatthara
First, you should always run with --verbose so you see what SC says, and ideally --debug if you are troubleshooting.
Next, SC requires Oracle to be in Archive Log mode, so I would check that. If you are getting an error from SC, please post it; otherwise this seems like an Oracle issue. If you run the quiesce operation in debug mode you will see the sqlplus commands SC wants to run, and you can run them by hand to troubleshoot further, for example:
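A rough sketch of doing that by hand (substitute your own Oracle OS user; the exact statements SC issues will show up in your debug output, these are just the typical quiesce/unquiesce checks):
./snapcreator --config TMAS_QMAS --profile Container --action quiesce --verbose --debug
su - <oracle OS user>
sqlplus / as sysdba
SQL> select name, log_mode from sys.v$database;
SQL> alter database begin backup;
SQL> alter database end backup;
SQL> alter system archive log current;
SQL> exit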
Regards,
Keith
Lastly, quiesce simply puts the database into backup mode, that is it. It is really only for testing. The action snap will take a backup; see the example below. Please read the Install / Admin Guide for more info regarding usage.
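A sketch of what a backup run would look like, assuming you have a policy named daily defined in NTAP_SNAPSHOT_RETENTIONS (adjust the policy name and options to your config):
./snapcreator --config TMAS_QMAS --profile Container --action snap --policy daily --verbose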
Keith
Hi Keith Tenzer,
This is the verbose log from the SC Agent, and the database is in Archive Log mode.
bash-3.00# /opt/NTAP/scAgent3.4.0/snapcreator --start-agent 9090 --verbose --debug
[Fri Jun 1 16:51:20 2012] INFO: Starting NetApp Snap Creator Framework Agent [single-threaded] in Debug Mode
[Fri Jun 1 16:51:20 2012] DEBUG: Listening on provided port: 9090
[Fri Jun 1 16:51:20 2012] INFO: NetApp Snap Creator Framework Agent [single-threaded], running with pid 5463, is listening on port 9090 of all configured network interfaces
[Fri Jun 1 16:51:42 2012] DEBUG: Reloading configuration from /opt/NTAP/scAgent3.4.0/config/agent.conf
[Fri Jun 1 16:51:42 2012] DEBUG: Reloading configuration finished with
exit code: [0]
stdout: []
stderr: []
[Fri Jun 1 16:51:42 2012] DEBUG: 10.1.5.41 is allowed to send requests
[Fri Jun 1 16:51:42 2012] INFO: Authorized request from [scServer@*]
[Fri Jun 1 16:51:42 2012] INFO: Authorized request from [scServer@*]
[Fri Jun 1 16:51:42 2012] INFO: Quiescing databases
[Fri Jun 1 16:51:42 2012] INFO: Quiescing database TMAS
[Fri Jun 1 16:51:42 2012] DEBUG: Verifying correct version of database TMAS
[Fri Jun 1 16:51:42 2012] DEBUG: Executing SQL sequence:
connect / as sysdba;
select * from v$version;
exit;
[Fri Jun 1 16:51:42 2012] DEBUG: Executing external sql script [/tmp/aFEfsA3wjb.sc] for database TMAS
[Fri Jun 1 16:51:42 2012] DEBUG: Command [/bin/su - oratmas -c "ORACLE_HOME=/ou52/app/oratmas/product/11.2.0/db_1;export ORACLE_HOME;ORACLE_SID=TMAS;export ORACLE_SID;/ou52/app/oratmas/product/11.2.0/db_1/bin/sqlplus /nolog @/tmp/aFEfsA3wjb.sc"] finished with
exit code: [0]
stdout: [Oracle Corporation SunOS 5.10 Generic Patch January 2005
TMAS 11.2.0 Environment Sourced...
SQL*Plus: Release 11.2.0.3.0 Production on Fri Jun 1 16:51:42 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected.
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Solaris: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options]
stderr: []
[Fri Jun 1 16:51:42 2012] DEBUG: Executing external sql script [/tmp/aFEfsA3wjb.sc] for database TMAS finished successfully
[Fri Jun 1 16:51:42 2012] DEBUG: Verifying correct of database TMAS finished successfully
[Fri Jun 1 16:51:42 2012] DEBUG: Database TMAS is running Oracle 11
[Fri Jun 1 16:51:42 2012] DEBUG: Verifying RAC status for database TMAS
[Fri Jun 1 16:51:42 2012] DEBUG: Executing SQL sequence:
connect / as sysdba;
show parameter CLUSTER_DATABASE;
exit;
[Fri Jun 1 16:51:42 2012] DEBUG: Executing external sql script [/tmp/snotFytlLl.sc] for database TMAS
[Fri Jun 1 16:51:42 2012] DEBUG: Command [/bin/su - oratmas -c "ORACLE_HOME=/ou52/app/oratmas/product/11.2.0/db_1;export ORACLE_HOME;ORACLE_SID=TMAS;export ORACLE_SID;/ou52/app/oratmas/product/11.2.0/db_1/bin/sqlplus /nolog @/tmp/snotFytlLl.sc"] finished with
exit code: [0]
stdout: [Oracle Corporation SunOS 5.10 Generic Patch January 2005
TMAS 11.2.0 Environment Sourced...
SQL*Plus: Release 11.2.0.3.0 Production on Fri Jun 1 16:51:42 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cluster_database boolean FALSE
cluster_database_instances integer 1
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options]
stderr: []
[Fri Jun 1 16:51:42 2012] DEBUG: Executing external sql script [/tmp/snotFytlLl.sc] for database TMAS finished successfully
[Fri Jun 1 16:51:42 2012] DEBUG: Database TMAS is not configured in RAC
[Fri Jun 1 16:51:42 2012] DEBUG: Verifying RAC status for database TMAS finished successfully
[Fri Jun 1 16:51:42 2012] DEBUG: Verifying archive log mode of database TMAS
[Fri Jun 1 16:51:42 2012] DEBUG: Executing SQL sequence:
connect / as sysdba;
select name, log_mode from sys.v$database;
exit;
[Fri Jun 1 16:51:42 2012] DEBUG: Executing external sql script [/tmp/ArAstg1Xyq.sc] for database TMAS
[Fri Jun 1 16:51:43 2012] DEBUG: Command [/bin/su - oratmas -c "ORACLE_HOME=/ou52/app/oratmas/product/11.2.0/db_1;export ORACLE_HOME;ORACLE_SID=TMAS;export ORACLE_SID;/ou52/app/oratmas/product/11.2.0/db_1/bin/sqlplus /nolog @/tmp/ArAstg1Xyq.sc"] finished with
exit code: [0]
stdout: [Oracle Corporation SunOS 5.10 Generic Patch January 2005
TMAS 11.2.0 Environment Sourced...
SQL*Plus: Release 11.2.0.3.0 Production on Fri Jun 1 16:51:42 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected.
NAME LOG_MODE
--------- ------------
TMAS ARCHIVELOG
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options]
stderr: []
[Fri Jun 1 16:51:43 2012] DEBUG: Executing external sql script [/tmp/ArAstg1Xyq.sc] for database TMAS finished successfully
[Fri Jun 1 16:51:43 2012] ERROR: [ora-00005] Database TMAS is not configured in Archive Log Mode
[Fri Jun 1 16:51:43 2012] DEBUG: Reloading configuration from /opt/NTAP/scAgent3.4.0/config/agent.conf
[Fri Jun 1 16:51:43 2012] DEBUG: Reloading configuration finished with
exit code: [0]
stdout: []
stderr: []
[Fri Jun 1 16:51:43 2012] DEBUG: 10.1.5.41 is allowed to send requests
[Fri Jun 1 16:51:43 2012] INFO: Authorized request from [scServer@*]
[Fri Jun 1 16:51:43 2012] INFO: Authorized request from [scServer@*]
[Fri Jun 1 16:51:43 2012] INFO: Unquiescing databases
[Fri Jun 1 16:51:43 2012] INFO: Unquiescing database TMAS
[Fri Jun 1 16:51:43 2012] DEBUG: Ending hot backup mode for database TMAS
[Fri Jun 1 16:51:43 2012] DEBUG: Executing SQL sequence:
connect / as sysdba;
alter database end backup;
alter system archive log current;
exit;
[Fri Jun 1 16:51:43 2012] DEBUG: Executing external sql script [/tmp/sUk_H84MBL.sc] for database TMAS
[Fri Jun 1 16:51:43 2012] DEBUG: Command [/bin/su - oratmas -c "ORACLE_HOME=/ou52/app/oratmas/product/11.2.0/db_1;export ORACLE_HOME;ORACLE_SID=TMAS;export ORACLE_SID;/ou52/app/oratmas/product/11.2.0/db_1/bin/sqlplus /nolog @/tmp/sUk_H84MBL.sc"] finished with
exit code: [0]
stdout: [Oracle Corporation SunOS 5.10 Generic Patch January 2005
TMAS 11.2.0 Environment Sourced...
SQL*Plus: Release 11.2.0.3.0 Production on Fri Jun 1 16:51:43 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected.
alter database end backup
*
ERROR at line 1:
ORA-01142: cannot end online backup - none of the files are in backup
System altered.
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options]
stderr: []
[Fri Jun 1 16:51:43 2012] ERROR: [ora-00020] Oracle SQL*Plus command [/bin/su - oratmas -c "ORACLE_HOME=/ou52/app/oratmas/product/11.2.0/db_1;export ORACLE_HOME;ORACLE_SID=TMAS;export ORACLE_SID;/ou52/app/oratmas/product/11.2.0/db_1/bin/sqlplus /nolog @/tmp/sUk_H84MBL.sc"] failed with return code [0] and message [Oracle Corporation SunOS 5.10 Generic Patch January 2005
TMAS 11.2.0 Environment Sourced...
SQL*Plus: Release 11.2.0.3.0 Production on Fri Jun 1 16:51:43 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected.
alter database end backup
*
ERROR at line 1:
ORA-01142: cannot end online backup - none of the files are in backup
System altered.
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
]
[Fri Jun 1 16:51:43 2012] DEBUG: Executing external sql script [/tmp/sUk_H84MBL.sc] for database TMAS finished successfully
[Fri Jun 1 16:51:43 2012] ERROR: [ora-00010] Ending hot backup mode for database TMAS failed
Best Regards,
Pinyapatthara
The reason it isn't working is that we are getting this error:
[Fri Jun 1 16:51:43 2012] ERROR: [ora-00005] Database TMAS is not configured in Archive Log Mode
Not sure why we are getting this error, since the DB is in archive log mode:
NAME LOG_MODE
--------- ------------
TMAS ARCHIVELOG
Very strange...
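One thing worth checking on the Oracle side in the meantime (these are standard Oracle views, nothing SC-specific) is whether any datafiles are actually left in backup mode when the end backup fails with ORA-01142, for example:
sqlplus / as sysdba
SQL> select file#, status from v$backup;
SQL> select name, log_mode from sys.v$database;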
Can you please post your oracle settings and agent settings?
Regards,
Keith
I checked that the database is in archive log mode:
bash-3.00# su - oratmas
Oracle Corporation SunOS 5.10 Generic Patch January 2005
TMAS 11.2.0 Environment Sourced...
ORATMAS@EP-MS-DEV >sqlplus '/ as sysdba'
SQL*Plus: Release 11.2.0.3.0 Production on Tue Jun 5 08:35:52 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 506
Next log sequence to archive 508
Current log sequence 508
SQL>
regards,
Pinyapatthara
Hi Keith,
Sorry, I am just getting started with Snap Creator and I do not know where the agent settings file is. Please tell me where to find these files.
regards,
Pinyapatthara
We have been trying to reproduce this error and have been unsuccessful. We cannot reproduce it using the same version of Oracle: we tested Oracle 11g (11.2.0.3.0) on RHEL 6.1 64-bit. There must be something in this specific environment causing the issue. Do you have any other databases you can try it on? Are you able to reproduce the error in other database environments?
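If you do test against another database, the Oracle plug-in settings in a copy of the config would be changed along these lines (the SID, OS user, and ORACLE_HOME path below are placeholders, not values from your system):
ORACLE_DATABASES=SID2:orauser2
ORACLE_HOME=/path/to/second/oracle_home
SQLPLUS_CMD=/path/to/second/oracle_home/bin/sqlplus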
The agent settings are located in the Snap Creator configuration file.
SC_AGENT=10.61.181.225:9090
SC_AGENT_TIMEOUT=60
SC_AGENT_UNQUIESCE_TIMEOUT=
SC_CLONE_TARGET=
SC_AGENT_WATCHDOG_ENABLE=Y
SC_AGENT_LOG_ENABLE=Y
Please post the entire Snap Creator configuration file; it is a text file, so you can just copy/paste it.
Regards,
Keith
Hi Keith,
I have attached the Snap Creator framework .conf. This server has two instances (TMAS, QMAS). I configured Snap Creator for the TMAS instance. Please help me.
########################################################
### NetApp Snap Creator Framework Configuration File ###
########################################################
##############################
### Referencing a Variable ###
##############################
#########################################################################################
# Reference previously assigned variables or variables #
# assigned within Snap Creator itself by prepending #
# the variable name with "%". #
# #
# For example) If you want to reference a VARIABLE: #
# BLAH=/usr/local/bin/foo %USER_DEFINED #
# Below is a list of all the built in referenceable variables: #
# %SNAP_TYPE - The Snapshot Schedule: daily, monthly, etc #
# %SNAP_TIME - An epoch time stamp associated with the snapshot #
# %ACTION - Could be -snap,-clone_vol, or -clone_lun, -arch, -list depending on which #
# Action you used #
# %MSG - Used to send the error message to other monitoring tools or email. Can only #
# be used with SENDTRAP function #
# %USER_DEFINED - Pass a user defined argument to the script, a good example would be #
# in order to integrate with a backup application like netbackup, #
# you may need to pass into configuration file the netbackup schedule #
# desired in the case where you have multiple schedules #
#########################################################################################
#############################
### Snap Creator Settings ###
#############################
#################################################################################################################
# CONFIG_TYPE (required) - (PLUGIN|STANDARD) We can have two types of configuration in SC 3.4.0 app or #
# standard. We can use multiple app configs to build up complex quiesce and unquiesce workflows. #
# SNAME - (required) Your snapshot naming convention, should be unique, snapshots on netapp will be deleted #
# according to this naming convention #
# SNAP_TIMESTAMP_ONLY - (Y|N) Setting to set snapshot naming convention "Y" will create _recent, "N" will #
# use a human readable time stamp #
# VOLUMES - (required) List of primary filers and volumes you want to snapshot #
# ie: filer1:vol1,vol2,vol3;filer2:vol1;filer3:vol2,vol3 #
# VFILERS - List of primary vfilers and their hosting filer/volumes #
# ie: vfiler1@filer1:vol1,vol2,vol3;vfiler2@filer2:vol1,vol2,vol3 #
# SNAPMIROR_VOLUMES - List of primary filers and volumes you want to perform a snapmirror update on #
# ie: filer1:vol1,vol2,vol3;filer2:vol1;filer3:vol2,vol3 #
# SNAPVAULT_VOLUMES - List of primary filers and volumes you want to perform a snapvault update on #
# ie: filer1:vol1,vol2,vol3;filer2:vol1;filer3:vol2,vol3 #
# SNAPVAULT_QTREE_INCLUDE - List of primary filers and qtree paths which should be included in snapvault #
# update. Without this option all qtrees under a volume will be backed up #
# qtrees listed here will be backed up, the rest will be ignored. #
# ie: filer1:/vol/qtree/qtree1,/vol/volume/qtree2;filer2:/vol/volume/qtree1 #
# SNAPMIRROR_CASCADING_VOLUMES List of secondary filers and volumes where you want to perform a snapmirror #
# update from snapvault source volume (snapshot->snapvault->snapmirror) #
# ie: sec-filer1:vol1-sec,vol2-sec #
# NTAP_USERS - (required) List of primary/secondary filers and their corresponding usernames/passowrds #
# ie: filer1:joe/password1;filer2:bob/password2;filer3:ken/password3 #
# SECONDARY_INTERFACES - List of primary filers or vfilers and there secondary interfaces source/destination #
# for snapvault and snapmirror relationships #
# ie: filer1:filer1-source/filer2-destination #
# USE_PROXY - (Y|N) Setting which allows API calls to go through DFM proxy instead of storage controller #
# directly. If this option is used NTAP_USERS is no longer required. #
# MANAGEMENT_INTERFACES - List of primary filers and their management interfaces used for communications #
# ie: filer1:filer1-mgmt;filer2:filer2-mgmt #
# NTAP_PWD_PROTECTION - (Y|N) Setting for enabling password protection, you must first create a scambled #
# password and then save that password in config file #
# TRANSPORT - (HTTP|HTTPS) Setting to use either HTTP or HTTPS to connect to NetApp filer #
# PORT - (80|443) Setting which configures which port number on the NetApp filer to use #
# LOG_NUM - The number of logs .debug and .out for Snap Creator to keep #
# SC_TMP_DIR - The directory used for storing temporary created files #
# The directory must exists and must be writable. #
# If not specified, the system default will be used. #
# SNAPDRIVE - (Y|N) Setting which allows you to use snapdrive instead of ZAPI for snapshot creation #
# SNAPDRIVE_DISCOVERY - (Y|N) Setting which enables use of snapdrive for storage discovery, required in SAN #
# or iSAN environment when using VOLUME_VALIDATION #
# NTAP_SNAPSHOT_DISABLE - (Y|N) Setting which tells Snap Creator to not take a snapshot. The idea of the option #
# is that Snap Creator can handle SnapVault or SnapMirror for SnapManager. In order for #
# that to work the SnapManager snapshots need to follow this naming convention #
# <snapshot name>-<policy>_recent #
# NTAP_SNAPSHOT_CREATE_CMD<#> - SnapDrive command to create a snapshot and flush the file system buffers #
# where "#" is a number 01-xx #
# NTAP_SNAPSHOT_RETENTIONS - (required) Setting which determines the number of netapp snapshots you want to #
# retain for a given policy ie: daily:7,weekly:4,monthly:1 #
# NTAP_SNAPVAULT_RETENTIONS - Setting which determines the number of netapp snapshots on the snapvault #
# secondary you want to retain for a given policy #
# ie: daily:21,weekly:12,monthly:3 #
# NTAP_SNAPVAULT_SNAPSHOT - (Y|N) Setting which enables creation of snapvault snapshot. A snapshot compatible #
# with the storage controller snapvault scheduler. Snapshots are named #
# sv_<policy>.<#> and deletion is handled by the storage controller #
# NTAP_SNAPSHOT_RETENTION_AGE - Setting in (days) which allows you to define a retention age for snapshots. If #
# configured snapshots will only be deleted if there are more than defined in #
# SNAPSHOT Retentions and if they are older than retention age (days) #
# NTAP_SNAPVAULT_RETENTION_AGE - Setting in (days) which allows you to define a retention age for snapvault #
# snapshots. If configured snapvault snapshots will only be deleted if #
# there are more than defined in SNAPVAULT Retentions and if they are older #
# than retention age (days) #
# NTAP_SNAPSHOT_NODELETE - Setting which will override NTAP_SNAPSHOT_RETENTIONS and ensure no snapshots are #
# deleted, leaving this on can cause your netapp volume to fill up #
# NTAP_SNAPVAULT_NODELETE - Setting which will override NTAP_SNAPVAULT_RETENTIONS and ensure no snapshots are #
# deleted, leaving this on can cause your netapp volume to fill up #
# NTAP_SNAPVAULT_RESTORE_WAIT - (Y|N) Setting which waits for snapvault restore to complete. This is #
# recommended since after restore is complete user will be prompted to delete the #
# baseline snapshot which gets created on the primary volume #
# NTAP_SNAPMIRROR_UPDATE - (Y|N) Setting which allows you to turn off and on the snapmirror update function #
# NTAP_SNAPMIRROR_CASCADING_UPDATE - (Y|N) Setting which allows you to turn off and on the cascading snapmirror #
# function (snapshot->snapvault->snapmirror) #
# NTAP_SNAPVAULT_UPDATE - (Y|N) Setting which allows you to turn off and on the snapvault update function #
# NTAP_PM_UPDATE - (Y|N) Setting which allows you to turn off and on the protection manager update function #
# NTAP_SNAPVAULT_WAIT - Time in minutes where we will wait for snapvault update process to complete before #
# taking a snapshot on the snapvault secondary #
# NTAP_SNAPMIRROR_WAIT - Time in minutes where we will wait for snapmirror update process to complete #
# NTAP_SNAPMIRROR_USE_SNAPSHOT - (Y|N) Setting which determines if the snapshot will be sent with the #
# snapmirror update #
# NTAP_SNAPVAULT_MAX_TRANSFER - Maximum bandwidth for snapvault to consume in kbps, if left blank snapvault #
# will consume as much bandwidth as possible #
# NTAP_SNAPMIRROR_MAX_TRANSFER - Maximum bandwidth for snapmirror to consume in kbps, if left blank snapmirror #
# will consume as much bandwidth as possible #
# NTAP_VOL_CLONE_RESERVE - (none|file|volume) Space Guarantee for cloned volume #
# NTAP_LUN_CLONE_RESERVATION - (true|false) If true will reserve space for cloned luns if false will not #
# NTAP_CLONE_IGROUP_MAP - List of filer(s), source volume(s), and the Igroups used to map cloned volumes and #
# luns. IE: filer1:src_volume1/igroup1;filer2:src_volume2/igroup2 #
# NTAP_CLONE_FOR_BACKUP - (Y|N) Setting which determines when the clone is deleted. "Y" will delete clone #
# after it is created, "N" will delete clone before snapshot occurs so during #
# the next run of Snap Creator for the given policy and config. #
# NTAP_CLONE_SECONDARY - (Y|N) Setting which determines where to perform a clone. If "Y" is selected #
# the snapmirror destination will be cloned. This only works with snapmirror #
# and you must set NTAP_SNAPMIRROR_USE_SNAPSHOT=Y #
# NTAP_CLONE_SECONDARY_VOLUMES - Mapping of primary filers and their secondary Filer/volume #
# ie: filer1:secondaryFiler1/vol1;filer2:secondaryFiler2/vol2 #
# NTAP_NUM_VOL_CLONES - Setting which configures how many volume clones Snap Creator will keep #
# NTAP_NFS_EXPORT_HOST - (honstname|ip) The hostname or IP address of the server where cloned volume should be #
# exported #
# NTAP_NFS_EXPORT_ACCESS - (root|read-write|read-only) Setting which controls access permission to cloned vol #
# NTAP_NFS_EXPORT_PERSISTENT -(true|false) Setting which allows for export permissions of cloned vol to be #
# saved in the /etc/exports file on the storage controller. #
# NTAP_DFM_DATA_SET - List of filers and their protection manager to volume correlations #
# ie: filer1:dataset1/vol1,vol2;filer1:dataset2/vol3 #
# NTAP_PM_RUN_BACKUP - (Y|N) Setting which enables starting the Protection Manager backup as soon as the #
# snapshot registration process is complete. Eliminates need to schedule secondary #
# backup in Protection Manager. #
# NTAP_CONSISTENCY_GROUP_SNAPSHOT - (Y|N) Setting which enables use of consistency groups for creating #
# consistent snapshot across multiple volumes ie: IO Fencing #
# NTAP_CONSISTENCY_GROUP_TIMEOUT - (urgent|medium|relaxed) Setting which defines how long filer will wait for #
# consistently group snapshot (urgent=2sec,medium=7sec,relaxed=20sec) #
# NTAP_CONSISTENCY_GROUP_WAFL_SYNC - (Y|N) Setting which can improve performance of the CG snapshot by forcing #
# a CP through a wafl-sync before the cg-start #
# NTAP_SNAPSHOT_DELETE_BY_AGE_ONLY - {PRIMARY|SECONDARY|BOTH|N} Setting which allows the deletion of outdated #
# snapshots, regardless of the retention count #
# NTAP_SNAPSHOT_DEPENDENCY_IGNORE - (Y|N) Setting which allows for ignoring snapshot dependencies when #
# prompted for deletion using the "--action delete" option #
# NTAP_SNAPSHOT_RESTORE_AUTO_DETECT - (Y|N) Setting which if disabled will always force a SFSR when doing a #
# single file restore #
# NTAP_OSSV_ENABLE - (Y|N) Setting which enables the Open Systems Snapvault (OSSV) integration. This option #
# must be used in combination with the NTAP_OSSV_HOMEDIR parameter. OSSV is also #
# required on the host running Snap Creator #
# NTAP_OSSV_HOMEDIR - The path to the OSSV home directory IE: /usr/snapvault #
# NTAP_OSSV_FS_SNAPSHOT - (Y|N) Setting which enables ability to create file system snapshot. The file system #
# snapshot command must be provided using the NTAP_OSSV_FS_SNAPSHOT_CREATE_CMD<#> #
# parameter #
# NTAP_OSSV_FS_SNAPSHOT_CREATE_CMD<#> - Script or command that takes a file system snapshot in use with ossv #
# backup, where "#" is a number between 01-xx #
# OM_HOST - Name or IP Address of your Operations Manager system #
# OM_USER - User name of an Operations Manager user which has privilages to create events #
# OM_PWD - Password of the above Operations Manager User #
# OM_EVENT_GENERATE - (Y|N) Setting which will enable event creation in Operations Manager #
# APP_QUIESCE_CMD<#> - Script or command that puts your application into backup mode, where "#" is a #
# number between 01-xx #
# APP_UNQUIESCE_CMD<#> - Script or command that takes your application out of backup mode, where "#" is a #
# number between 01-xx #
# ARCHIVE_CMD<#> - The archive command command where "#" is a number 01-xx #
# MOUNT_CMD<##> - Mount commands to be used to mount file system for cloning or mount actions where "#" is a #
# number 01-xx #
# UMOUNT_CMD<##> - Umount commands to be used to mount file system for cloning or mount actions where "#" is a #
# number 01-xx #
# PRE_APP_QUIESCE_CMD<#> - The pre application quiesce command where "#" is a number 01-xx #
# PRE_NTAP_CMD<#> - The pre netapp command where "#" is a number 01-xx #
# PRE_APP_UNQUIESCE_CMD<#> - The pre application unquiesce command where "#" is a number 01-xx #
# PRE_NTAP_CLONE_DELETE_CMD<#> - The pre netapp clone delete command where "#" is a number 01-xx #
# PRE_RESTORE_CMD<#> - The pre restore command where "#" is a number 01-xx #
# PRE_EXIT_CMD<#> - Command which will run before Snap Creator exists due to an error #
# ie: you want to return application or backup into normal mode before Snap Creator exist #
# due to an error. Where "#" is a number between 01-xx #
# PRE_CLONE_CREATE_CMD<#> - The pre clone create command, where "#" is a number 01-xx #
# POST_APP_QUIESCE_CMD<#> - The post application quiesce command where "#" is a number 01-xx #
# POST_NTAP_CMD<#> - The post netapp command where "#" is a number 01-xx #
# POST_APP_UNQUIESCE_CMD<#> - The post application unquiesce command where "#" is a number 01-xx #
# POST_NTAP_DATA_TRANSFER_CMD<#> - The post data transfer command runs after SnapVault or SnapMirror transfer #
# Where "#" is a number 01-xx. #
# POST_RESTORE_CMD<#> - The post restore command where 2#" is a number 01-xx #
# POST_CLONE_CREATE_CMD<#> - The post clone create command, where "#" is a number 01-xx #
# NTAP_ASUP_ERROR_ENABLE - (Y|N) Setting which enables Snap Creator error messages to also log an auto support #
# message on the NetApp storage controller. Snap Creator will always create an info #
# auto support message when the backup has started and completed #
# SENDTRAP - Command which interfaces with your monitoring software or email, allows you to pass alerts #
# generated from Snap Creator into your own monitoring infrastructure. The %MSG variable is the #
# message sent from Snap Creator #
# SUCCESS_TRAP - Command which interfaces with your monitoring software or email, allows you to pass the #
# success message generated from Snap Creator into your own monitoring infrastructure #
# The %SUCCESS_MSG variable is the success message for Snap Creator #
# SUCCESS_MSG - Upon Snap Creator success will log the message you define and also send it to SENDTRAP, if #
# SENDTRAP is defined #
#################################################################################################################
########################
### Required Options ###
########################
CONFIG_TYPE=STANDARD
SNAME=tmas_qmas
SNAP_TIMESTAMP_ONLY=Y
VOLUMES=FAS6210-SAS:dmas_oracledat,dmas_oracleloga,dmas_oraclelogb
NTAP_SNAPSHOT_RETENTIONS=daily:7
NTAP_USERS=FAS6210-SAS:root/53616c7465645f5f751ecce4e68d803f66fcbe83902f35088b578f5f3320a5f0;FAS6210-SATA:root/53616c7465645f5f2ca84030635d71363835e6943abe3559694ad67dfcd41a7d
NTAP_PWD_PROTECTION=Y
TRANSPORT=HTTP
PORT=80
LOG_NUM=10
SC_TMP_DIR=
##########################
### Connection Options ###
##########################
VFILERS=
MANAGEMENT_INTERFACES=
SECONDARY_INTERFACES=
USE_PROXY=N
########################
### Snapshot Options ###
########################
NTAP_SNAPSHOT_RETENTION_AGE=
SNAPDRIVE=N
SNAPDRIVE_DISCOVERY=N
NTAP_SNAPSHOT_DISABLE=N
NTAP_SNAPSHOT_NODELETE=N
NTAP_CONSISTENCY_GROUP_SNAPSHOT=N
NTAP_CONSISTENCY_GROUP_TIMEOUT=medium
NTAP_CONSISTENCY_GROUP_WAFL_SYNC=N
NTAP_SNAPSHOT_DELETE_BY_AGE_ONLY=N
NTAP_SNAPSHOT_DEPENDENCY_IGNORE=N
NTAP_SNAPSHOT_RESTORE_AUTO_DETECT=Y
#########################
### SnapVault Options ###
#########################
NTAP_SNAPVAULT_UPDATE=Y
SNAPVAULT_VOLUMES=FAS6210-SAS:dmas_oracledat,dmas_oracleloga,dmas_oraclelogb
SNAPVAULT_QTREE_INCLUDE=
NTAP_SNAPVAULT_RETENTIONS=daily:30
NTAP_SNAPVAULT_RETENTION_AGE=30
NTAP_SNAPVAULT_SNAPSHOT=N
NTAP_SNAPVAULT_NODELETE=N
NTAP_SNAPVAULT_RESTORE_WAIT=N
NTAP_SNAPVAULT_WAIT=60
NTAP_SNAPVAULT_MAX_TRANSFER=
##########################
### SnapMirror Options ###
##########################
NTAP_SNAPMIRROR_UPDATE=N
NTAP_SNAPMIRROR_CASCADING_UPDATE=N
SNAPMIRROR_VOLUMES=
SNAPMIRROR_CASCADING_VOLUMES=
NTAP_SNAPMIRROR_WAIT=60
NTAP_SNAPMIRROR_USE_SNAPSHOT=N
NTAP_SNAPMIRROR_MAX_TRANSFER=
#######################
### Cloning Options ###
#######################
NTAP_VOL_CLONE_RESERVE=none
NTAP_LUN_CLONE_RESERVATION=false
NTAP_CLONE_IGROUP_MAP=
NTAP_CLONE_FOR_BACKUP=Y
NTAP_CLONE_SECONDARY=N
NTAP_CLONE_SECONDARY_VOLUMES=
NTAP_NUM_VOL_CLONES=1
NTAP_NFS_EXPORT_HOST=
NTAP_NFS_EXPORT_ACCESS=
NTAP_NFS_EXPORT_PERSISTENT=
##################################
### Protection Manager Options ###
##################################
NTAP_PM_UPDATE=N
NTAP_DFM_DATA_SET=
NTAP_PM_RUN_BACKUP=N
####################
### OSSV Options ###
####################
NTAP_OSSV_ENABLE=N
NTAP_OSSV_HOMEDIR=
NTAP_OSSV_FS_SNAPSHOT=N
NTAP_OSSV_FS_SNAPSHOT_CREATE_CMD01=
###################################
### Operations Manager Settings ###
###################################
OM_HOST=xx.xx.xx.xxx
OM_USER=xxxxxxxxx
OM_PWD=53616c7465645f5ff399c875799452ecc7cf3124f82aedd7
OM_PORT=8088
OM_EVENT_GENERATE=N
####################
### APP Commands ###
####################
ARCHIVE_CMD01=
MOUNT_CMD01=
UMOUNT_CMD01=
####################
### Pre Commands ###
####################
PRE_APP_QUIESCE_CMD01=
PRE_NTAP_CMD01=
PRE_NTAP_CLONE_DELETE_CMD01=
PRE_APP_UNQUIESCE_CMD01=
PRE_RESTORE_CMD01=
PRE_CLONE_CREATE_CMD01=
#####################
### Post Commands ###
#####################
POST_APP_QUIESCE_CMD01=
POST_NTAP_CMD01=
POST_NTAP_DATA_TRANSFER_CMD01=
POST_APP_UNQUIESCE_CMD01=
POST_RESTORE_CMD01=
POST_CLONE_CREATE_CMD01=
###########################
### Event Configuration ###
###########################
NTAP_ASUP_ERROR_ENABLE=N
SENDTRAP=
SUCCESS_TRAP=
SUCCESS_MSG=INFO: NetApp Snap Creator Framework finished successfully ( Action: %ACTION )
###################################
### Client/Server Configuration ###
###################################
##################################################################################################################
# SC_AGENT - <hostname or ip>:<port> #
# Snap Creator has the capability to perform tasks on remote hosts. A task is either a defined module #
# (parameter APP_NAME) or a command specified with the parameters *_CMD*, e. g. NTAP_SNAPSHOT_CREATE_CMD01 #
# #
# To specify a remote host, enter it's name or ip address followed by a colon and the port, the Snap Creator #
# Agent listening on. #
# On the remote host, start the Snap Creator Agent #
# <path to Snap Creator>/snapcreator --start-agent <port> #
# #
# If running in local mode, the parameter must be left blank. #
# #
# SC_CLONE_TARGET - <hostname or ip of the clone target>:<port> #
# Snap Creator has the capability to perform clone operations. Using the action 'clone_vol' in combination #
# with {PRE|POST}_CLONE_CREATE_CMDxx to handle the storage objects on the remote side #
# (e. g. mounting/unmounting filesytems), either the module is enabled to perform all the necessary #
# activities to #
# #
# To specify a clone target, enter it's name or ip address followed by a colon and the port, the Snap Creator #
# Agent listening on. #
# #
# SC_AGENT_TIMEOUT - Number, timeout in seconds #
# The implemented client/server architecture uses a timeout mechanism. This means, if the client does not #
# respond in the specified interval, the server fails with a timeout message. However, the task on the client #
# is left untouched (will not be aborted) and needs further investigation. #
# #
# On server with high load or known, long running tasks like own scripts or complex SnapDrive operations #
# it might be necessary to extend the timeout and adapt this value on your own needs. #
# #
# Per default, a timeout of 300 seconds is used. #
# #
# SC_AGENT_WATCHDOG_ENABLE - (Y|N) #
# Snap Creator starts a watchdog process while quiescing the database. After the period specified with #
# SC_AGENT_UNQUIESCE_TIMEOUT, the database will be brought into normal operation automatically. #
# #
# SC_AGENT_UNQUIESCE_TIMEOUT - Number, timeout in seconds #
# Time to wait after a database quiesce operation to bring back the database into normal operation mode again. #
# This is only available in combination with SC_AGENT_WATCHDOG_ENABLE=Y #
# #
# Per default, SC_AGENT_TIMEOUT + 5 is used. #
##################################################################################################################
SC_AGENT=hostname:9090
SC_AGENT_TIMEOUT=
SC_AGENT_UNQUIESCE_TIMEOUT=
SC_CLONE_TARGET=
SC_AGENT_WATCHDOG_ENABLE=
#############################
### Plugin Module Options ###
#############################
##################################################################################################################################
# APP_NAME - (oracle|db2|mysql|domino|vibe|smsql|sybase|<plugin>) Setting which determines which plug-in is being used. #
# Snap Creator has built-in support for the listed applications. A community plug-in can be used or you can configure #
# APP_QUIESCE_CMD01, APP_UNQUIESCE_CMD01, and PRE_EXIT_CMD01 #
# #
# APP_IGNORE_ERROR - (Y|N) Will cause Snap Creator to not exit when encountering an application error. An error #
# message will be sent if SENDTRAP is configured but Snap Creator will not exit. This may be useful if you #
# are backing up multiple databases and dont want a single database failure to stop the backup #
# #
# VALIDATE_VOLUMES - (DATA) Snap Creator validates that all volumes where the database resides are in fact part of the backup. #
# Currently, there are some limitations. Only NFS is supported and only for db2, maxdb, mysql, and Oracle. #
# Currently, this option only checks data files only for the above databases. Going forward, support for more data #
# types like LOG will be added. #
# #
# APP_DEFINED_RESTORE - (Y|N) The normal cli restore insterface will not be shown. Instead the plug-in is responsible for #
# handling all restore activities including the restore of the snapshot. The built-in plug-ins do not support #
# this type of a restore #
# #
# APP_DEFINED_CLONE - (Y|N) The built-in cloning abilities of Snap Creator will be ignored. Instead the plug-in is responsible #
# for handling all clone activities including vol or lun clone creation and deletion. The built-in plug-ins do not #
# support this type of clone. #
# APP_AUTO_DISCOVERY - (Y|N) Similar to APP_DEFINED _RESTORE this parameter allows the plug-in to handle application discovery. #
# As with restore the plug-in must handle this operation itself #
# #
# APP_CONF_PERSISTENCY - (Y|N) If APP_AUTO_DISCOVERY is used configuration parameters can be changed dynamically. This setting #
# allows any changes to be saved, meaning the configuration file updated dynamically. #
##################################################################################################################################
APP_NAME=oracle
APP_IGNORE_ERROR=N
VALIDATE_VOLUMES=
APP_DEFINED_RESTORE=N
APP_DEFINED_CLONE=N
APP_AUTO_DISCOVERY=N
APP_CONF_PERSISTENCE=Y
############################
### Archive Log Settings ###
############################
##########################################################################################
# ARCHIVE_LOG_ENABLE - (Y|N) Setting which Enables Archive Log Management #
# (deletion of old archive logs) #
# ARCHIVE_LOG_RETENTION - Retention in Days for how long archive logs should be kept #
# ARCHIVE_LOG_DIR - Path to where the archive logs are stored #
# ARCHIVE_LOG_EXT - File Extension for the archive logs, must be <something>.<extension> #
# ie: 109209011.log in which case you would enter "log" #
##########################################################################################
ARCHIVE_LOG_ENABLE=N
ARCHIVE_LOG_RETENTION=7
ARCHIVE_LOG_DIR=
ARCHIVE_LOG_EXT=
###############################
### General Plugin Settings ###
###############################
####################
### DB2 Settings ###
####################
#################################################################################################################
# DB2_DATABASES - List of database(s) and their username separated by a comma #
# #
# DB2_DATABASES=db1:user1;db2:user2 #
# #
# DB2_CMD - Path to the db2 command, for connecting to the database(s). #
# If not specified, sqllib/db2 will be used. #
# #
#################################################################################################################
DB2_DATABASES=
DB2_CMD=/path/to/db2
#######################
### MYSQL Settings ###
#######################
#################################################################################################################
# MYSQL_DATABASES - List of database(s) and their username/password separated by a comma of which you #
# #
# MYSQL_DATABASES=db1:user1/password;db2:user2/password #
# #
# HOST - Name of Host where the database(s) are running, ie: localhost #
# PORTS - List of Database(s) and the ports they are listening on #
# #
# PORTS=db1:3307;db2:3308 #
# #
# MASTER_SLAVE - (Y|N)If the Database(s) are part of a MASTER or SLAVE Environment #
#################################################################################################################
MYSQL_DATABASES=
HOST=
PORTS=
MASTER_SLAVE=N
#######################
### Oracle Settings ###
#######################
#################################################################################################################
# ORACLE_DATABASES - A list of Database(s) and the username ie: db1:user1;db2:user2 #
# #
# ORACLE_DATABASES=db1:user1;db2:user2 #
# #
# SQLPLUS_CMD - PATH to the sqlplus command #
# CNTL_FILE_BACKUP_DIR - Path to the directory where we should store backup cntl files #
# (must be owned by oracle user) #
# ORA_TEMP - Path to a directoy for storing temp files ie: /tmp (oracle user must have permissions) #
# ARCHIVE_LOG_ONLY - (Y|N) Tells Oracle Module to only do a switch log, useful if you are handling #
# archive logs separate from data backup #
# ORACLE_HOME - PATH to the Oracle home directory #
# ORACLE_HOME_<SID> - PATH to Oracle home directory for a given SID. When backing up multiple databases it #
# may be important to specfiy more than one Oracle home #
# ORACLE_EXPORT_PARAMETERS - (Y|N) The ORACLE_HOME and ORACLE_SID environment parameres will be exported using #
# the export command. This only applies to Unix systems #
#################################################################################################################
ORACLE_DATABASES=TMAS:oratmas
SQLPLUS_CMD=/ou52/app/oratmas/product/11.2.0/db_1/bin/sqlplus
CNTL_FILE_BACKUP_DIR=/tmp
ORA_TEMP=/tmp
ARCHIVE_LOG_ONLY=N
ORACLE_HOME=/ou52/app/oratmas/product/11.2.0/db_1
ORACLE_EXPORT_PARAMETERS=Y
#####################################
### SnapManager Exchange Settings ###
#####################################
#################################################################################################################
# SME_PS_CONF - Path to the powershell configuration file for SME #
# #
# SME_PS_CONF="C:\Program Files\NetApp\SnapManager for Exchange\smeShell.psc1" #
# #
# SME_BACKUP_OPTIONS - SME backup options, Snap Creator uses a powershell new-backup cmdlet #
# #
# SME_BACKUP_OPTIONS=-Server 'EX2K10-DAG01' -GenericNaming -ManagementGroup 'Standard' -NoTruncateLogs $False #
# -RetainBackups 8 -StorageGroup 'dag01_db01' -BackupCopyRemoteCCRNode $False #
# #
# SME_SERVER_NAME - SME server name #
# SME_32bit - (Y|N) Setting to enable use of 32bit version of powershell #
#################################################################################################################
SME_PS_CONF=
SME_BACKUP_OPTIONS=
SME_SERVER_NAME=servername
SME_32bit=N
###################################
### Sybase Settings (unix only) ###
###################################
#################################################################################################################
# #
# GLOBAL (CORE) SNAP CREATOR CONFIGURATION (external to the SYBABE module) #
# #
# - Set 'APP_NAME=SYBASE' #
# - If you want to validate configutation then set 'VALIDATE_VOLUMES=DATA,OFFLINE_LOG,EXTERNAL_FILES' #
# - DATA - logs and data as resported by sp_helpdb #
# - OFFLINE_LOG - Dump location, extracted from SYBASE_TRAN_DUMP #
# - EXTENAL_FILE - Manifest files, extracted from SYBASE_MANIFEST #
# - If you want to auto discovery storage then set "APP_AUTO_DISCOVERY=Y" as NFS based filer volumes #
# which will override the "VOLUMES" option #
# - If you want to permanantly update the configuration file with the results of the storage auto #
# discovery then set "APP_CONF_PERSISTENCE=Y" #
# - If you want to use encrypted password then set "NTAP_PWD_PROTECTION=Y". NOTE this also sets #
# encryption on filer passwords. #
# #
# LOCAL (SYBASE module) SNAP CREATOR CONFIGURATION (internal to the SYBASE module) #
# #
# - SYBASE_SERVER the sybase dataserver name (-S option on isql) #
# example: SYBASE_SERVER=p_test #
# - SYBASE_DATABASES the list of databses within the instance to backup. Format is #
# "DB1:USER:PASSWD;DB2:USER:PASSWD" The master database is added as a matter of course. If a datbase #
# called "+ALL" is used then database autodiscovery will be used and excludes the sybsyntax, #
# sybsystemdb, sybsystemprocs and tempdb databases. The password are used are passwd to isql as -P. #
# excrypted passwords are supported if NTAP_PWD_PROTECTION is set. #
# example: SYBASE_DATABASES=DBAtest2:sa/53616c7465645f5f76881b465ca8ce3745c239b60e04351e #
# example: SYBASE_DATABASES=+ALL:sa/53616c7465645f5f76881b465ca8ce3745c239b60e04351e #
# - SYBASE_DATABASES_EXCLUDE allows databases to be excluded if the +ALL construct is used use ';' #
# to allow multiple databases to be used. #
# example: SYBASE_DATABASES_EXCLUDE=pubs2;test_db1. #
# - SYBASE_TRAN_DUMP allows post snapshot sybase transaction dump to be performed. Each database #
# requiring a txn dump needs to be specified. Format is "DB1:PATH;DB2:PATH" where path is a #
# diectory. #
# example: SYBASE_TRAN_DUMP=pubs2:/sybasedumps/pubs2 #
# - SYBASE_TRAN_DUMP_FORMAT Allows the dump naming convention to use sepecified to match site #
# specific formats. Three "keys" can be sepecified, %S = Instance name from SYBASE_SERVER #
# %D is datbase from SYBASE_DATABASES and %T is a unique timestamp. Default format is %S_%D_%T.cmn #
# example: SYBASE_TRAN_DUMP_FORMAT=%S_%D_%T.log #
# - SYBASE_TRAN_DUMP_COMPRESS Allows native sybase transaction dump compression to be enabled. #
# example: SYBASE_TRAN_DUMP_COMPRESS=Y #
# - SYBASE_ISQL_CMD defines the path to the "isql" command to use. #
# example: SYBASE_ISQL_CMD=/opt/sybase/OCS-15_0/bin/isql #
# - SYBASE the location of the sybase install #
# example: SYBASE=/sybase #
# - SYBASE_LOGDIR defines the directory where snap creator logs will be placed #
# example: SYBASE_LOGDIR=/usr/local/ntap/scServer3.3.0/logs #
# - SYBASE_MANIFEST allows those databases where a manifest file should be created and the location #
# where they should be placed. Needed for database mount to be supported. #
# example: SYBASE_MANIFEST=DBAtest2:/t_inf_nzl_devs/ #
# - SYBASE_MANIFEST_FORMAT Allows the manifest naming convention to use sepecified to match site #
# specific formats. Three "keys" can be sepecified, %S = Instance name from SYBASE_SERVER #
# %D is datbase from SYBASE_DATABASES and %T is a unique timestamp which is the same as used for #
# snapshot naming. Default format is %S_%D_%T.manifest #
# example: SYBASE_MANIFEST_FORMAT=%S_%D_%T.manifest #
# - SYBASE_MANIFEST_DELETE allows the manifest to be deleted once the snapshot has been performed. The #
# manifest file should be captured within the snapshot so it is always available with the backup. #
# example: SYBASE_MANIFEST_DELETE=Y #
# #
#################################################################################################################
SYBASE_SERVER=
SYBASE_DATABASES=
SYBASE_DATABASES_EXCLUDE=
SYBASE_TRAN_DUMP=
SYBASE_TRAN_DUMP_FORMAT=
SYBASE_TRAN_DUMP_COMPRESS=Y
SYBASE_ISQL_CMD=/path/to/isql
SYBASE=/path/to/sybase
SYBASE_LOGDIR=/path/to/logs
SYBASE_MANIFEST=
SYBASE_MANIFEST_FORMAT=
SYBASE_MANIFEST_DELETE=Y
############################
### VIBE Custom Settings ###
############################
#####################################################################################################
# AUTHENTICATION PARAMETERS #
# ------------------------- #
# VIBE_VCLOUD_IPADDR - IP address(es) of the vCloud Director to log into (vCloud ONLY) #
# VIBE_VCLOUD_USER - Username to use when logging into the vCloud Director (vCloud ONLY), #
# and you must set @<org> or @system (top level vCloud database) #
# #
# Example: #
# #
# VIBE_VCLOUD_USER=administrator@system #
# #
# VIBE_VCLOUD_PASSWD - Password associated to the VCLOUD_USER specified (vCloud ONLY) #
# VIBE_VCENTER_USER - Username to use when logging into vCenter #
# VIBE_VCENTER_PASSWD - Password associated to the VCENTER_USER specified #
# #
# OBJECT PARAMETERS #
# ----------------- #
# VIBE_VCLOUD_NAMES - list of Organization, vDC and vApp object names to backup (vCloud ONLY) #
# #
# Example: #
# #
# VIBE_VCLOUD_NAMES=ORG:VDC1,VDC2:VAPP1,VAPP2;ORG2:VDC3:;ORG3::VAPP6 #
# #
# VIBE_VSPHERE_NAMES - list of Datastores and VMs to backup per vCenter (vSphere ONLY) #
# VIBE_TRIM_VSPHERE_NAMES - list of VMs to remove from backup per vCenter (vSphere ONLY) #
# #
# Example: #
# #
# VIBE_VSPHERE_NAMES=VCENTER1:DS1:VM1;VCENTER2;DS2,DS3:;VCENTER3::VM4 #
# VIBE_TRIM_VSPHERE_NAMES=VCENTER1:VM99;VCENTER2:VM5,VM12 #
# #
# RESTORE PARAMETERS #
# ------------------ #
# VIBE_RESTORE_INTERVAL - time between each restore check (default: 30 seconds) #
# VIBE_RESTORE_TIME - total time to wait for complete restore (default: 3600 seconds) #
# VIBE_VMWARE_SNAPSHOT - take a VMware snapshot during backup (default: Y) #
# #
# ALTERNATE PARAMETERS #
# -------------------- #
# VIBE_NOPING - do not ICMP ping VMware or storage controllers (default: N) #
# VIBE_DYNAMIC_VOLUMES_UPDATE - if set to 'N', does not perform dynamic volume update, which means #
# you have to set VOLUMES, SNAPVAULT_VOLUMES, SNAPMIRROR_VOLUMES and #
# NTAP_DFM_DATA_SET manually (default: not set) #
#####################################################################################################
VIBE_VCLOUD_IPADDR=
VIBE_VCLOUD_USER=
VIBE_VCLOUD_PASSWD=
VIBE_VCENTER_USER=
VIBE_VCENTER_PASSWD=
VIBE_VCLOUD_NAMES=
VIBE_VSPHERE_NAMES=
# VIBE_TRIM_VSPHERE_NAMES=
# VIBE_RESTORE_INTERVAL=30
# VIBE_RESTORE_TIME=3600
# VIBE_VMWARE_SNAPSHOT=Y
# VIBE_NOPING=Y
################################
### SnapManager SQL Settings ###
################################
##################################################################################################
# SMSQL_PS_CONF - Path to the powershell configuration file for SMSQL #
# #
# SMSQL_PS_CONF="C:\Program Files\NetApp\SnapManager for SQL Server\smsqlShell.psc1" #
# #
# SMSQL_BACKUP_OPTIONS - SMSQL backup options, Snap Creator uses a powershell new-backup cmdlet #
# #
# SMSQL_BACKUP_OPTIONS=-svr 'SQL' -d 'SQL\SHAREPOINT', '1', 'WSS_Content' -RetainBackups 7 #
# -lb -bksif -RetainSnapofSnapInfo 8 -trlog -gen -mgmt standard #
# #
# SMSQL_SERVER_NAME - SMSQL server name #
# SMSQL_32bit - (Y|N) Setting to enable use of 32bit version of powershell #
##################################################################################################
SMSQL_PS_CONF=
SMSQL_BACKUP_OPTIONS=
SMSQL_SERVER_NAME=servername
SMSQL_32bit=N
#############################
### Lotus Domino Settings ###
#############################
###################################################################################
# #
# DOMINO_DATA_PATH - Path to Domino Data directory #
# eg:/notes/notesdata #
# DOMINO_INI_PATH - Path to notes.ini file (include notes.ini in path) #
# eg:/notes/notesdata/notes.ini #
# DOMINO_CHANGE_INFO_PATH - Path where change info files should be saved. #
# Use a different volume than Domino Data or Log paths #
# eg:/notes/changeinfo #
# DOMINO_DATABASE_TYPE - Can be any of the following values #
# 0 = Backup everything ( 1+2+3 below) #
# 1 = Backup only for *.BOX files. #
# 2 = Backup only for *.NSF, *.NSG and *.NSH files. #
# 3 = Backup only for *.NTF files. #
# LOTUS - Path where Domino is installed #
# eg:/opt/ibm/lotus #
# Notes_ExecDirectory - Path that contains Domino shared object(.so or .dll) #
# files eg:/opt/ibm/lotus/notes/latest/linux/ #
# DOMINO_RESTORE_DATA_PATH - Path to restored lotus data directory #
# Use the same volume as the Domino Data path #
# Use the same path as DOMINO_DATA_PATH if restoring to #
# the same location eg:/notes/notesdata #
###################################################################################
DOMINO_DATA_PATH=
DOMINO_INI_PATH=
DOMINO_CHANGE_INFO_PATH=
DOMINO_DATABASE_TYPE=0
LOTUS=
Notes_ExecDirectory=
DOMINO_RESTORE_DATA_PATH=
############################
### PostgresSQL Settings ###
############################
#################################################################################################################
# POSTGRES_DATABASES - A list of Database(s) and the username ie: db1:user1;db2:user2 #
#
# POSTGRES_DATABASES=db1:user1;db2:user2
#
# PSQL_CMD - PATH to the psql command #
# NO_PASSWORD_OPTION - (Y|N)Option supported by PostGresSQL 8.4 and higher which doesn't require #
# PGPASSWORD to be set #
#################################################################################################################
POSTGRES_DATABASES=
PSQL_CMD=/path/to/psql
NO_PASSWORD_OPTION=Y
#######################
### MAXDB Settings ###
#######################
#################################################################################################################
# XUSER_ENABLE - (Y|N) Enables the use of an xuser for maxdb so password is not required for db user #
# HANDLE_LOGWRITER - (Y|N) Executes suspend logwriter/resume logwriter if set to Y #
# DBMCLICMD - The path to the MaxDB dbmcli command, if not set dbmcli on the search path is used #
# SQLCLICMD - The path to the MaxDB sqlcli command, if not set sqlcli on the search path is used #
# MAXDB_UPDATE_HIST_LOG - (Y|N) Tells the maxdbbackup program if it should update maxdb hostory log #
# MAXDB_DATABASES - List of database(s) and their username/password separated by a comma of which you #
# want to backup #
# #
# MAXDB_DATABASES=db1:user1/password;db2:user2/password
#
# MAXDB_CLONE_META - #
# source_sid - Database SID used on the source side #
# target_sid - Database SID used on the target side #
# target_db_path - top level directory of the MaxDB installation #
# dbm_user - Target Database Management User #
# dbm_passwd - Password or authentication key of dbm_user #
# dbadmin_user - Target SQL User with DBA privileges #
# dbadmin_passwd - Password or authentication key of dbadmin_user #
# os_user - Target operating system user #
# os_group - Primary group of os_user #
# #
# MAXDB_CLONE_META=PRD:QAS,/sapdb/QAS,dbm/secret,dbadmin/secret,sdb/sdba #
# #
# MAXDB_CLONE_ADAPT_FS - #
# target_sid - Database SID used on the target side #
# path - full qualified path, file access permissions will be changed recursively #
# wildcards are allowed #
# multiple pathes can be specified, separated by a comma #
# #
# MAXDB_CLONE_ADAPT_FS=QAS:/sapdb/QAS/sapdata* #
# #
# MAXDB_CLONE_RENAME_USER - #
# target_sid - Database SID used on the target side #
# source_username- Schema/Table owner on the source side #
# target_username- Schema/Table owner on the target side #
# Multiple source/target user can be specified, separated by a comma #
# #
# MAXDB_CLONE_RENAME_USER=QAS:sapprd/sapqas,sapprddb/sapqasdb #
# #
# MAXDB_CLONE_RESIZE_LOG - #
# target_sid - Database SID used on the target side #
# new log size - log size in pages #
# #
# MAXDB_CLONE_RESIZE_LOG=QAS:2000 #
# #
# MAXDB_SOURCE_PRESERVE_PARAM - #
# source_sid - Database SID used on the source side #
# parameter - Parameter on the source that should be enabled on the target #
# Multiple Parameters can be specified, separated by a comma #
# #
# MAXDB_SOURCE_PRESERVE_PARAM=PRD:MaxLogVolumes,MaxDataVolumes #
# #
# MAXDB_BACKUP_TEMPLATES - specifies for each database a backup template #
# template - must be an existing backup template of type external #
# #
# MAXDB_BG_SERVER_PREFIX - prefix for naming the background server #
# <prefix> - the prefix will be suffixed with the database id, if not specified, na_bg is being used #
# #
#################################################################################################################
XUSER_ENABLE=N
HANDLE_LOGWRITER=Y
DBMCLICMD=/path/to/dbmcli
SQLCLICMD=/path/to/sqlcli
MAXDB_UPDATE_HIST_LOG=Y
MAXDB_DATABASES=
##############################
### MAXDB Cloning Settings ###
##############################
MAXDB_CLONE_META=
MAXDB_CLONE_ADAPT_FS=
MAXDB_CLONE_RENAME_USER=
MAXDB_CLONE_RESIZE_LOG=
MAXDB_SOURCE_PRESERVE_PARAM=
MAXDB_BACKUP_TEMPLATES=
MAXDB_BG_SERVER_PREFIX=
############################################
### SnapCreator Agent Configuration File ###
############################################
######################################################################
### Command to allow or wildcard "*" can be used to allow anything ###
######################################################################
host: scServer@*
command:
regards,
Pinyapatthara
Hi,
In your config file you are using ARCHIVE_LOG_ONLY=N with SC 3.4.
Can you please try with ARCHIVE_LOG_ONLY=Y and let us know?
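That is, in the ### Oracle Settings ### section of the config you posted, change the line to:
ARCHIVE_LOG_ONLY=Y
Per the comments in the config template, this tells the Oracle module to only do a log switch rather than putting the database into hot backup mode, which should at least show whether the begin/end backup handling is what is failing here.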
Regards
Hari
Hi,
We tried this in our test environment and are not able to see this error. Here is the output:
[Mon Jun 4 20:41:47 2012] DEBUG: GMT - Tue Jun 5 00:41:47 2012
[Mon Jun 4 20:41:47 2012] DEBUG: Version: NetApp Snap Creator Framework 3.5.0
[Mon Jun 4 20:41:47 2012] DEBUG: Profile: VMRACbkpprofile
[Mon Jun 4 20:41:47 2012] DEBUG: Config Type: STANDARD
[Mon Jun 4 20:41:47 2012] DEBUG: Action: snap
[Mon Jun 4 20:41:47 2012] DEBUG: Application Plugin: oracle
[Mon Jun 4 20:41:47 2012] DEBUG: File System Plugin: null
[Mon Jun 4 20:41:47 2012] DEBUG: Policy: hourly
[Mon Jun 4 20:41:47 2012] DEBUG: Snapshot Name: Hari-hourly_20120604204147
[Mon Jun 4 20:41:47 2012] INFO: Logfile timestamp: 20120604204147
########## Parsing Environment Parameters ##########
[Mon Jun 4 20:41:47 2012] DEBUG: Parsing VOLUMES - controller: 10.63.164.19 volume: vs1_dnfs_controlfile1
[Mon Jun 4 20:41:47 2012] DEBUG: Parsing VOLUMES - controller: 10.63.164.19 volume: vs1_dnfs_controlfile2
[Mon Jun 4 20:41:47 2012] DEBUG: Parsing VOLUMES - controller: 10.63.164.19 volume: vs1_dnfs_oradata1
[Mon Jun 4 20:41:47 2012] DEBUG: Parsing VOLUMES - controller: 10.63.164.19 volume: vs1_dnfs_oradata2
[Mon Jun 4 20:41:47 2012] DEBUG: Parsing VOLUMES - controller: 10.63.164.19 volume: vs1_dnfs_redolog1
[Mon Jun 4 20:41:47 2012] DEBUG: Parsing NTAP_USERS - controller: 10.63.164.19 user: vsadmin
[Mon Jun 4 20:41:47 2012] DEBUG: Parsing NTAP_SNAPSHOT_RETENTIONS - policy: hourly retention: 7
[Mon Jun 4 20:41:47 2012] DEBUG: Parsing CMODE_CLUSTER_USERS - controller: 10.61.172.246 user: admin
########## PRE APPLICATION QUIESCE COMMANDS ##########
[Mon Jun 4 20:41:47 2012] INFO: No commands defined
########## PRE APPLICATION QUIESCE COMMANDS FINISHED SUCCESSFULLY ##########
########## Application quiesce ##########
[Mon Jun 4 20:41:47 2012] [10.61.181.225:9090(3.5.0.1)] INFO: Quiescing databases
[Mon Jun 4 20:41:47 2012] [10.61.181.225:9090(3.5.0.1)] INFO: Quiescing database vmrac1
[Mon Jun 4 20:41:47 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Verifying correct version of database vmrac1
[Mon Jun 4 20:41:47 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing SQL sequence:
connect / as sysdba;
select * from v$version;
exit;
[Mon Jun 4 20:41:47 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing external sql script [/tmp/pcWbNhXDGf.sc] for database vmrac1
[Mon Jun 4 20:41:50 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Command [ORACLE_HOME=/orabin/app/oracle/product/11.2.0/dbhome_1;export ORACLE_HOME;ORACLE_SID=vmrac1;export ORACLE_SID;/orabin/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus /nolog @/tmp/pcWbNhXDGf.sc] finished with
exit code: [0]
stdout: [
SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 4 20:41:48 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected.
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options]
stderr: []
[Mon Jun 4 20:41:50 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing external sql script [/tmp/pcWbNhXDGf.sc] for database vmrac1 finished successfully
[Mon Jun 4 20:41:50 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Verifying correct of database vmrac1 finished successfully
[Mon Jun 4 20:41:50 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Database vmrac1 is running Oracle 11
[Mon Jun 4 20:41:50 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Verifying RAC status for database vmrac1
[Mon Jun 4 20:41:50 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing SQL sequence:
connect / as sysdba;
show parameter CLUSTER_DATABASE;
exit;
[Mon Jun 4 20:41:50 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing external sql script [/tmp/xFJvrgLama.sc] for database vmrac1
[Mon Jun 4 20:41:51 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Command [ORACLE_HOME=/orabin/app/oracle/product/11.2.0/dbhome_1;export ORACLE_HOME;ORACLE_SID=vmrac1;export ORACLE_SID;/orabin/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus /nolog @/tmp/xFJvrgLama.sc] finished with
exit code: [0]
stdout: [
SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 4 20:41:50 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected.
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cluster_database boolean TRUE
cluster_database_instances integer 2
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options]
stderr: []
[Mon Jun 4 20:41:51 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing external sql script [/tmp/xFJvrgLama.sc] for database vmrac1 finished successfully
[Mon Jun 4 20:41:51 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Database vmrac1 is configured in RAC
[Mon Jun 4 20:41:51 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Verifying RAC status for database vmrac1 finished successfully
[Mon Jun 4 20:41:51 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Verifying archive log mode of database vmrac1
[Mon Jun 4 20:41:51 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing SQL sequence:
connect / as sysdba;
select name, log_mode from sys.v$database;
exit;
[Mon Jun 4 20:41:51 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing external sql script [/tmp/HTq8tvYgH6.sc] for database vmrac1
[Mon Jun 4 20:41:53 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Command [ORACLE_HOME=/orabin/app/oracle/product/11.2.0/dbhome_1;export ORACLE_HOME;ORACLE_SID=vmrac1;export ORACLE_SID;/orabin/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus /nolog @/tmp/HTq8tvYgH6.sc] finished with
exit code: [0]
stdout: [
SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 4 20:41:52 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected.
NAME LOG_MODE
--------- ------------
VMRAC ARCHIVELOG
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options]
stderr: []
[Mon Jun 4 20:41:53 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing external sql script [/tmp/HTq8tvYgH6.sc] for database vmrac1 finished successfully
[Mon Jun 4 20:41:53 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Verifying archive log mode of database vmrac1 finished successfully
[Mon Jun 4 20:41:53 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Creating a backup controlfile for vmrac1 to /tmp/bkpctrl/vmrac1_preBackup_trace
[Mon Jun 4 20:41:53 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing SQL sequence:
connect / as sysdba;
alter database backup controlfile to trace as '/tmp/bkpctrl/vmrac1_preBackup_trace' reuse;
exit;
[Mon Jun 4 20:41:53 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing external sql script [/tmp/VHlcPUi111.sc] for database vmrac1
[Mon Jun 4 20:41:55 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Command [ORACLE_HOME=/orabin/app/oracle/product/11.2.0/dbhome_1;export ORACLE_HOME;ORACLE_SID=vmrac1;export ORACLE_SID;/orabin/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus /nolog @/tmp/VHlcPUi111.sc] finished with
exit code: [0]
stdout: [
SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 4 20:41:54 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected.
Database altered.
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options]
stderr: []
[Mon Jun 4 20:41:55 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing external sql script [/tmp/VHlcPUi111.sc] for database vmrac1 finished successfully
[Mon Jun 4 20:41:55 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Creating a backup controlfile for vmrac1 to /tmp/bkpctrl/vmrac1_preBackup_trace finished successfully
[Mon Jun 4 20:41:55 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: ARCHIVE_LOG_ONLY option selected, only performing archive log switch
[Mon Jun 4 20:41:55 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing SQL sequence:
connect / as sysdba;
alter system archive log current;
exit;
[Mon Jun 4 20:41:55 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing external sql script [/tmp/3Ek1Wl_r2B.sc] for database vmrac1
[Mon Jun 4 20:42:08 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Command [ORACLE_HOME=/orabin/app/oracle/product/11.2.0/dbhome_1;export ORACLE_HOME;ORACLE_SID=vmrac1;export ORACLE_SID;/orabin/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus /nolog @/tmp/3Ek1Wl_r2B.sc] finished with
exit code: [0]
stdout: [
SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 4 20:41:56 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected.
System altered.
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options]
stderr: []
[Mon Jun 4 20:42:08 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Executing external sql script [/tmp/3Ek1Wl_r2B.sc] for database vmrac1 finished successfully
[Mon Jun 4 20:42:08 2012] [10.61.181.225:9090(3.5.0.1)] DEBUG: Archive Log only backup for database vmrac1 finished successfully
[Mon Jun 4 20:42:08 2012] [10.61.181.225:9090(3.5.0.1)] INFO: Quiescing databases finished successfully
########## POST APPLICATION QUIESCE COMMANDS ##########
[Mon Jun 4 20:42:08 2012] INFO: No commands defined
########## POST APPLICATION QUIESCE COMMANDS FINISHED SUCCESSFULLY ##########
########## PRE NETAPP COMMANDS ##########
[Mon Jun 4 20:42:08 2012] INFO: No commands defined
########## PRE NETAPP COMMANDS FINISHED SUCCESSFULLY ##########
[Mon Jun 4 20:42:08 2012] DEBUG: ZAPI REQUEST
<system-get-ontapi-version></system-get-ontapi-version>
[Mon Jun 4 20:42:08 2012] TRACE: ZAPI RESULT
<results status="passed">
<major-version>1</major-version>
<minor-version>16</minor-version>
</results>
[Mon Jun 4 20:42:08 2012] DEBUG: creating executor for storage controller 10.63.164.19
########## Detecting Data OnTap mode for 10.63.164.19 ##########
[Mon Jun 4 20:42:08 2012] DEBUG: ZAPI REQUEST
<system-get-version></system-get-version>
[Mon Jun 4 20:42:08 2012] TRACE: ZAPI RESULT
<results status="passed">
<build-timestamp>1331154238</build-timestamp>
<is-clustered>true</is-clustered>
<version>NetApp Release RollingRock__8.1.1 Cluster-Mode: Wed Mar 07 21:03:58 PST 2012</version>
<version-tuple>
<system-version-tuple>
<generation>8</generation>
<major>1</major>
<minor>1</minor>
</system-version-tuple>
</version-tuple>
</results>
[Mon Jun 4 20:42:08 2012] INFO: Data OnTap Cluster mode detected
[Mon Jun 4 20:42:08 2012] DEBUG: Connected to 10.63.164.19 using API Version 1.16
[Mon Jun 4 20:42:08 2012] DEBUG: ZAPI REQUEST
<system-get-ontapi-version></system-get-ontapi-version>
[Mon Jun 4 20:42:08 2012] TRACE: ZAPI RESULT
<results status="passed">
<major-version>1</major-version>
<minor-version>16</minor-version>
</results>
[Mon Jun 4 20:42:08 2012] DEBUG: creating executor for storage controller 10.61.172.246
########## Detecting Data OnTap mode for 10.61.172.246 ##########
[Mon Jun 4 20:42:08 2012] DEBUG: ZAPI REQUEST
<system-get-version></system-get-version>
[Mon Jun 4 20:42:08 2012] TRACE: ZAPI RESULT
<results status="passed">
<build-timestamp>1331154238</build-timestamp>
<is-clustered>true</is-clustered>
<version>NetApp Release RollingRock__8.1.1 Cluster-Mode: Wed Mar 07 21:03:58 PST 2012</version>
<version-tuple>
<system-version-tuple>
<generation>8</generation>
<major>1</major>
<minor>1</minor>
</system-version-tuple>
</version-tuple>
</results>
[Mon Jun 4 20:42:08 2012] INFO: Data OnTap Cluster mode detected
[Mon Jun 4 20:42:08 2012] DEBUG: Connected to 10.61.172.246 using API Version 1.16
[Mon Jun 4 20:42:08 2012] INFO: Discover cmode cluster nodes on 10.61.172.246
[Mon Jun 4 20:42:08 2012] DEBUG: ZAPI REQUEST
<system-node-get-iter>
<max-records>50</max-records>
</system-node-get-iter>
[Mon Jun 4 20:42:09 2012] TRACE: ZAPI RESULT
<results status="passed">
<attributes-list>
<node-details-info>
<cpu-busytime>0</cpu-busytime>
<cpu-firmware-release>5.1.1</cpu-firmware-release>
<env-failed-fan-count>0</env-failed-fan-count>
<env-failed-fan-message>There are no failed fans.</env-failed-fan-message>
<env-failed-power-supply-count>1</env-failed-power-supply-count>
<env-failed-power-supply-message></env-failed-power-supply-message>
<env-over-temperature>true</env-over-temperature>
<is-epsilon-node>true</is-epsilon-node>
<is-node-cluster-eligible>true</is-node-cluster-eligible>
<is-node-healthy>true</is-node-healthy>
<node>TESO-01</node>
<node-location>RTP-BLD1-F3</node-location>
<node-model>FAS3270</node-model>
<node-nvram-id>1573990847</node-nvram-id>
<node-owner></node-owner>
<node-serial-number>700000658310</node-serial-number>
<node-system-id>1573990847</node-system-id>
<node-uptime>5760370</node-uptime>
<node-uuid>d900e5dc-2a6f-11e1-ae58-cd29c828fd13</node-uuid>
<node-vendor>NetApp</node-vendor>
<nvram-battery-status>battery_ok</nvram-battery-status>
<product-version>NetApp Release RollingRock__8.1.1: Wed Mar 07 21:03:58 PST 2012</product-version>
</node-details-info>
<node-details-info>
<cpu-busytime>0</cpu-busytime>
<cpu-firmware-release>5.1.1</cpu-firmware-release>
<env-failed-fan-count>0</env-failed-fan-count>
<env-failed-fan-message>There are no failed fans.</env-failed-fan-message>
<env-failed-power-supply-count>1</env-failed-power-supply-count>
<env-failed-power-supply-message></env-failed-power-supply-message>
<env-over-temperature>true</env-over-temperature>
<is-epsilon-node>false</is-epsilon-node>
<is-node-cluster-eligible>true</is-node-cluster-eligible>
<is-node-healthy>true</is-node-healthy>
<node>TESO-02</node>
<node-location>RTP-BLD1-F3</node-location>
<node-model>FAS3270</node-model>
<node-nvram-id>1573991289</node-nvram-id>
<node-owner></node-owner>
<node-serial-number>700000658322</node-serial-number>
<node-system-id>1573991289</node-system-id>
<node-uptime>5760366</node-uptime>
<node-uuid>6ce6439b-2a70-11e1-9e15-27f87b1b9d7b</node-uuid>
<node-vendor>NetApp</node-vendor>
<nvram-battery-status>battery_ok</nvram-battery-status>
<product-version>NetApp Release RollingRock__8.1.1: Wed Mar 07 21:03:58 PST 2012</product-version>
</node-details-info>
<node-details-info>
<cpu-busytime>0</cpu-busytime>
<cpu-firmware-release>5.1.1</cpu-firmware-release>
<env-failed-fan-count>0</env-failed-fan-count>
<env-failed-fan-message>There are no failed fans.</env-failed-fan-message>
<env-failed-power-supply-count>1</env-failed-power-supply-count>
<env-failed-power-supply-message></env-failed-power-supply-message>
<env-over-temperature>true</env-over-temperature>
<is-epsilon-node>false</is-epsilon-node>
<is-node-cluster-eligible>true</is-node-cluster-eligible>
<is-node-healthy>true</is-node-healthy>
<node>TESO-03</node>
<node-location>RTP-BLD1-F3</node-location>
<node-model>FAS3270</node-model>
<node-nvram-id>1573990780</node-nvram-id>
<node-owner></node-owner>
<node-serial-number>700000657990</node-serial-number>
<node-system-id>1573990780</node-system-id>
<node-uptime>5760358</node-uptime>
<node-uuid>de00947b-2a81-11e1-b202-b7f1431e8526</node-uuid>
<node-vendor>NetApp</node-vendor>
<nvram-battery-status>battery_ok</nvram-battery-status>
<product-version>NetApp Release RollingRock__8.1.1: Wed Mar 07 21:03:58 PST 2012</product-version>
</node-details-info>
<node-details-info>
<cpu-busytime>0</cpu-busytime>
<cpu-firmware-release>5.1.1</cpu-firmware-release>
<env-failed-fan-count>0</env-failed-fan-count>
<env-failed-fan-message>There are no failed fans.</env-failed-fan-message>
<env-failed-power-supply-count>1</env-failed-power-supply-count>
<env-failed-power-supply-message></env-failed-power-supply-message>
<env-over-temperature>true</env-over-temperature>
<is-epsilon-node>false</is-epsilon-node>
<is-node-cluster-eligible>true</is-node-cluster-eligible>
<is-node-healthy>true</is-node-healthy>
<node>TESO-04</node>
<node-location>RTP-BLD1-F3</node-location>
<node-model>FAS3270</node-model>
<node-nvram-id>1573991566</node-nvram-id>
<node-owner></node-owner>
<node-serial-number>700000658009</node-serial-number>
<node-system-id>1573991566</node-system-id>
<node-uptime>3484552</node-uptime>
<node-uuid>b78ba334-2a81-11e1-96a8-bf7e9e4ced11</node-uuid>
<node-vendor>NetApp</node-vendor>
<nvram-battery-status>battery_ok</nvram-battery-status>
<product-version>NetApp Release RollingRock__8.1.1: Wed Mar 07 21:03:58 PST 2012</product-version>
</node-details-info>
</attributes-list>
<num-records>4</num-records>
</results>
[Mon Jun 4 20:42:09 2012] INFO: Discover cmode cluster nodes on 10.61.172.246 completed successfully
########## Generating Info ASUP on 10.63.164.19 ##########
[Mon Jun 4 20:42:09 2012] DEBUG: ZAPI REQUEST
<system-get-ontapi-version></system-get-ontapi-version>
[Mon Jun 4 20:42:09 2012] TRACE: ZAPI RESULT
<results status="passed">
<major-version>1</major-version>
<minor-version>16</minor-version>
</results>
[Mon Jun 4 20:42:09 2012] DEBUG: creating executor for storage controller 10.61.172.246
[Mon Jun 4 20:42:09 2012] DEBUG: ZAPI REQUEST
<ems-autosupport-log>
<app-version>NetApp Snap Creator Framework 3.5.0</app-version>
<auto-support>false</auto-support>
<category>Backup Started</category>
<computer-name>snapcreator.rtp.netapp.com [10.61.181.225:9090]</computer-name>
<event-description>INFO: NetApp Snap Creator Framework 3.5.0 Backup for Hari ACTION: snap POLICY: hourly Plugin: oracle - Supported Volumes: 10.63.164.19:vs1_dnfs_controlfile1,vs1_dnfs_controlfile2,vs1_dnfs_oradata1,vs1_dnfs_oradata2,vs1_dnfs_redolog1 Started</event-description>
<event-id>0</event-id>
<event-source>SNAPCREATOR</event-source>
<log-level>6</log-level>
</ems-autosupport-log>
[Mon Jun 4 20:42:09 2012] TRACE: ZAPI RESULT
<results status="passed"></results>
[Mon Jun 4 20:42:09 2012] INFO: NetApp ASUP create on 10.61.172.246:TESO-01 finished successfully
########## Gathering Information for 10.63.164.19:vs1_dnfs_controlfile1 ##########
[Mon Jun 4 20:42:09 2012] INFO: Performing NetApp Snapshot Inventory for vs1_dnfs_controlfile1 on 10.63.164.19
[Mon Jun 4 20:42:09 2012] DEBUG: ZAPI REQUEST
<snapshot-list-info>
<volume>vs1_dnfs_controlfile1</volume>
<terse>true</terse>
</snapshot-list-info>
[Mon Jun 4 20:42:09 2012] TRACE: ZAPI RESULT
<results status="passed">
<snapshots>
<snapshot-info>
<access-time>1338077701</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>2</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>21</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>weekly.2012-05-27_0015</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>5</percentage-of-used-blocks>
<snapshot-instance-uuid>001eaea5-a791-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>001eaea5-a791-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338483534</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>2</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>18</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531080808</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>3</percentage-of-used-blocks>
<snapshot-instance-uuid>e7cb2528-ab41-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>e7cb2528-ab41-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338523675</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>2</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>16</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531191709</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>2</percentage-of-used-blocks>
<snapshot-instance-uuid>5daea042-ab9f-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>5daea042-ab9f-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338524282</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>1</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>15</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531192713</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>2</percentage-of-used-blocks>
<snapshot-instance-uuid>c7158960-aba0-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>c7158960-aba0-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338524830</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>1</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>13</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531193622</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>2</percentage-of-used-blocks>
<snapshot-instance-uuid>0d90503e-aba2-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>0d90503e-aba2-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338527032</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>1</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>12</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531201304</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>2</percentage-of-used-blocks>
<snapshot-instance-uuid>2e26f706-aba7-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>2e26f706-aba7-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338527284</access-time>
<busy>true</busy>
<cumulative-percentage-of-total-blocks>1</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>10</cumulative-percentage-of-used-blocks>
<dependency>busy,vclone</dependency>
<name>vmrac-hourly_20120531201718</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>2</percentage-of-used-blocks>
<snapshot-instance-uuid>c483de29-aba7-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>c483de29-aba7-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338538671</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>1</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>9</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>snapmirror.b5a69961-2b2f-11e1-b71b-123478563412_10_2147484824.2012-06-01_081751</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>2</percentage-of-used-blocks>
<snapshot-instance-uuid>47f9bd57-abc2-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>47f9bd57-abc2-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338567248</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>1</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>7</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120601072322</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>2</percentage-of-used-blocks>
<snapshot-instance-uuid>d0f1802e-ac04-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>d0f1802e-ac04-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338567377</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>1</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>6</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120601072532</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>2</percentage-of-used-blocks>
<snapshot-instance-uuid>1e2c93eb-ac05-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>1e2c93eb-ac05-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338682501</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>4</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>weekly.2012-06-03_0015</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>29038495-ad11-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>29038495-ad11-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338768601</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>4</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>daily.2012-06-04_0010</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>a09462e2-add9-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>a09462e2-add9-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338855001</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>4</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>daily.2012-06-05_0010</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>4</percentage-of-used-blocks>
<snapshot-instance-uuid>caede03c-aea2-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>caede03c-aea2-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
</snapshots>
</results>
[Mon Jun 4 20:42:09 2012] INFO: NetApp Snapshot Inventory of vs1_dnfs_controlfile1 on 10.63.164.19 completed Successfully
########## Gathering Information for 10.63.164.19:vs1_dnfs_controlfile2 ##########
[Mon Jun 4 20:42:09 2012] INFO: Performing NetApp Snapshot Inventory for vs1_dnfs_controlfile2 on 10.63.164.19
[Mon Jun 4 20:42:09 2012] DEBUG: ZAPI REQUEST
<snapshot-list-info>
<volume>vs1_dnfs_controlfile2</volume>
<terse>true</terse>
</snapshot-list-info>
[Mon Jun 4 20:42:09 2012] TRACE: ZAPI RESULT
<results status="passed">
<snapshots>
<snapshot-info>
<access-time>1338077702</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>5</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>weekly.2012-05-27_0015</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>1</percentage-of-used-blocks>
<snapshot-instance-uuid>00c746ae-a791-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>00c746ae-a791-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338483479</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>4</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531080808</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>1</percentage-of-used-blocks>
<snapshot-instance-uuid>c6e83ae3-ab41-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>c6e83ae3-ab41-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338523620</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>4</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531191709</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>3cdd48cb-ab9f-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>3cdd48cb-ab9f-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338524226</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>4</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531192713</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>a6362be2-aba0-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>a6362be2-aba0-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338524774</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>3</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531193622</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>ecbb6131-aba1-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>ecbb6131-aba1-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338526976</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>3</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531201304</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>0d422288-aba7-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>0d422288-aba7-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338527229</access-time>
<busy>true</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>3</cumulative-percentage-of-used-blocks>
<dependency>busy,vclone</dependency>
<name>vmrac-hourly_20120531201718</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>a3a07101-aba7-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>a3a07101-aba7-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338538616</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>2</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>snapmirror.b5a69961-2b2f-11e1-b71b-123478563412_10_2147484826.2012-06-01_081656</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>26e6c52d-abc2-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>26e6c52d-abc2-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338567193</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>2</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120601072322</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>b011fa7b-ac04-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>b011fa7b-ac04-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338567322</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>2</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120601072532</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>fd3faeee-ac04-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>fd3faeee-ac04-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338682500</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>1</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>weekly.2012-06-03_0015</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>2894eab9-ad11-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>2894eab9-ad11-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338768600</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>1</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>daily.2012-06-04_0010</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>a038c937-add9-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>a038c937-add9-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338855000</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>daily.2012-06-05_0010</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>cab6889a-aea2-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>cab6889a-aea2-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
</snapshots>
</results>
[Mon Jun 4 20:42:09 2012] INFO: NetApp Snapshot Inventory of vs1_dnfs_controlfile2 on 10.63.164.19 completed Successfully
########## Gathering Information for 10.63.164.19:vs1_dnfs_oradata1 ##########
[Mon Jun 4 20:42:09 2012] INFO: Performing NetApp Snapshot Inventory for vs1_dnfs_oradata1 on 10.63.164.19
[Mon Jun 4 20:42:09 2012] DEBUG: ZAPI REQUEST
<snapshot-list-info>
<volume>vs1_dnfs_oradata1</volume>
<terse>true</terse>
</snapshot-list-info>
[Mon Jun 4 20:42:09 2012] TRACE: ZAPI RESULT
<results status="passed">
<snapshots>
<snapshot-info>
<access-time>1338077700</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>4</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>9</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>weekly.2012-05-27_0015</name>
<percentage-of-total-blocks>3</percentage-of-total-blocks>
<percentage-of-used-blocks>7</percentage-of-used-blocks>
<snapshot-instance-uuid>ffc9a121-a790-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>ffc9a121-a790-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338483535</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>1</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>2</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531080808</name>
<percentage-of-total-blocks>1</percentage-of-total-blocks>
<percentage-of-used-blocks>2</percentage-of-used-blocks>
<snapshot-instance-uuid>e8144e9f-ab41-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>e8144e9f-ab41-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338523676</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531191709</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>5e1670fa-ab9f-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>5e1670fa-ab9f-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338524282</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531192713</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>c77a6f0b-aba0-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>c77a6f0b-aba0-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338524830</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531193622</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>0df2e7b4-aba2-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>0df2e7b4-aba2-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338527032</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531201304</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>2e7d241c-aba7-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>2e7d241c-aba7-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338527284</access-time>
<busy>true</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency>busy,vclone</dependency>
<name>vmrac-hourly_20120531201718</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>c4d4e999-aba7-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>c4d4e999-aba7-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338532371</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>snapmirror.b5a69961-2b2f-11e1-b71b-123478563412_10_2147484822.2012-06-01_063251</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>9cc8f473-abb3-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>9cc8f473-abb3-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338567248</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120601072322</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>d14522a9-ac04-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>d14522a9-ac04-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338567378</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120601072532</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>1e78ac8e-ac05-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>1e78ac8e-ac05-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338682500</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>weekly.2012-06-03_0015</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>28b1f194-ad11-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>28b1f194-ad11-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338768600</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>daily.2012-06-04_0010</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>a042038f-add9-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>a042038f-add9-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338855000</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>daily.2012-06-05_0010</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>caaeb84c-aea2-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>caaeb84c-aea2-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
</snapshots>
</results>
[Mon Jun 4 20:42:09 2012] INFO: NetApp Snapshot Inventory of vs1_dnfs_oradata1 on 10.63.164.19 completed Successfully
########## Gathering Information for 10.63.164.19:vs1_dnfs_oradata2 ##########
[Mon Jun 4 20:42:09 2012] INFO: Performing NetApp Snapshot Inventory for vs1_dnfs_oradata2 on 10.63.164.19
[Mon Jun 4 20:42:09 2012] DEBUG: ZAPI REQUEST
<snapshot-list-info>
<volume>vs1_dnfs_oradata2</volume>
<terse>true</terse>
</snapshot-list-info>
[Mon Jun 4 20:42:09 2012] TRACE: ZAPI RESULT
<results status="passed">
<snapshots>
<snapshot-info>
<access-time>1338077702</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>4</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>9</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>weekly.2012-05-27_0015</name>
<percentage-of-total-blocks>4</percentage-of-total-blocks>
<percentage-of-used-blocks>7</percentage-of-used-blocks>
<snapshot-instance-uuid>00cde0fb-a791-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>00cde0fb-a791-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338483480</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>1</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>2</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531080808</name>
<percentage-of-total-blocks>1</percentage-of-total-blocks>
<percentage-of-used-blocks>2</percentage-of-used-blocks>
<snapshot-instance-uuid>c7409171-ab41-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>c7409171-ab41-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338523621</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531191709</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>3d36e87f-ab9f-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>3d36e87f-ab9f-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338524118</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531192550-restore</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>65d3cab9-aba0-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>65d3cab9-aba0-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338524227</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531192713</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>a692a2b3-aba0-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>a692a2b3-aba0-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338524775</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531193622</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>ed085628-aba1-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>ed085628-aba1-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338526977</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531201304</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>0d97287c-aba7-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>0d97287c-aba7-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338527229</access-time>
<busy>true</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency>busy,vclone</dependency>
<name>vmrac-hourly_20120531201718</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>a3f19e75-aba7-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>a3f19e75-aba7-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338532296</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>snapmirror.b5a69961-2b2f-11e1-b71b-123478563412_10_2147484823.2012-06-01_063136</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>6ffeb211-abb3-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>6ffeb211-abb3-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338567193</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120601072322</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>b05e7c67-ac04-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>b05e7c67-ac04-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338567323</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120601072532</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>fd852486-ac04-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>fd852486-ac04-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338682500</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>weekly.2012-06-03_0015</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>289c2194-ad11-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>289c2194-ad11-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338768600</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>daily.2012-06-04_0010</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>a043b1c6-add9-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>a043b1c6-add9-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338855000</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>0</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>daily.2012-06-05_0010</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>cac364a5-aea2-11e1-a5fb-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>cac364a5-aea2-11e1-a5fb-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
</snapshots>
</results>
[Mon Jun 4 20:42:09 2012] INFO: NetApp Snapshot Inventory of vs1_dnfs_oradata2 on 10.63.164.19 completed Successfully
########## Gathering Information for 10.63.164.19:vs1_dnfs_redolog1 ##########
[Mon Jun 4 20:42:09 2012] INFO: Performing NetApp Snapshot Inventory for vs1_dnfs_redolog1 on 10.63.164.19
[Mon Jun 4 20:42:09 2012] DEBUG: ZAPI REQUEST
<snapshot-list-info>
<volume>vs1_dnfs_redolog1</volume>
<terse>true</terse>
</snapshot-list-info>
[Mon Jun 4 20:42:10 2012] TRACE: ZAPI RESULT
<results status="passed">
<snapshots>
<snapshot-info>
<access-time>1338077700</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>10</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>66</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>weekly.2012-05-27_0015</name>
<percentage-of-total-blocks>3</percentage-of-total-blocks>
<percentage-of-used-blocks>40</percentage-of-used-blocks>
<snapshot-instance-uuid>ffdc7699-a790-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>ffdc7699-a790-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338483535</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>6</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>56</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531080808</name>
<percentage-of-total-blocks>3</percentage-of-total-blocks>
<percentage-of-used-blocks>37</percentage-of-used-blocks>
<snapshot-instance-uuid>e86efe48-ab41-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>e86efe48-ab41-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338523676</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>3</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>41</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531191709</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>5e6770ca-ab9f-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>5e6770ca-ab9f-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338524283</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>3</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>41</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531192713</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>c7bf972e-aba0-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>c7bf972e-aba0-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338524830</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>3</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>41</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531193622</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>3</percentage-of-used-blocks>
<snapshot-instance-uuid>0e3c0b78-aba2-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>0e3c0b78-aba2-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338527033</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>3</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>40</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120531201304</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>2ecad58c-aba7-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>2ecad58c-aba7-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338527285</access-time>
<busy>true</busy>
<cumulative-percentage-of-total-blocks>3</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>40</cumulative-percentage-of-used-blocks>
<dependency>busy,vclone</dependency>
<name>vmrac-hourly_20120531201718</name>
<percentage-of-total-blocks>2</percentage-of-total-blocks>
<percentage-of-used-blocks>33</percentage-of-used-blocks>
<snapshot-instance-uuid>c52ec83a-aba7-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>c52ec83a-aba7-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338538671</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>1</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>15</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>snapmirror.b5a69961-2b2f-11e1-b71b-123478563412_10_2147484827.2012-06-01_081751</name>
<percentage-of-total-blocks>1</percentage-of-total-blocks>
<percentage-of-used-blocks>13</percentage-of-used-blocks>
<snapshot-instance-uuid>480f72cb-abc2-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>480f72cb-abc2-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338567249</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>1</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120601072322</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>d194da14-ac04-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>d194da14-ac04-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338567378</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>1</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>vmrac-hourly_20120601072532</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>1eb69928-ac05-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>1eb69928-ac05-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338682500</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>1</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>weekly.2012-06-03_0015</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>28c53feb-ad11-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>28c53feb-ad11-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338768600</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>1</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>daily.2012-06-04_0010</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>0</percentage-of-used-blocks>
<snapshot-instance-uuid>a056f1ee-add9-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>a056f1ee-add9-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
<snapshot-info>
<access-time>1338855000</access-time>
<busy>false</busy>
<cumulative-percentage-of-total-blocks>0</cumulative-percentage-of-total-blocks>
<cumulative-percentage-of-used-blocks>1</cumulative-percentage-of-used-blocks>
<dependency></dependency>
<name>daily.2012-06-05_0010</name>
<percentage-of-total-blocks>0</percentage-of-total-blocks>
<percentage-of-used-blocks>1</percentage-of-used-blocks>
<snapshot-instance-uuid>cac022cd-aea2-11e1-9600-123478563412</snapshot-instance-uuid>
<snapshot-version-uuid>cac022cd-aea2-11e1-9600-123478563412</snapshot-version-uuid>
<vserver>vs1_dnfs_rac</vserver>
</snapshot-info>
</snapshots>
</results>
[Mon Jun 4 20:42:10 2012] INFO: NetApp Snapshot Inventory of vs1_dnfs_redolog1 on 10.63.164.19 completed Successfully
########## Running NetApp Snapshot Rename on Primary 10.63.164.19 ##########
[Mon Jun 4 20:42:10 2012] INFO: Hari-hourly_20120604204147 is the first snapshot taken for 10.63.164.19:vs1_dnfs_controlfile1, Skipping!
[Mon Jun 4 20:42:10 2012] INFO: Hari-hourly_20120604204147 is the first snapshot taken for 10.63.164.19:vs1_dnfs_controlfile2, Skipping!
[Mon Jun 4 20:42:10 2012] INFO: Hari-hourly_20120604204147 is the first snapshot taken for 10.63.164.19:vs1_dnfs_oradata1, Skipping!
[Mon Jun 4 20:42:10 2012] INFO: Hari-hourly_20120604204147 is the first snapshot taken for 10.63.164.19:vs1_dnfs_oradata2, Skipping!
[Mon Jun 4 20:42:10 2012] INFO: Hari-hourly_20120604204147 is the first snapshot taken for 10.63.164.19:vs1_dnfs_redolog1, Skipping!
########## Creating snapshot(s) ##########
[Mon Jun 4 20:42:10 2012] INFO: NetApp Snap Creator Framework 3.5.0 detected that SnapDrive is not being used. File system consistency cannot be guaranteed for SAN/iSAN environments
########## Taking Snapshot on Primary 10.63.164.19:vs1_dnfs_controlfile1 ##########
[Mon Jun 4 20:42:10 2012] INFO: Creating NetApp Snapshot for vs1_dnfs_controlfile1 on 10.63.164.19
[Mon Jun 4 20:42:10 2012] DEBUG: ZAPI REQUEST
<snapshot-create>
<snapshot>Hari-hourly_20120604204147</snapshot>
<volume>vs1_dnfs_controlfile1</volume>
</snapshot-create>
[Mon Jun 4 20:42:10 2012] TRACE: ZAPI RESULT
<results status="passed"></results>
[Mon Jun 4 20:42:10 2012] INFO: NetApp Snapshot Create of Hari-hourly_20120604204147 on 10.63.164.19:vs1_dnfs_controlfile1 Completed Successfully
########## Taking Snapshot on Primary 10.63.164.19:vs1_dnfs_controlfile2 ##########
[Mon Jun 4 20:42:10 2012] INFO: Creating NetApp Snapshot for vs1_dnfs_controlfile2 on 10.63.164.19
[Mon Jun 4 20:42:10 2012] DEBUG: ZAPI REQUEST
<snapshot-create>
<snapshot>Hari-hourly_20120604204147</snapshot>
<volume>vs1_dnfs_controlfile2</volume>
</snapshot-create>
[Mon Jun 4 20:42:10 2012] TRACE: ZAPI RESULT
<results status="passed"></results>
[Mon Jun 4 20:42:10 2012] INFO: NetApp Snapshot Create of Hari-hourly_20120604204147 on 10.63.164.19:vs1_dnfs_controlfile2 Completed Successfully
########## Taking Snapshot on Primary 10.63.164.19:vs1_dnfs_oradata1 ##########
[Mon Jun 4 20:42:10 2012] INFO: Creating NetApp Snapshot for vs1_dnfs_oradata1 on 10.63.164.19
[Mon Jun 4 20:42:10 2012] DEBUG: ZAPI REQUEST
<snapshot-create>
<snapshot>Hari-hourly_20120604204147</snapshot>
<volume>vs1_dnfs_oradata1</volume>
</snapshot-create>
[Mon Jun 4 20:42:11 2012] TRACE: ZAPI RESULT
<results status="passed"></results>
[Mon Jun 4 20:42:11 2012] INFO: NetApp Snapshot Create of Hari-hourly_20120604204147 on 10.63.164.19:vs1_dnfs_oradata1 Completed Successfully
########## Taking Snapshot on Primary 10.63.164.19:vs1_dnfs_oradata2 ##########
[Mon Jun 4 20:42:11 2012] INFO: Creating NetApp Snapshot for vs1_dnfs_oradata2 on 10.63.164.19
[Mon Jun 4 20:42:11 2012] DEBUG: ZAPI REQUEST
<snapshot-create>
<snapshot>Hari-hourly_20120604204147</snapshot>
<volume>vs1_dnfs_oradata2</volume>
</snapshot-create>
[Mon Jun 4 20:42:11 2012] TRACE: ZAPI RESULT
<results status="passed"></results>
[Mon Jun 4 20:42:11 2012] INFO: NetApp Snapshot Create of Hari-hourly_20120604204147 on 10.63.164.19:vs1_dnfs_oradata2 Completed Successfully
########## Taking Snapshot on Primary 10.63.164.19:vs1_dnfs_redolog1 ##########
[Mon Jun 4 20:42:11 2012] INFO: Creating NetApp Snapshot for vs1_dnfs_redolog1 on 10.63.164.19
[Mon Jun 4 20:42:11 2012] DEBUG: ZAPI REQUEST
<snapshot-create>
<snapshot>Hari-hourly_20120604204147</snapshot>
<volume>vs1_dnfs_redolog1</volume>
</snapshot-create>
[Mon Jun 4 20:42:11 2012] TRACE: ZAPI RESULT
<results status="passed"></results>
[Mon Jun 4 20:42:11 2012] INFO: NetApp Snapshot Create of Hari-hourly_20120604204147 on 10.63.164.19:vs1_dnfs_redolog1 Completed Successfully
########## PRE APPLICATION UNQUIESCE COMMANDS ##########
[Mon Jun 4 20:42:11 2012] INFO: No commands defined
########## PRE APPLICATION UNQUIESCE COMMANDS FINISHED SUCCESSFULLY ##########
########## Application unquiesce ##########
########## POST APPLICATION UNQUIESCE COMMANDS ##########
[Mon Jun 4 20:42:11 2012] INFO: No commands defined
########## POST APPLICATION UNQUIESCE COMMANDS FINISHED SUCCESSFULLY ##########
########## Generating Info ASUP on 10.63.164.19 ##########
[Mon Jun 4 20:42:11 2012] DEBUG: ZAPI REQUEST
<system-get-ontapi-version></system-get-ontapi-version>
[Mon Jun 4 20:42:11 2012] TRACE: ZAPI RESULT
<results status="passed">
<major-version>1</major-version>
<minor-version>16</minor-version>
</results>
[Mon Jun 4 20:42:11 2012] DEBUG: creating executor for storage controller 10.61.172.246
[Mon Jun 4 20:42:11 2012] DEBUG: ZAPI REQUEST
<ems-autosupport-log>
<app-version>NetApp Snap Creator Framework 3.5.0</app-version>
<auto-support>false</auto-support>
<category>Backup Completed</category>
<computer-name>snapcreator.rtp.netapp.com [10.61.181.225:9090]</computer-name>
<event-description>INFO: NetApp Snap Creator Framework 3.5.0 Backup for Hari ACTION: snap POLICY: hourly Plugin: oracle - Supported Volumes: 10.63.164.19:vs1_dnfs_controlfile1,vs1_dnfs_controlfile2,vs1_dnfs_oradata1,vs1_dnfs_oradata2,vs1_dnfs_redolog1 Completed</event-description>
<event-id>0</event-id>
<event-source>SNAPCREATOR</event-source>
<log-level>6</log-level>
</ems-autosupport-log>
[Mon Jun 4 20:42:11 2012] TRACE: ZAPI RESULT
<results status="passed"></results>
[Mon Jun 4 20:42:11 2012] INFO: NetApp ASUP create on 10.61.172.246:TESO-01 finished successfully
########## POST NETAPP DATA TRANSFER COMMANDS ##########
[Mon Jun 4 20:42:11 2012] INFO: No commands defined
########## POST NETAPP DATA TRANSFER COMMANDS FINISHED SUCCESSFULLY ##########
########## Running NetApp Snapshot Delete on Primary 10.63.164.19 ##########
########## POST NETAPP COMMANDS ##########
[Mon Jun 4 20:42:26 2012] INFO: No commands defined
########## POST NETAPP COMMANDS FINISHED SUCCESSFULLY ##########
########## ARCHIVE COMMANDS ##########
[Mon Jun 4 20:42:26 2012] INFO: No commands defined
########## ARCHIVE COMMANDS FINISHED SUCCESSFULLY ##########
########## NetApp Snap Creator Framework 3.5.0 finished successfully ##########
[Mon Jun 4 20:42:26 2012] INFO: INFO: Snap Creator finished successfully ( Action: snap )
[Mon Jun 4 20:42:26 2012] DEBUG: Exiting with error code - 0
Your config looks OK except for these settings:
SC_AGENT=hostname:9090
SC_AGENT_TIMEOUT=
You must set SC_AGENT_TIMEOUT; it should be 60:
SC_AGENT_TIMEOUT=60
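So the agent section of the config ends up looking something like this (scagent-host is just an example value; use the actual host where scAgent is running):
SC_AGENT=scagent-host:9090
SC_AGENT_TIMEOUT=60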
In addition, I assume "hostname" is just a placeholder for the host running the agent, and that you used it so you wouldn't have to share that sensitive information?
Keith
Hi Keith,
I can now quiesce the database. I am not using the Oracle plug-in; instead I modified the config as follows:
APP_NAME=
and added APP_QUIESCE_CMD<#> and APP_UNQUIESCE_CMD<#> entries,
then tested the quiesce action again and it succeeded.
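Roughly, the relevant entries in my config now look like the following; the script paths and sqlplus statements below are only an illustration, not my exact commands:
APP_NAME=
APP_QUIESCE_CMD01=/home/oracle/scripts/begin_backup.sh
APP_UNQUIESCE_CMD01=/home/oracle/scripts/end_backup.sh
where begin_backup.sh (run as the Oracle OS user) contains something along these lines:
#!/bin/sh
# Illustrative only: put the database into hot backup mode before the storage snapshot
sqlplus -S / as sysdba <<EOF
alter database begin backup;
exit;
EOF
and end_backup.sh issues "alter database end backup;" in the same way.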
Thank you very much for your support.
Pinyapatthara