I see there is functionality to integrate SnapCreator for backup to tape... how would you use/configure SnapCreator to use NDMP with NetBackup? Do you happen to have any specific examples of SnapCreator integration with NDMP via NetBackup?
POST_NTAP_CMD01=/usr/openv/netbackup/bin/bpbackup -i -p test -s %USER_DEFINED -S sunserver1
Here SC would run the NetBackup CLI after finishing all snapshot tasks. We use the --user_defined CLI parameter to pass in the NetApp backup schedule, since we could have different schedules. You don't need to use user_defined, but I think it helps with this type of integration.
Example of an SC call with --user_defined:
./snapcreator --profile <profile> --action snap --policy daily --user_defined full --verbose
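To make the substitution concrete, here is a small sketch (using the policy and server names from the example config line above) of the command SC would effectively run once %USER_DEFINED is replaced with the value passed via --user_defined:

```shell
#!/bin/sh
# Sketch: SnapCreator replaces %USER_DEFINED in POST_NTAP_CMD01 with the
# value passed on the CLI before running the post command.
USER_DEFINED=full   # from: ./snapcreator ... --user_defined full

# The effective command SC would run (echoed here rather than executed,
# since bpbackup only exists where NetBackup is installed):
CMD="/usr/openv/netbackup/bin/bpbackup -i -p test -s ${USER_DEFINED} -S sunserver1"
echo "$CMD"
```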
Another idea is that you could make the SC policies match the NetBackup schedules; then you could do something like this:
POST_NTAP_CMD01=/usr/openv/netbackup/bin/bpbackup -i -p test -s %SNAP_TYPE -S sunserver1
Here the -s would be daily, for example. I think this is a great solution: SC will run and wait for NetBackup to complete, so SC will know whether the NetApp backup exited with 0 or non-zero, and you can now monitor things end-to-end.
Let us know how you implement this. Lots of folks ask about this, and there are lots of different things you can do.
You should probably install the SC server on the media or master server so it can run the bpbackup commands. Keep in mind that if you use the SC_AGENT, then POST_NTAP_CMD (and all CMDs, for that matter) run where the agent is running. We are working on being able to specify agent or server, but it isn't there yet.
I already have a dedicated system in this environment for SnapCreator... re-installing the framework on the NBMaster server may not be an option. However, since the databases are all on separate dedicated servers already, it would still have the same issue with the agent (from what I understand), as the agent is already used to quiesce the database and is installed on each dedicated DB server (i.e., each container, to be specific).
Since the POST commands are executed on the server that is running the agent (i.e. the DB server), would it make more sense to create a script on the DB server that will 'ssh' back to the master and execute the backup? In this case, the POST command would be a script called something like '/usr/local/adm/runNDMP.sh'. Basically, this is how I would see the flow:
Create a unique backup profile for NDMP on the SnapCreator server
Create a unique snapshot name for this profile just for NDMP, i.e. NDMP_recent
Create a script on the DB server that will ssh back to the master and initiate the NDMP backup
On the SC server, for the NDMP backup profile, set POST_NTAP_CMD01=/usr/local/adm/runNDMP.sh
Create a schedule to run once weekly for this profile via SnapCreator
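As a rough sketch of the ssh step above, runNDMP.sh could look something like this. The master hostname, policy, and schedule names here are placeholders I made up, not values from the actual environment; ssh exits with the remote command's status, so bpbackup's return code flows back unchanged:

```shell
#!/bin/sh
# Hypothetical sketch of /usr/local/adm/runNDMP.sh; the hostname,
# policy, and schedule names are placeholders.

# Kick off the NDMP backup on the master via ssh. The SSH variable is
# overridable (e.g. SSH=echo) for a dry run.
run_ndmp() {
    master=${MASTER:-nbmaster}          # assumed master server hostname
    policy=${POLICY:-NDMP_policy}       # assumed NetBackup policy
    schedule=${SCHEDULE:-NDMP_weekly}   # assumed NetBackup schedule
    ${SSH:-ssh} "$master" \
        "/usr/openv/netbackup/bin/bpbackup -i -p $policy -s $schedule -S $master"
}

# In the real script this call would be the last line, so the script's
# exit code is bpbackup's exit code:
# run_ndmp
```

With the ssh invocation as the last command in the script, a non-zero bpbackup result automatically becomes the script's exit code, so no extra 0/1 handling is strictly required.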
When executing a script via a POST command, if the script fails, does SnapCreator trap the return code from the script, or does SC count the mere fact that the script was able to execute as success, regardless of whether the script ends in success or failure? I could easily set the exit code to "0" or "1" with the script based on the job exit status too...
This sounds like a great plan. I like the idea of a script, triggered from the agent, being used to ssh back to the master server. I think that is the best solution based on the requirements.
What we are going to try to do for a future release of SC is allow you to use another agent for CMDs. We sort of do that now: SC_CLONE_TARGET is a second agent, but it only applies to cloning (--action clone_vol), and only the MOUNT_CMDS, UMOUNT_CMDS, PRE_CLONE_CREATE_CMDS, and POST_CLONE_CREATE_CMDS support this.
So you could have one agent for DB integration, one agent for CMDs, and one agent for cloning. Do you have any ideas on how to make CMDs a bit more granular so they aren't bound to either SC_AGENT, or to the scServer when no agent is provided?
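For reference, a profile using the second agent for cloning might contain something like the fragment below. The hostnames and ports are placeholders, and the exact key syntax is an assumption based on the POST_NTAP_CMD01 pattern shown earlier in this thread:

```
SC_AGENT=dbserver1:9090            # primary agent on the DB server (assumed host:port form)
SC_CLONE_TARGET=cloneserver1:9090  # second agent, used only for clone operations
MOUNT_CMD01=/usr/local/adm/mount_clone.sh  # runs on the clone target, not SC_AGENT
```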
We are hoping to wrap up the backup integration within the next two weeks, and once it's done I will update you on how all of the components fit together for the final solution (or, as I run into issues, of course I will most likely be posting more questions...).
In the past, when I have not had the luxury of agents for local and remote server execution, I kept it simple and leveraged ssh from a dedicated 'master' server in the environment, building a parent-child relationship between the master and the client servers. It's basically the same concept as SnapCreator: I may have multiple config files and/or unique scripts to collect data, initiate SnapMirror updates and backup flows, and generally run/manage systems from a central console. Obviously this has worked great in UNIX environments; with Windows, I think the agent integration will be far easier for admins to adopt and manage. Not sure if this is really what you were asking...