We want to use Snap Creator to back up Oracle RAC.
During testing, however, we found that the Oracle plugin in Snap Creator cannot support multiple nodes.
Is it possible to enhance it to support Oracle RAC?
For example, we could give a list of nodes in the plugin parameters, and the plugin could connect to each of them to find the right instance.
Most customers using RAC set up SnapCreator to act on a single database. The plugin can only act on a single instance, which means a cold backup cannot be done. It can, however, do a hot backup. This does not affect recoverability. You can place a database in hot backup mode from any instance in the RAC cluster.
This does mean that if the particular instance chosen for Snap Creator is down, backups will not succeed. The config file would need to be changed to point at a different instance on a different agent.
Let me give you an example:
NodeA is running database AAA, and NodeB is running database BBB.
NodeA and NodeB are in the same cluster.
We use NodeA as the Snap Creator agent for database AAA.
In the normal case, this works fine.
But if NodeA crashes or goes down, database AAA fails over to NodeB.
Backups of AAA will then fail.
So our question is: is it possible to add both NodeA and NodeB to AAA's configuration?
Each time, the server could go through the node list to find the right node to perform the backup.
The main limitation is that a config file points to one and only one agent, which prevents you from placing both nodes in a single config file. The config file is also coded with an instance name, not a database name, so if the target instance is down the config file no longer works.
I have an idea on how to address this. I can't commit to any timeline, but I would like your opinion on this.
By listening on a VIP or a SCAN, we can ensure that only one config file is needed. If there is a node failure, SC will still be able to contact the agent because the SCAN/VIP will move to a different server.
You could actually make RAC work with a VIP/SCAN now without any changes to SC, but it’s not a good workaround. Let’s say you have a 4-node RAC database. When the plugin ran, it would connect to an agent, and only one of the 4 SIDs you specified would succeed because the other 3 SIDs are on other servers. To compensate, you’d have to set the IGNORE_ERROR (or something like that) parameters. That makes it difficult to know if your nightly backups truly succeeded.
The point of the VERIFY_LOCAL_SID flag would be to avoid the need to suppress the errors. When that parameter was set, the plugin could consult the /etc/oratab file to see which SIDs actually exist locally, and the plugin would only attempt to perform the backup on that SID.
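To make the idea concrete, here is a minimal sketch of the oratab lookup such a flag could perform. VERIFY_LOCAL_SID is only a proposal in this thread, and the function name `local_sids` and the sample file are illustrative, not part of Snap Creator:

```shell
#!/bin/sh
# Sketch: list the instance SIDs defined in an oratab file.
# The path is a parameter so the demo runs anywhere; on a real
# node it would be /etc/oratab.

local_sids() {
    # oratab lines look like: SID:ORACLE_HOME:startup_flag
    # Skip comments and malformed lines, print the SID field.
    awk -F: '!/^[[:space:]]*#/ && NF >= 2 && $1 != "" { print $1 }' "$1"
}

# Demo with a sample oratab:
cat > /tmp/oratab.sample <<'EOF'
# This file is used by ORACLE utilities.
+ASM1:/u01/app/19.0.0/grid:N
NTAP1:/u01/app/oracle/product/19.0.0/dbhome_1:N
EOF
local_sids /tmp/oratab.sample
```

The demo prints `+ASM1` and `NTAP1`; the plugin would then attempt the backup only on SIDs that appear in this list.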
Possible limitations include:
What do you think?
Thanks for the prompt reply.
I have tried using a SCAN or a VIP as the agent address for a single DB.
I also installed the agent on all nodes.
But the SC server connects to the right node only by chance.
I think it is hard for it to locate the right node with the right DB.
That is the behavior right now. The changes I would propose would introduce a new config file option that helps the plugin figure out which instance name is running locally. For example, a config file for database NTAP would require a config file option specifying instances NTAP1, NTAP2, and NTAP3. When the backup runs, the plugin would consult the /etc/oratab file and figure out which instances are running locally. It could also look at the PMON processes running locally and get the data that way. It's also possible a simple srvctl command would work, but then the plugin would need to know the Grid home as well.
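A rough sketch of that selection step, assuming the running SIDs have already been gathered (for example from the local PMON processes); `pick_local_instance` is a hypothetical helper, not an existing plugin function:

```shell
#!/bin/sh
# Sketch: given the SIDs running on this node (stdin) and the
# candidate instances from the config file (arguments), print the
# first candidate actually running here.

pick_local_instance() {
    running="$(cat)"
    for sid in "$@"; do
        if printf '%s\n' "$running" | grep -qx "$sid"; then
            printf '%s\n' "$sid"
            return 0
        fi
    done
    return 1   # none of the configured instances runs on this node
}

# On a real node the running SIDs would come from PMON processes:
#   ps -eo comm= | sed -n 's/^ora_pmon_//p'
# Demo with a canned list, as if only NTAP2 runs here:
printf '+ASM2\nNTAP2\n' | pick_local_instance NTAP1 NTAP2 NTAP3
```

The demo prints `NTAP2`, so the plugin would place only that instance's database in backup mode instead of failing on NTAP1 and NTAP3.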
It's really just a question of name resolution. The plugin needs to accept a db_unique_name and then figure out which instance is available on the node where the plugin is executing.
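As a sketch of that resolution, the plugin could parse the output of `srvctl status database -d <db_unique_name>` and match it against the local host name; `instance_on_node` is a made-up helper for illustration:

```shell
#!/bin/sh
# Sketch: resolve a db_unique_name to the instance on a given node
# by parsing srvctl output, whose lines look like:
#   Instance NTAP1 is running on node node-a

instance_on_node() {
    awk -v node="$1" \
        '$1 == "Instance" && /is running on node/ && $NF == node { print $2; exit }'
}

# Real usage would be something like:
#   srvctl status database -d NTAP | instance_on_node "$(hostname -s)"
# Demo with canned srvctl output:
printf 'Instance NTAP1 is running on node node-a\nInstance NTAP2 is running on node node-b\n' \
    | instance_on_node node-b
```

The demo prints `NTAP2`. Lines of the form "Instance ... is not running on node ..." would not match the filter, which is the desired behavior.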
Our backups are organized per database; each database belongs to one customer.
So in one cluster there may be two or more running databases spread across different nodes.
Our request here is quite simple: use Snap Creator to locate the desired database and perform a backup.
I think the approach you proposed is based on how to successfully back up the databases on the nodes.
Correct me if I am wrong.