Data Backup and Recovery

Request: Oracle plugin support for Oracle RAC

ondemandinf

Hi Team,

We want to use SnapCreator to back up Oracle RAC.

But during testing, we found that the Oracle plugin in SnapCreator does not support multiple nodes.

Is it possible to enhance it to support Oracle RAC?

For example, we could give a list of nodes in the plugin parameters, and the plugin could then connect to each of them to find the right instance.

BR.

James


praveenp

Hi,

When you say "nodes" do you mean DBs? If so, we do support backup of multiple DBs in RAC. You need to separate the DB names with ";", not ",".
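For illustration, an entry might look like the sketch below (the SID:user/password format here is from memory, so please check it against the Oracle plugin documentation for your release):

```
# DB entries separated by ";" rather than ","
ORACLE_DATABASES=AAA:scadmin/password;BBB:scadmin/password
```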

Hope this helps.

Thanks,

Praveena P

steiner

Most customers using RAC set up SnapCreator to act on a single database. The plugin can only act on a single instance, which means a cold backup cannot be done. It can, however, do a hot backup. This does not affect recoverability. You can place a database in hot backup mode from any instance in the RAC cluster.

This does mean that if the instance chosen for SnapCreator is down, backups will not succeed. The config file would need to be changed to point at a different instance on a different agent.
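To make the constraint concrete, a single-instance config is tied to one agent and one SID, roughly like the sketch below (the parameter names and agent port are from memory, so treat them as illustrative):

```
# One agent, one instance: if nodeA is down, this config stops working
SC_AGENT=nodeA:9090
ORACLE_DATABASES=AAA1:scadmin/password
```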

ondemandinf

Hi Steiner,

Let me give you an example:

NodeA is running DB AAA, and NodeB is running DB BBB.

NodeA and NodeB are in the same cluster.

We use NodeA as the SnapCreator agent for DB AAA.

In the normal case, this works fine.

But when there is a crash or host outage on NodeA, DB AAA will be switched over to NodeB.

Then the backup for AAA will fail.

So our question is: is it possible to add both NodeA and NodeB to AAA's configuration?

Each time, the server could go through the node list to find the right node on which to perform the backup.
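The logic we have in mind is something like the sketch below (just pseudo-logic to illustrate the request, not SnapCreator code; it assumes passwordless ssh from the SC server and the standard ora_pmon_<SID> process naming):

```python
import subprocess

def find_node_running_db(nodes, db_name):
    """Return the first node where an instance of db_name is running.

    Each running Oracle instance has a PMON background process named
    ora_pmon_<instance>, so probing for it shows where the DB lives.
    Assumes passwordless ssh from the SC server to every node.
    """
    for node in nodes:
        result = subprocess.run(
            ["ssh", node, "pgrep", "-f", f"^ora_pmon_{db_name}"],
            capture_output=True, text=True,
        )
        if result.returncode == 0 and result.stdout.strip():
            return node
    return None

# DB AAA may be on NodeA or NodeB after a failover:
target = find_node_running_db(["nodeA", "nodeB"], "AAA")
```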

BR.

James

steiner

The main limitation is that a config file points to one and only one agent. That prevents you from placing both nodes in a single config file. The config file is also coded with an instance name, not a database name, so if the target instance is down, the config file no longer works.

I have an idea on how to address this. I can't commit to any timeline, but I would like your opinion on this:

  1. We could establish a new config file parameter called “VERIFY_LOCAL_SID” or something like that.
  2. You would then populate ORACLE_DATABASES with every SID in the RAC database that is a permissible backup target.
  3. The agent would need to be installed on all nodes in the cluster.
  4. The agent would have to be listening and accessible on a VIP or a SCAN.
  5. The local SID for a given database would need to be listed in the /etc/oratab file. Not all DBAs do that.
  6. Only hot backups would be possible. Cold backups would require a much larger effort.
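Putting that together, the config for a three-instance database might look like the sketch below (VERIFY_LOCAL_SID is the proposal, not shipping syntax, and the other values are illustrative):

```
# Hypothetical: let the plugin pick whichever listed SID is local
VERIFY_LOCAL_SID=Y
ORACLE_DATABASES=NTAP1:scadmin/password;NTAP2:scadmin/password;NTAP3:scadmin/password
# Agent address is the SCAN/VIP so it follows a failover
SC_AGENT=rac-scan.example.com:9090
```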

By listening on a VIP or a SCAN, we can ensure that only one config file is needed. If there is a node failure, SC will still be able to contact the agent because the SCAN/VIP will move to a different server.

You could actually make RAC work with a VIP/SCAN now without any changes to SC, but it's not a good workaround. Let's say you have a 4-node RAC database. When the plugin ran, it would connect to an agent, and only one of the 4 SIDs you specified would succeed because the other 3 SIDs are on different servers. To compensate, you'd have to set the IGNORE_ERROR (or something like that) parameters. That makes it difficult to know whether your nightly backups truly succeeded.

The point of the VERIFY_LOCAL_SID flag would be to avoid the need to suppress the errors. When that parameter was set, the plugin could consult the /etc/oratab file to see which SIDs actually exist locally, and it would only attempt to perform the backup on those SIDs.
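The oratab check itself could be as simple as the sketch below (just an illustration of the idea, not plugin code):

```python
def local_sids(oratab_path="/etc/oratab"):
    """Return the SIDs registered on this host in /etc/oratab.

    Non-comment lines have the form SID:ORACLE_HOME:startup_flag.
    """
    sids = set()
    with open(oratab_path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                sids.add(line.split(":")[0])
    return sids

# Back up only the configured SIDs that actually exist locally:
# targets = set(configured_sids) & local_sids()
```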


Possible limitations include:

  1. There could be a short period during RAC failovers where the SCAN/VIP is present on a server with no running instance. The backup would fail in this case.
  2. Not all network configurations would allow SC to access the SCAN/VIP. We can't avoid this requirement without a major rework of the SC framework. Right now, a config file operates on one and only one IP address. If that IP address is the local IP rather than the VIP/SCAN, then the operation will fail if that node fails.

What do you think?

ondemandinf

Hi Steiner,

Thanks for the prompt reply.

I have tried using a SCAN or a VIP as the agent address for a single DB.

I also installed the agent on all nodes.

But the SC server only connects to the right node by chance.

I think it is hard for it to locate the right node with the right DB.

BR.

James

steiner

That is the behavior right now. The change I would propose introduces a new config file option that helps the plugin figure out which instance name is running locally. For example, a config file for database NTAP would require a config file option specifying the instances NTAP1, NTAP2, and NTAP3. When the backup runs, the plugin would consult the /etc/oratab file and figure out which instances are running locally. It could also look at the PMON processes running locally and get the data that way. It's also possible a simple srvctl command would work, but then the plugin would need to know the Grid home as well.
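The PMON approach might look like the sketch below (again just an illustration; it assumes a Unix host and the standard ora_pmon_<instance> process naming):

```python
import re
import subprocess

def running_local_instances():
    """Return the instance names with a PMON process on this host.

    Every running Oracle instance has a background process named
    ora_pmon_<instance>, so scanning the process list finds them.
    """
    ps = subprocess.run(["ps", "-eo", "args"], capture_output=True, text=True)
    return {
        m.group(1)
        for line in ps.stdout.splitlines()
        if (m := re.match(r"ora_pmon_(\w+)", line.strip()))
    }

# For database NTAP, pick whichever configured instance is local:
# local = running_local_instances() & {"NTAP1", "NTAP2", "NTAP3"}
```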

It's really just a question of name resolution. The plugin needs to accept a db_unique_name and then figure out which instance is available on the node where the plugin is executing.

ondemandinf

Hi Steiner,

Our backup is based on the DB; one DB is for one customer.

So in one cluster, there may be two or more running DBs spread across different nodes.

Our request here is quite simple: use SnapCreator to locate the desired DB and perform the backup.

I think the approach you proposed is about how to successfully back up the DBs on the nodes.

Correct me if I am wrong.

BR.

James

steiner

Which version of SnapCreator are you using now? Are you using version 4 with the client-server architecture?
