
SMO re-using old archive log locations - FAILURE

AKATZKAC

I'm having a goofy issue with SMO 3.1 that I wonder if anyone else has come across.

We run Oracle EBS 11i on 10gR2 (10.2.0.2) on Oracle Enterprise Linux 5u5. We're upgrading to R12.1.3/11.2.0.2 and migrating to a NetApp FAS2020 for storage.

SMO Versions:

smo_server v3.1

snapdrived for unix 4.2

Initial creation of a profile and snaps/clones, etc worked great...love the tool, as does my DBA.

In testing different backup scenarios, I induced a failure by locating one of three archive log locations on a local disk not under SnapDrive management, which caused the snap and/or clone to fail--as expected, since SnapDrive can't connect to a local drive (in this case /arch3). I set the database back to use the original archive log location, a NetApp NFS mount (/test_oraarch), started the DB, and switched log files until a new archive log was written; the DB is happy and running fine.
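For context, the induced failure was nothing exotic--just pointing one of the standard archive destinations at local disk and then back again. Roughly like this (the destination number is a guess for illustration; the paths are the real ones from above):

alter system set log_archive_dest_3='LOCATION=/arch3' scope=both;
alter system switch logfile;
-- and to undo it, back to the NetApp NFS mount:
alter system set log_archive_dest_3='LOCATION=/test_oraarch' scope=both;
alter system switch logfile;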

The problem comes when I try to either create a new backup or a new profile of this database...all end in failure.

Examination of the smo log (/var/log/smo/smo_of*.log) shows the problem: during profile creation (and apparently during backup creation as well), SMO runs a SQL query directly against the DB:

SELECT NAME FROM V$ARCHIVED_LOG WHERE NAME IS NOT NULL AND ARCHIVED = 'YES'

This returns all current AND PAST locations of archive logs. It doesn't seem to matter that the control files, the init.ora/spfile, and the SQL*Plus command "archive log list" only show current, NetApp-usable mounts, or that manual DB recovery doesn't depend on where the archive logs USED TO BE.
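To see the mismatch for yourself, you can put the current destinations next to the historic entries; something like this (a sketch--trim the columns to taste):

archive log list
select dest_id, destination, status from v$archive_dest where destination is not null;
select distinct substr(name, 1, instr(name, '/', -1)) dir from v$archived_log where name is not null;

The first two only show the current, NetApp-backed destinations; the last one is where the old /arch3 entries still show up.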

I tried changing the smo.config entry archivedLogs.exclude=/arch3/*, restarting the smo_server process, and reconnecting via the GUI...no joy--I still get the same error (FLOW-11008: Operation failed: DISC-00003: Filesystem / is not on snapshot-capable storage.)
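For anyone who wants to try the same thing, this is the whole attempt in one place (a sketch--I'm assuming the usual smo_server start/stop syntax here; double-check against your own install):

# in smo.config:
archivedLogs.exclude=/arch3/*

# then bounce the SMO server and reconnect from the GUI:
smo_server stop
smo_server start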

So my last hope is to try a manual table update against the v$archived_log table to remove any entries pointing to the non-NetApp/SnapDrive-connectable mounts, so that SMO won't try to enumerate them during the profile or backup creation process.

Wish me luck; even with only four years of Oracle experience, I know manual table updates are risky and totally frowned upon by Oracle support...I could lose the database with one careless keystroke.

Anyway, if anyone's had or heard of this issue and has another workaround, I'm all ears. Not exactly sure why SMO doesn't just use the current archive log location for a new profile.

Andy

5 REPLIES

AKATZKAC

Update on this issue:

OK, so cruising Metalink and Google shows that you can't manually update the V$ views--they're built on fixed X$ tables, and Oracle won't let you run DML against them.
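If you try it anyway, Oracle just refuses (a sketch--the exact message can vary depending on whether you hit the synonym or the underlying fixed view):

delete from v$archived_log where name like '/arch3/%';
ORA-02030: can only select from fixed tables/views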

Luckily, this is a testing instance and we don't need to restore from any backups/archive logs, so I did the following (a rough SQL sketch of the whole sequence follows the list):

- create controlfile to trace

- edit the traced control file, keeping Set #2 (the RESETLOGS version) and putting in my current file locations and archive log settings

- brought the db and listeners down

- removed my existing control files from their regular location

- logged into db as sysdba

- took a deep breath and ran the edited file, which basically created a new incarnation of the database--had this failed, I would be minus one database, meaning a 5-hour clone from physical again...

- ran alter system switch logfile; four times (I have 4 redo log groups) and saw that it created new .arc files in the /oralog location (managed by NetApp)

- before exiting, I queried the database as SMO does:

SELECT NAME FROM V$ARCHIVED_LOG WHERE NAME IS NOT NULL AND ARCHIVED = 'YES'

which returned 8 rows, all of them the new log files I made in the NetApp mounts...all good

- In SMO, I was able to create a new profile. Next is to create my backup schedule, etc., but I think that will go fine now that it's not looking for archive logs in all the wrong places.
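For anyone following along, the whole sequence boils down to roughly this (a sketch, not my literal trace file--the script name is just what I'm calling it here, and the file locations/settings all come from your own instance):

SQL> alter database backup controlfile to trace;
-- grab the trace from user_dump_dest, keep only the Set #2 (RESETLOGS) section,
-- fix up the datafile/redo log paths and archive log settings, save as recreate_cf.sql

SQL> shutdown immediate
-- stop the listeners and move the old control files out of the way, then:

SQL> @recreate_cf.sql
-- the edited script does STARTUP NOMOUNT, CREATE CONTROLFILE ... RESETLOGS ARCHIVELOG,
-- and ALTER DATABASE OPEN RESETLOGS

SQL> alter system switch logfile;
-- repeated four times (once per redo log group), then re-run the V$ARCHIVED_LOG query above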

Anyone else have this issue, and is this something that could be better managed in the SMO interface (either GUI or CLI), i.e., letting the user control which archive log locations to use and which ones to ignore?

Andy

jakub_wartak

You need to clear those old archived redo log entries from the Oracle control files. V$ARCHIVED_LOG is nothing more than a query against the current control file... so you do something like this:

0) rm -f /arch3_on_local_disk_drive/*.dbf

1) rman target /

2) crosscheck archivelog all;

3) delete expired archivelog all;

4) exit

Then re-try your SMO backup. Be advised, however, that your current control file will be pruned and this can have odd consequences (be sure to re-think this if you are using e.g. Data Guard/Streams/other RMAN backups, etc.).
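If you would rather limit the cleanup to just the stale local destination instead of crosschecking everything, RMAN also takes a path pattern (a sketch--adjust the pattern to the real /arch3 layout):

RMAN> crosscheck archivelog like '/arch3/%';
RMAN> delete expired archivelog like '/arch3/%';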

-J.

AKATZKAC

J.-

Your answer is in good company, as my Googling and Metalink searching turned up the same solution, so that may well be a valid method for clearing out those v$archived_log entries--however, we're not using RMAN here, so it's not applicable in our case. Also, on your suggestion that we needed to clear those old archive log locations from the control files: those locations are NOT in my current control files, as evidenced by my "create control file to trace" command, which gives you a vi-readable copy of a control file...my old /arch3 archive log location was not in there. SMO is getting that info from the v$archived_log table inside the database, not from the control file.

Which actually furthers my point that SMO should have some user setting for which archive logs to keep track of and which ones to ignore. Either that, or when creating a profile or new backup, it should use the existing control file or init.ora for archive log locations, not a table in the database where archive logs MAY once have been stored.

jcosta

not sure if you have access to this thread:

http://communities.netapp.com/message/42948#42948

Removing entries in v$archived_log referencing a particular DEST_ID [ID 845361.1]

SQL> execute sys.dbms_backup_restore.resetCfileSection(11);

This will clear all entries in v$archived_log.
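For reference, the archived log history is record section 11 of the control file; to see how many entries are in there before and after clearing it, something like this should work (a sketch):

select type, records_total, records_used from v$controlfile_record_section where type = 'ARCHIVED LOG';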

AKATZKAC

jcosta: that's very helpful...I couldn't get to the community thread, but I found the doc ID on Metalink--thanks.

While it's a better method for clearing out unwanted log entries, it's still not the best solution to this problem, as it wipes out the whole archived log history rather than just the stale entries. I believe running that command would mean the database is completely unrecoverable to a point in time in the past in case of a crash. The only option would be a media recovery, and we'd have to be satisfied with whatever state the data was in when that backup tape was created; any backed-up archive logs and control files would be of no help.

And apologies to poster jakub_wartak: the Metalink doc does indicate that those entries are indeed embedded in the control files...I didn't see them in my recreated control file trace, and in my ignorance thought I was looking at an actual, working copy of a control file.

Andy
