Data Backup and Recovery
Hi everybody!
This is about SMO 3.0.2 backing up Oracle 11g ASM (the OS is Linux 5.3). The SMO backup completes successfully without RMAN. With RMAN, though (either control file or recovery catalog), it fails, and this seems to happen when it has to connect to the snapshot.
To be more specific: SMO creates the snapshot successfully and connects to it, creating a device of the form /dev/mapper/mpathXX. But then it tries to chown a different device name (/dev/mapper/mapperXX), which does not exist, and it fails.
Does anybody know why SMO picks the wrong device name? Is this related to SDU or MPIO, perhaps not configured correctly?
Thank you in advance!
Marditsa
Hi Marditsa
This is neto from Brazil
How are you?
Could you please send the output from:
snapdrive storage show -dev
sanlun lun show
cat /etc/multipath.conf
TIA
All the best
neto
Hi Neto! Thank you for your prompt response! I have the following stored on my laptop. In the morning (local time 🙂) I will be able to access the system and will send you additional info.
snapdrive storage show -dev (after it connects to the snapshot LUN)
Connected LUNs and devices:
device filename             adapter path size  proto state  clone     lun path                                          backing snapshot
--------------------------- ------- ---- ----- ----- ------ --------- ------------------------------------------------- ----------------
/dev/mapper/ora_asm_data11g -       P    20g   fcp   online No        kouvas2:/vol/loulaDATA/lun0                       -
/dev/mapper/ora_asm_fra11g  -       P    30g   fcp   online No        kouvas2:/vol/loulaFRA/lun0                        -
/dev/mapper/ora_ocr         -       P    2g    fcp   online No        kouvas2:/vol/loulaOCR/lun0                        -
/dev/mapper/oraclehome      -       P    35.0g fcp   online No        kouvas2:/vol/artziu01/lun0                        -
/dev/mapper/ora_voting      -       P    2g    fcp   online No        kouvas2:/vol/loulaVOTING/lun0                     -
/dev/mapper/mpath43         -       P    22g   fcp   online lun-clone kouvas2:/vol/loulaFRA10g/lun0_20091116163352733_0 .snapshot/smo_dekag_dekag1_f20091116smo__h_1_4028825224fd692a0124fd692d510001_0/lun0
/dev/mapper/ora_asm_data10g -       P    22g   fcp   online No        kouvas2:/vol/loulaFRA10g/lun0                     -
/dev/mapper/ora_asm_fra10g  -       P    22g   fcp   online No        kouvas2:/vol/loulaDATA10g/lun0                    -
multipath.conf:
defaults {
    user_friendly_names yes
    max_fds max
    queue_without_daemon no
}
blacklist {
    wwid <DevId>
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^sda$"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
devices {
    device {
        vendor "NETAPP"
        product "LUN"
        getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout "/sbin/mpath_prio_ontap /dev/%n"
        features "1 queue_if_no_path"
        hardware_handler "0"
        path_grouping_policy group_by_prio
        failback immediate
        rr_weight uniform
        rr_min_io 128
        path_checker directio
        flush_on_last_del yes
    }
}
multipaths {
    multipath {
        wwid 360a98000486e5463644a536c43666d67
        alias oraclehome
    }
    multipath {
        wwid 360a98000486e5463644a536c43744337
        alias ora_asm_data11g
    }
    multipath {
        wwid 360a98000486e5463644a536c4375584a
        alias ora_asm_fra11g
    }
    multipath {
        wwid 360a98000486e5463644a536c43776a37
        alias ora_voting
    }
    multipath {
        wwid 360a98000486e5463644a536c4376704c
        alias ora_ocr
    }
    multipath {
        wwid 360a98000486e5463644a536f77584764
        alias ora_asm_data10g
    }
    multipath {
        wwid 360a98000486e5463644a536f77564975
        alias ora_asm_fra10g
    }
    multipath {
        wwid 360a98000486e5463644a542d36356b55
        alias ora_asm_data_testdb
    }
    multipath {
        wwid 360a98000486e5463644a542d36385832
        alias ora_asm_fra_testdb
    }
    multipath {
        wwid 360a98000486e5463644a542d362d5634
        alias ora_asm_onlinelog_testdb_dest1
    }
    multipath {
        wwid 360a98000486e5463644a542d36397553
        alias ora_asm_onlinelog_testdb_dest2
        path_grouping_policy failover
    }
    multipath {
        wwid 360a98000486e5463644a54414f4c6d57
        alias ora_asm_spfile_testdb
    }
}
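(For reference, the getuid_callout in the devices section above is how multipath derives those WWID hash IDs, so you can map each alias to its WWID before changing anything. A hypothetical check against one underlying path; the sdb name is assumed, pick a real one from sanlun lun show:)
/sbin/scsi_id -g -u -s /block/sdb                        # prints the WWID, e.g. 360a98000486e5463644a536c43666d67
multipath -ll | grep -A2 360a98000486e5463644a536c43666d67   # shows which map uses that WWID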
Thank you again, Neto!
user_friendly_names yes
This is not supported by SDU.
Change it to no (please comment out the aliases as well).
Restart multipath and SDU.
snapdrive storage show -dev then needs to show /dev/mapper/123455 (the hash ID).
Please let me know.
neto
OK, neto, thanks! I will try this out and will let you know... Have a nice day!
The aliases set up at the bottom of the file may also be contributing. You may need to remove the aliases as well, restart multipathing, and then restart snapdrived (a sketch of the commented-out version follows the example below).
example:
multipaths {
    multipath {
        wwid 360a98000486e5463644a536c43666d67
        alias oraclehome
    }
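With the alias commented out, each entry reduces to just its WWID, so the device surfaces under the hash-based name instead (a hypothetical sketch of the edited block):
multipaths {
    multipath {
        wwid 360a98000486e5463644a536c43666d67
        # alias oraclehome
    }
}
Dropping the multipaths section entirely has the same effect, as the final working file below shows.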
Yes, thank you, I will try this. Our configuration includes RAC, and it was set up with user_friendly_names and the aliases, so it won't start without them. I have to drop RAC, change multipath.conf, and reconfigure RAC, then test SMO again... We will see how this goes. Thank you again.
Hi Neto,
we changed multipath.conf and it works now. This is what it looks like now:
defaults {
    user_friendly_names no
    max_fds max
    queue_without_daemon no
}
blacklist {
    wwid <DevId>
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^sda$"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
devices {
    device {
        vendor "NETAPP"
        product "LUN"
        getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout "/sbin/mpath_prio_ontap /dev/%n"
        features "1 queue_if_no_path"
        hardware_handler "0"
        path_grouping_policy group_by_prio
        failback immediate
        rr_weight uniform
        rr_min_io 128
        path_checker directio
        flush_on_last_del yes
    }
}
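With user_friendly_names set to no and the multipaths section gone, snapdrive storage show -dev lists the LUNs under their WWID-based names, along these lines (an illustrative row built from the oraclehome WWID posted earlier, not a capture from this system):
/dev/mapper/360a98000486e5463644a536c43666d67 - P 35.0g fcp online No kouvas2:/vol/artziu01/lun0 -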
Thank you very much for your help!
Hi my friend
This is neto from Brazil
How are you?
I'm very happy that it is working now.
Please let me know if you need any other help.
Count on me for all.
All the best
neto
NetApp - I love this company!