Data Backup and Recovery

Am I facing BURT 357965?

pierrek
Dear all,
I am preparing an SMO/SDU install on Linux RHEL 5.3 with FCP and dm-multipath. I have read and taken into account the ext3-on-raw-partition limitations of SDU 4.1, resulting in the following setup:
[root@oralinux ~]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted  on
/dev/mapper/VolGroup00-LogVol00
                      71609640   35685896  32227516  53% /
/dev/hdb1               101086     20999     74868   22% /boot
tmpfs                  2097152         0   2097152   0%  /dev/shm
/dev/mapper/VolGroup01-LogVol01
                       10317112   1721932   8071100  18%  /u02
/dev/mapper/VolGroup02-LogVol02
                      10317112     182852   9610180   2%  /u03
/dev/mapper/VolGroup03-LogVol03
                      10317112     308024   9485008   4%  /u04
/dev/mapper/VolGroup04-LogVol04
                      10317112     456512   9336520   5%  /u05
/dev/mapper/VolGroup05-LogVol05
                      10317112     154248   9638784   2% /u06
/dev/mapper/oradata    8388608     269056   8119552   4% /u08
/dev/mapper/oralog     8388608    269056    8119552   4% /u09
/dev/mapper/oractl     8388608    269056   8119552   4%  /u10
/dev/mapper/recovery   8388608    269056   8119552   4%  /u11
The Oracle DB files have been put on /u02 to /u06, and those filesystems were successfully created using snapdrive itself.
The connectivity is FCP (qla2342) and dm-multipath is in use:
[root@oralinux ~]# multipath -ll
mpath2 (360a9800056724776566f515a57616a65)  dm-2 NETAPP,LUN
[size=10G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:1  sdaa 65:160 [active][ready]
\_ 0:0:0:1  sdb  8:16    [active][ready]
asm6 (360a9800056724776566f51634e544749) dm-21  NETAPP,LUN
[size=8.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_  round-robin 0 [prio=8][active]
\_ 1:0:0:20 sdat 66:208  [active][ready]
\_ 0:0:0:20 sdu  65:64  [active][ready]
mpath1 (360a9800056724776566f515a5758397a) dm-26  NETAPP,LUN
[size=10G][features=1 queue_if_no_path][hwhandler=0][rw]
\_  round-robin 0 [prio=8][active]
\_ 0:0:0:0  sda  8:0     [active][ready]
\_ 1:0:0:0  sdz  65:144 [active][ready]
asm5  (360a9800056724776566f51634e544349) dm-18 NETAPP,LUN
[size=8.0G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:17 sdaq 66:160 [active][ready]
\_ 0:0:0:17 sdr  65:16   [active][ready]
asm4 (360a9800056724776566f51634e544464) dm-19  NETAPP,LUN
[size=8.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_  round-robin 0 [prio=8][active]
\_ 1:0:0:18 sdar 66:176  [active][ready]
\_ 0:0:0:18 sds  65:32  [active][ready]
asm10  (360a9800056724776566f51634e544d69) dm-25 NETAPP,LUN
[size=8.0G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:24 sdax 67:16  [active][ready]
\_ 0:0:0:24 sdy  65:128  [active][ready]
asm3 (360a9800056724776566f51634e544579) dm-20  NETAPP,LUN
[size=8.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_  round-robin 0 [prio=8][active]
\_ 1:0:0:19 sdas 66:192  [active][ready]
\_ 0:0:0:19 sdt  65:48  [active][ready]
ocr2  (360a9800056724776566f5162762d6577) dm-11 NETAPP,LUN
[size=8.0G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:10 sdaj 66:48  [active][ready]
\_ 0:0:0:10 sdk  8:160   [active][ready]
asm2 (360a9800056724776566f51634e544179) dm-17  NETAPP,LUN
[size=8.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_  round-robin 0 [prio=8][active]
\_ 1:0:0:16 sdap 66:144  [active][ready]
\_ 0:0:0:16 sdq  65:0   [active][ready]
ocr1  (360a9800056724776566f5162762d6b47) dm-15 NETAPP,LUN
[size=8.0G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:14 sdan 66:112 [active][ready]
\_ 0:0:0:14 sdo  8:224   [active][ready]
voting3 (360a9800056724776566f5162762d6977) dm-14  NETAPP,LUN
[size=8.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_  round-robin 0 [prio=8][active]
\_ 1:0:0:13 sdam 66:96   [active][ready]
\_ 0:0:0:13 sdn  8:208  [active][ready]
asm1  (360a9800056724776566f51634e542d64) dm-16 NETAPP,LUN
[size=8.0G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:15 sdao 66:128 [active][ready]
\_ 0:0:0:15 sdp  8:240   [active][ready]
oraclehome (360a9800056724776566f5162762d6461) dm-10  NETAPP,LUN
[size=8.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_  round-robin 0 [prio=8][active]
\_ 1:0:0:9  sdai 66:32   [active][ready]
\_ 0:0:0:9  sdj  8:144  [active][ready]
voting2  (360a9800056724776566f5162762d6861) dm-13 NETAPP,LUN
[size=8.0G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:12 sdal 66:80  [active][ready]
\_ 0:0:0:12 sdm  8:192   [active][ready]
oradata (360a9800056724776566f5162762d5a43) dm-6  NETAPP,LUN
[size=8.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_  round-robin 0 [prio=8][active]
\_ 1:0:0:5  sdae 65:224  [active][ready]
\_ 0:0:0:5  sdf  8:80   [active][ready]
voting1  (360a9800056724776566f5162762d6747) dm-12 NETAPP,LUN
[size=8.0G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:11 sdak 66:64  [active][ready]
\_ 0:0:0:11 sdl  8:176   [active][ready]
mpath5  (360a9800056724776566f515a576c6d6f) dm-5 NETAPP,LUN
[size=10G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:4  sdad 65:208 [active][ready]
\_ 0:0:0:4  sde  8:64    [active][ready]
asm9 (360a9800056724776566f51634e544b4a) dm-24  NETAPP,LUN
[size=8.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_  round-robin 0 [prio=8][active]
\_ 1:0:0:23 sdaw 67:0    [active][ready]
\_ 0:0:0:23 sdx  65:112 [active][ready]
oralog  (360a9800056724776566f5162762d2f2f) dm-7 NETAPP,LUN
[size=8.0G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:6  sdaf 65:240 [active][ready]
\_ 0:0:0:6  sdg  8:96    [active][ready]
mpath4  (360a9800056724776566f515a57696a57) dm-4 NETAPP,LUN
[size=10G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:3  sdac 65:192 [active][ready]
\_ 0:0:0:3  sdd  8:48    [active][ready]
asm8 (360a9800056724776566f51634e54497a) dm-23  NETAPP,LUN
[size=8.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_  round-robin 0 [prio=8][active]
\_ 1:0:0:22 sdav 66:240  [active][ready]
\_ 0:0:0:22 sdw  65:96  [active][ready]
recovery  (360a9800056724776566f5162762d6347) dm-9 NETAPP,LUN
[size=8.0G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:8  sdah 66:16  [active][ready]
\_ 0:0:0:8  sdi  8:128   [active][ready]
mpath3  (360a9800056724776566f515a57635a46) dm-3 NETAPP,LUN
[size=10G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:2  sdab 65:176 [active][ready]
\_ 0:0:0:2  sdc  8:32    [active][ready]
asm7 (360a9800056724776566f51634e544864) dm-22  NETAPP,LUN
[size=8.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_  round-robin 0 [prio=8][active]
\_ 1:0:0:21 sdau 66:224  [active][ready]
\_ 0:0:0:21 sdv  65:80  [active][ready]
oractl  (360a9800056724776566f5162762d6177) dm-8 NETAPP,LUN
[size=8.0G][features=1  queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
\_  1:0:0:7  sdag 66:0   [active][ready]
\_ 0:0:0:7  sdh  8:112   [active][ready]
Static device aliases are defined in /etc/multipath.conf:
multipaths {
    multipath {
        wwid  360a9800056724776566f515a5758397a
        alias mpath1
    }
...
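For reference, a minimal multipath.conf combining friendly names with static aliases looks roughly like this on RHEL 5 (a sketch, not the full file; the WWID is mpath1's from the output above, and only the sections relevant here are shown):

```
defaults {
    user_friendly_names yes
}

multipaths {
    multipath {
        wwid  360a9800056724776566f515a5758397a
        alias mpath1
    }
    # ...one multipath{} block per aliased LUN
}
```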
So all seems OK, and when using snapdrive to show the devices, I get the following output:
[root@oralinux mapper]# snapdrive storage show -dev
Connected LUNs and devices:
device filename    adapter path size proto state clone       lun path   backing snapshot
----------------     ------- ---- ---- ----- ----- -----      --------   ----------------
/dev/mpath/mpath2         -   P     10g      fcp online No         fas920:/vol/oracle2/qtree1/lun1.lun    -   
/dev/mpath/asm6           -   P     8g      fcp online No          fas920:/vol/asm_on_sata_6/qtree/lun6.lun    -  
/dev/mpath/mpath1         -   P     10g     fcp online No          fas920:/vol/oracle1/qtree1/lun1.lun    -   
/dev/mpath/asm5           -   P     8g      fcp online No          fas920:/vol/asm_on_sata_3/qtree/lun3.lun    -  
/dev/mpath/asm4           -    P     8g      fcp online No         fas920:/vol/asm_on_sata_4/qtree/lun4.lun     -  
/dev/mpath/asm10          -   P     8g      fcp online No          fas920:/vol/asm_on_sata_10/qtree/lun10.lun    -  
/dev/mpath/asm3            -   P     8g      fcp online No          fas920:/vol/asm_on_sata_5/qtree/lun5.lun    -  
/dev/mpath/ocr2           -    P     8g      fcp online No         fas920:/vol/oracle_on_sata_6/lun6.lun     -  
/dev/mpath/asm2           -   P     8g      fcp online No          fas920:/vol/asm_on_sata_2/qtree/lun2.lun    -  
/dev/mpath/ocr1           -    P     8g      fcp online No         fas920:/vol/oracle_on_sata_10/lun10.lun     -  
/dev/mpath/voting3        -   P     8g      fcp online No          fas920:/vol/oracle_on_sata_9/lun9.lun    -  
/dev/mpath/asm1           -    P     8g      fcp online No         fas920:/vol/asm_on_sata_1/qtree/lun1.lun     -  
/dev/mpath/oraclehome     -   P     8g      fcp online No          fas920:/vol/oracle_on_sata_5/lun5.lun    -  
/dev/mpath/voting2        -    P     8g      fcp online No         fas920:/vol/oracle_on_sata_8/lun8.lun     -  
/dev/mpath/oradata        -   P     8g      fcp online No          fas920:/vol/oracle_on_sata_1/lun1.lun    -  
/dev/mpath/voting1        -    P     8g      fcp online No         fas920:/vol/oracle_on_sata_7/lun7.lun     -  
/dev/mpath/mpath5         -   P     10g      fcp online No         fas920:/vol/oracle5/qtree1/lun1.lun    -   
/dev/mpath/asm9           -   P     8g      fcp online No          fas920:/vol/asm_on_sata_9/qtree/lun9.lun    -  
/dev/mpath/oralog         -    P     8g      fcp online No         fas920:/vol/oracle_on_sata_2/lun2.lun     -  
/dev/mpath/mpath4         -   P     10g      fcp online No         fas920:/vol/oracle4/qtree1/lun1.lun    -   
/dev/mpath/asm8           -   P     8g      fcp online No          fas920:/vol/asm_on_sata_8/qtree/lun8.lun    -  
/dev/mpath/recovery       -    P     8g      fcp online No         fas920:/vol/oracle_on_sata_4/lun4.lun     -  
/dev/mpath/mpath3         -   P     10g      fcp online No         fas920:/vol/oracle3/qtree1/lun1.lun    -   
/dev/mpath/asm7           -   P     8g      fcp online No          fas920:/vol/asm_on_sata_7/qtree/lun7.lun    -  
/dev/mpath/oractl         -    P     8g      fcp online No         fas920:/vol/oracle_on_sata_3/lun3.lun     -  
But when launching SMO to create a profile, it hangs forever, waiting for snapdrive to return from the following command:
snapdrive storage show -fs /u02
When I launch it manually, I see the same behaviour: the command never returns. The same is true for
snapdrive storage show -all
Looking at sd-trace.log for this command, it seems to execute correctly and return (I think) the expected result, except for this strange message ("leaked on vgdisplay invocation"):
13:06:25 07/16/09 [2300b90]?,2,2,Job tag:  Bzxl8vU6rY
13:06:25 07/16/09 [2300b90]?,2,2,snapdrive  storage show -fs /u02
13:06:25 07/16/09  [2300b90]v,2,6,FileSpecOperation::FileSpecOperation: 8
13:06:25 07/16/09  [2300b90]v,2,6,StorageOperation::StorageOperation: 8
13:06:25 07/16/09  [2300b90]i,2,2,Job tag Bzxl8vU6rY
13:06:25 07/16/09  [2300b90]i,2,6,Operation::setUserCred user id from soap context:  root
13:06:25 07/16/09 [2300b90]i,2,6,Operation::setUserCred uid:0 gid:0  userName:root
13:06:25 07/16/09 [2300b90]v,2,6,Operation::getUserCred uid:0  gid:0 userName:root
13:06:25 07/16/09  [2300b90]v,2,6,Operation::isNonRootAllowed Exit ret:1
13:06:25 07/16/09  [2300b90]v,2,6,FileSpecOperation::init: starting
13:06:25 07/16/09  [2300b90]i,2,6,Operation::initParallelOps: started
13:06:25 07/16/09  [2300b90]i,2,6,Operation::initParallelOps: succeeded
13:06:25 07/16/09  [2300b90]v,2,6,StorageStack::StorageStack
13:06:25 07/16/09  [2300b90]v,2,6,StorageStack::init Transport type selected: fcp
13:06:25  07/16/09 [2300b90]i,2,6,g_vmtype:linuxlvm, g_fstype:ext3,  g_mptype:linuxmpio.
13:06:25 07/16/09 [2300b90]i,2,6,Adding vmtype:linuxlvm  and mptype:linuxmpio to g_mptype
13:06:25 07/16/09 [2300b90]i,2,6,Adding  vmtype: and mptype:linuxmpio to g_mptype
13:06:25 07/16/09  [2300b90]v,2,6,StorageStack::~StorageStack
13:06:25 07/16/09  [2300b90]i,2,6,FileSpecOperation::initFileSpecList: create 1 filespec 
13:06:25 07/16/09  [343cb90]d,2,34,ScaleableExecutionPort::initScaleableExecutionPort: successful 
13:06:25 07/16/09  [343cb90]d,2,34,ScaleableExecutionPort::startScaleableExecution: successful 
13:06:25 07/16/09 [2300b90]d,2,5,SoftPointer:: FileSystem:/u02
13:06:25  07/16/09 [2300b90]d,2,4,Memorizable:: FileSystem:/u02
13:06:25 07/16/09  [2300b90]d,2,5,SblCommon:: FileSystem:/u02
13:06:25 07/16/09  [2300b90]d,2,8,FileSpec::FileSpec: FileSystem:/u02
13:06:25 07/16/09  [2300b90]d,2,10,FileSystem::FileSystem: /u02
13:06:25 07/16/09  [2300b90]d,2,5,SblCommon::initialize:   FileSystem: /u02
13:06:25 07/16/09  [2300b90]v,2,3,SblDataBank::SblDataBank
13:06:25 07/16/09  [2300b90]v,2,10,FileSystem::init: /u02 - type: ext3 mount options:  rw
13:06:25 07/16/09 [2300b90]v,2,10,FileSystem::isTypeSupported: /u02 file  system assistant ext3 (type: 'ext3')
13:06:25 07/16/09  [2300b90]i,2,10,FileSystem::getPersistentMountStatusFromMountEntry:  PersistentlyMounted = yes
13:06:25 07/16/09  [2300b90]d,2,5,SblCommon::initialize: memorize FileSystem: /u02
13:06:25  07/16/09 [2300b90]d,2,3,SblDataBank::save: object FileSystem:/u02 has been  memorized
13:06:25 07/16/09 [2300b90]v,2,6,Operation::saveErrorReportList:  saved 0 ErrorReports
13:06:25 07/16/09  [2300b90]v,2,6,ErrorReport::cleanErrorReportList: 0 Error Report  objects
13:06:25 07/16/09 [2300b90]v,2,6,Operation::restoreErrorReportList:  (0) restored 0, skipped: 0
13:06:25 07/16/09 [2300b90]d,2,5,SoftPointer::  FileSystem:/u02
13:06:25 07/16/09 [2300b90]i,2,6,Operation::saveObjectName:  saved FileSystem:/u02
13:06:25 07/16/09 [2300b90]d,2,5,SblCommon::initialize:  FileSystem: /u02 (1) +
13:06:25 07/16/09 [2300b90]v,2,10,FileSystem::acquire:  /u02
13:06:25 07/16/09 [2300b90]d,2,5,SoftPointer::~SoftPointer:  FileSystem:/u02
13:06:25 07/16/09  [2300b90]v,2,6,FileSpecOperation::createFileSpec: /u02 does exist, notexisting:  0
13:06:25 07/16/09 [2300b90]d,2,5,SoftPointer:: FileSystem:/u02
13:06:25  07/16/09 [2300b90]v,2,10,FileSystem::release: /u02
13:06:25 07/16/09  [2300b90]d,2,5,SblCommon::deinitialize: FileSystem: /u02 (ref=0)
13:06:25  07/16/09 [2300b90]v,2,6,FileSpecOperation::initFileSpecList: /u02
13:06:25  07/16/09 [2300b90]v,2,6,FileSpecOperation::initFileSpecList: success
13:06:25  07/16/09 [2300b90]d,2,4,Memorizable:: FileSystem:/u02
13:06:25 07/16/09  [2300b90]d,2,5,SblCommon:: FileSystem:/u02
13:06:25 07/16/09  [2300b90]d,2,8,FileSpec::FileSpec: FileSystem:/u02
13:06:25 07/16/09  [2300b90]d,2,10,FileSystem::FileSystem: /u02
13:06:25 07/16/09  [2300b90]d,2,5,SblCommon::initialize:   FileSystem: /u02
13:06:25 07/16/09  [2300b90]d,2,3,SblDataBank::find: found FileSystem:/u02
13:06:25 07/16/09  [2300b90]d,2,17,SblCommon::initialize: /u02 memorized already
13:06:25  07/16/09 [2300b90]d,2,3,SblDataBank::find: found FileSystem:/u02
13:06:25  07/16/09 [2300b90]d,2,10,FileSystem::~FileSystem: /u02
13:06:25 07/16/09  [2300b90]d,2,8,FilerSpec::~FileSpec FileSystem:/u02
13:06:25 07/16/09  [2300b90]d,2,5,SblCommon::~SblCommon: FileSystem:/u02
13:06:25 07/16/09  [2300b90]d,2,4,Memorizable::~Memorizable: FileSystem:/u02
13:06:25 07/16/09  [2300b90]d,2,5,SblCommon::initialize: FileSystem: /u02 (1) +
13:06:25  07/16/09 [2300b90]v,2,10,FileSystem::acquire: /u02
13:06:26 07/16/09  [2300b90]d,2,5,SoftPointer::  HostVolume:/dev/mapper/VolGroup01-LogVol01
13:06:26 07/16/09  [2300b90]d,2,4,Memorizable::  HostVolume:/dev/mapper/VolGroup01-LogVol01
13:06:26 07/16/09  [2300b90]d,2,5,SblCommon::  HostVolume:/dev/mapper/VolGroup01-LogVol01
13:06:26 07/16/09  [2300b90]d,2,8,FileSpec::FileSpec:  HostVolume:/dev/mapper/VolGroup01-LogVol01
13:06:26 07/16/09  [2300b90]d,2,11,HostVolume::HostVolume: /dev/mapper/VolGroup01-LogVol01  "linuxlvm"
13:06:26 07/16/09 [2300b90]d,2,5,SblCommon::initialize:    HostVolume: /dev/mapper/VolGroup01-LogVol01
13:06:26 07/16/09  [2300b90]v,9,0,ASSISTANT EXECUTION (at 141.799183): ls  /dev/VolGroup01/LogVol01
13:06:26 07/16/09 [2300b90]v,9,0,ASSISTANT EXECUTION  (at 141.805393): Output:
/dev/VolGroup01/LogVol01
13:06:26 07/16/09  [2300b90]d,2,5,SblCommon::initialize: memorize HostVolume:  /dev/mapper/VolGroup01-LogVol01
13:06:26 07/16/09  [2300b90]d,2,3,SblDataBank::save: object  HostVolume:/dev/mapper/VolGroup01-LogVol01 has been memorized
13:06:26  07/16/09 [2300b90]v,2,6,Operation::saveErrorReportList: saved 0  ErrorReports
13:06:26 07/16/09  [2300b90]v,2,6,ErrorReport::cleanErrorReportList: 0 Error Report  objects
13:06:26 07/16/09 [2300b90]v,2,6,Operation::restoreErrorReportList:  (0) restored 0, skipped: 0
13:06:26 07/16/09 [2300b90]d,2,5,SoftPointer::  HostVolume:/dev/mapper/VolGroup01-LogVol01
13:06:26 07/16/09  [2300b90]i,2,6,Operation::saveObjectName: saved  HostVolume:/dev/mapper/VolGroup01-LogVol01
13:06:26 07/16/09  [2300b90]d,2,5,SblCommon::initialize: HostVolume:  /dev/mapper/VolGroup01-LogVol01 (1) +
13:06:26 07/16/09  [2300b90]v,2,11,HostVolume::acquire: /dev/mapper/VolGroup01-LogVol01
13:06:26  07/16/09 [2300b90]d,2,5,SoftPointer::  DiskGroup:/dev/mapper/VolGroup01
13:06:26 07/16/09  [2300b90]d,2,4,Memorizable:: DiskGroup:/dev/mapper/VolGroup01
13:06:26  07/16/09 [2300b90]d,2,5,SblCommon:: DiskGroup:/dev/mapper/VolGroup01
13:06:26  07/16/09 [2300b90]d,2,8,FileSpec::FileSpec:  DiskGroup:/dev/mapper/VolGroup01
13:06:26 07/16/09  [2300b90]d,2,12,DiskGroup::DiskGroup: /dev/mapper/VolGroup01  "linuxlvm"
13:06:26 07/16/09 [2300b90]d,2,5,SblCommon::initialize:    DiskGroup: /dev/mapper/VolGroup01
13:06:26 07/16/09 [2300b90]v,9,0,ASSISTANT  EXECUTION (at 141.806899): vgdisplay VolGroup01
13:06:26 07/16/09  [2300b90]v,9,0,ASSISTANT EXECUTION (at 141.888522): Output:
File descriptor 3 (/var/log/sd-daemon-trace.log) leaked on  vgdisplay invocation. Parent PID 15104: /opt/NetApp/snapdrive/bin/snapd
File  descriptor 4 (/opt/NetApp/snapdrive/.snapdrived.pid) leaked on vgdisplay  invocation. Parent PID 15104: /opt/NetApp/snapdrive/bin/snapd
File descriptor  5 (/opt/NetApp/snapdrive/.snapdrived.pid) leaked on vgdisplay invocation. Parent  PID 15104: /opt/NetApp/snapdrive/bin/snapd
File descriptor 6  (/var/log/sd-trace.log) leaked on vgdisplay invocation. Parent PID 15104:  /opt/NetApp/snapdrive/bin/snapd
File descriptor 9 (/etc/hba.conf) leaked on  vgdisplay invocation. Parent PID 15104: /opt/NetApp/snapdrive/bin/snapd
File  descriptor 10 (/tmp/qlsdm.dat) leaked on vgdisplay invocation. Parent PID 15104:  /opt/NetApp/snapdrive/bin/snapd
File descriptor 11 (/var/log/sd-audit.log)  leaked on vgdisplay invocation. Parent PID 15104:  /opt/NetApp/snapdrive/bin/snapd
  --- Volume group ---
  VG  Name               VolGroup01
  System ID            
   Format                lvm2
  Metadata Areas        1
  Metadata Sequence  No  2
  VG Access             read/write
  VG Status              resizable
  MAX LV                0
  Cur LV                1
  Open  LV               1
  Max PV                0
  Cur PV                 1
  Act PV                1
  VG Size               10.00 GB
  PE  Size               4.00 MB
  Total PE              2559
  Alloc PE /  Size       2559 / 10.00 GB
  Free  PE / Size       0 / 0  
  VG  UUID               NbvPH2-Xsjd-3p80-cxk8-HyRv-eL3k-gZPgoE
  
13:06:26  07/16/09 [2300b90]d,2,5,SblCommon::initialize: memorize DiskGroup:  /dev/mapper/VolGroup01
13:06:26 07/16/09 [2300b90]d,2,3,SblDataBank::save:  object DiskGroup:/dev/mapper/VolGroup01 has been memorized
13:06:26 07/16/09  [2300b90]v,2,6,Operation::saveErrorReportList: saved 0 ErrorReports
13:06:26  07/16/09 [2300b90]v,2,6,ErrorReport::cleanErrorReportList: 0 Error Report  objects
13:06:26 07/16/09 [2300b90]v,2,6,Operation::restoreErrorReportList:  (0) restored 0, skipped: 0
13:06:26 07/16/09 [2300b90]d,2,5,SoftPointer::  DiskGroup:/dev/mapper/VolGroup01
13:06:26 07/16/09  [2300b90]i,2,6,Operation::saveObjectName: saved  DiskGroup:/dev/mapper/VolGroup01
13:06:26 07/16/09  [2300b90]d,2,5,SblCommon::initialize: DiskGroup: /dev/mapper/VolGroup01 (1)  +
13:06:26 07/16/09 [2300b90]v,2,12,DiskGroup::acquire:  dgCname=/dev/mapper/VolGroup01
13:06:26 07/16/09  [2300b90]v,2,12,DiskGroup::translate: /dev/mapper/VolGroup01
13:06:26  07/16/09 [2300b90]v,2,12,DiskGroup::clean: /dev/mapper/VolGroup01 removed md: 0,  hostvol: 0, pd: 0
13:06:26 07/16/09 [2300b90]v,9,0,ASSISTANT EXECUTION (at  141.889783): vgdisplay -D -v VolGroup01
13:06:26 07/16/09  [2300b90]v,9,0,ASSISTANT EXECUTION (at 141.953977): Output:
File descriptor 3 (/var/log/sd-daemon-trace.log) leaked on  vgdisplay invocation. Parent PID 15104: /opt/NetApp/snapdrive/bin/snapd
File  descriptor 4 (/opt/NetApp/snapdrive/.snapdrived.pid) leaked on vgdisplay  invocation. Parent PID 15104: /opt/NetApp/snapdrive/bin/snapd
File descriptor  5 (/opt/NetApp/snapdrive/.snapdrived.pid) leaked on vgdisplay invocation. Parent  PID 15104: /opt/NetApp/snapdrive/bin/snapd
File descriptor 6  (/var/log/sd-trace.log) leaked on vgdisplay invocation. Parent PID 15104:  /opt/NetApp/snapdrive/bin/snapd
File descriptor 9 (/etc/hba.conf) leaked on  vgdisplay invocation. Parent PID 15104: /opt/NetApp/snapdrive/bin/snapd
File  descriptor 10 (/tmp/qlsdm.dat) leaked on vgdisplay invocation. Parent PID 15104:  /opt/NetApp/snapdrive/bin/snapd
File descriptor 11 (/var/log/sd-audit.log)  leaked on vgdisplay invocation. Parent PID 15104:  /opt/NetApp/snapdrive/bin/snapd
    Using volume group(s) on command  line
    Finding volume group "VolGroup01"
  --- Volume group ---
  VG  Name               VolGroup01
  System ID            
   Format                lvm2
  Metadata Areas        1
  Metadata Sequence  No  2
  VG Access             read/write
  VG Status              resizable
  MAX LV                0
  Cur LV                1
  Open  LV               1
  Max PV                0
  Cur PV                 1
  Act PV                1
  VG Size               10.00 GB
  PE  Size               4.00 MB
  Total PE              2559
  Alloc PE /  Size       2559 / 10.00 GB
  Free  PE / Size       0 / 0  
  VG  UUID               NbvPH2-Xsjd-3p80-cxk8-HyRv-eL3k-gZPgoE
  
  ---  Logical volume ---
  LV Name                /dev/VolGroup01/LogVol01
  VG  Name                VolGroup01
  LV UUID                 ca3cLu-L3IN-k5nm-zPfm-XQNW-YJMm-lHhd2B
  LV Write Access         read/write
  LV Status              available
  # open                  1
  LV Size                10.00 GB
  Current LE             2559
   Segments               1
  Allocation             inherit
  Read ahead  sectors     auto
  - currently set to     256
  Block device            253:27
  
  --- Physical volumes ---
  PV Name                /dev/mpath/mpath1    
  PV UUID                f7AkJM-TKqD-u5sj-9FoD-7A2Z-mI0E-u0H5g0
  PV Status              allocatable
  Total PE / Free PE    2559 / 0
  
13:06:26 07/16/09  [343cb90]d,2,34,ScaleableExecutionPort::initScaleableExecutionPort: successful 
13:06:26 07/16/09  [343cb90]d,2,34,ScaleableExecutionPort::startScaleableExecution: successful 
13:06:27 07/16/09  [343cb90]d,2,34,ScaleableExecutionPort::initScaleableExecutionPort: successful 
13:06:27 07/16/09  [343cb90]d,2,34,ScaleableExecutionPort::startScaleableExecution: successful 
13:06:28 07/16/09  [343cb90]d,2,34,ScaleableExecutionPort::initScaleableExecutionPort: successful 
13:06:28 07/16/09  [343cb90]d,2,34,ScaleableExecutionPort::startScaleableExecution: successful 
13:06:29 07/16/09  [343cb90]d,2,34,ScaleableExecutionPort::initScaleableExecutionPort: successful 
13:06:29 07/16/09  [343cb90]d,2,34,ScaleableExecutionPort::startScaleableExecution: successful
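As an aside on those "leaked" lines: they appear to be emitted by LVM2 itself, not by snapdrive. LVM warns whenever it inherits unexpected open file descriptors from its parent process (here, the snapd daemon's log and PID files). If I understand correctly, they are warnings rather than errors, and LVM2 honors the LVM_SUPPRESS_FD_WARNINGS environment variable to silence them (check `man lvm` on your release). A quick way to confirm they come from LVM and not the hang itself:

```shell
# The FD-leak messages come from LVM2 inheriting snapd's open log/PID
# file descriptors. Running vgdisplay with LVM_SUPPRESS_FD_WARNINGS set
# should drop the warnings while producing the same VG report:
LVM_SUPPRESS_FD_WARNINGS=1 vgdisplay VolGroup01
```

Suppressing the warnings does not address the hang; it only cleans up the output that LVM-based tools (and anything parsing it) see.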
The only related BURT I found on burtweb is BURT 357965, which seems to be fixed in SDU 4.1.1.
Is there a way to confirm that I am facing this specific BURT? And how can I download SDU 4.1.1 to verify that my setup works with the latest version?
Thanks in advance for your help.
Pierre
2 REPLIES

pierrek

There is a workaround for this problem when using SDU 4.1:

-> In /etc/multipath.conf, remove all multipaths entries (the ones used to set aliases on device names)

-> Move all those aliases to the file /var/lib/multipath/bindings

-> Leave user_friendly_names set to yes

-> Reboot
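A sketch of the bindings-file step, assuming the default RHEL 5 layout: the bindings file holds one "alias WWID" pair per line, so each multipaths{} entry becomes a single line. The WWID below is mpath1's from this thread; the example writes to a temp file for safety, with the real target being /var/lib/multipath/bindings.

```shell
# Stand-in for /var/lib/multipath/bindings (use the real path on the host):
BINDINGS=$(mktemp)

# One "alias WWID" pair per line, one line for every LUN previously
# aliased via a multipath{} block in /etc/multipath.conf:
echo "mpath1 360a9800056724776566f515a5758397a" >> "$BINDINGS"

cat "$BINDINGS"
# Then remove the multipaths{} entries from /etc/multipath.conf,
# keep user_friendly_names set to yes, and reboot.
```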

Otherwise, SDU 4.1.1 officially supports dm-multipath.

Pierre

nagendrk

Thanks for the workaround, Pierre. But is this configuration supported?
