SDU - SUSE Linux in VM Error

mskubski

Hello,

we have the following problem with SnapDrive (SDU) on a SUSE Linux VM:

Snapdrive Daemon start time : Tue Oct 25 15:05:01 2011

Total Commands Executed     : 0

Job Status:

        No command in execution

15:05:05 10/25/11 v,10,0,Job[snapdrived]::status_all: status Snapdrive Daemon Version    : 4.2D7  (Change 1409852 Built Wed Jul 20 00:10:46 PDT 2011)

Snapdrive Daemon start time : Tue Oct 25 15:05:01 2011

Total Commands Executed     : 0

Job Status:

        No command in execution

15:05:05 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUDaemonStatus: 0 Snapdrive Daemon Version    : 4.2D7  (Change 1409852 Built Wed Jul 20 00:10:46 PDT 2011)

Snapdrive Daemon start time : Tue Oct 25 15:05:01 2011

Total Commands Executed     : 0

Job Status:

        No command in execution

15:05:05 10/25/11 v,10,1,snapdrived:process_request(): exit

15:05:09 10/25/11 v,10,1,snapdrived:main(): make soap copy

15:05:09 10/25/11 v,10,1,snapdrived:main(): init thread

15:05:09 10/25/11 v,10,1,snapdrived:main(): create thread

15:05:09 10/25/11 v,10,1,snapdrived:main(): pthread_detach done with status: 0

15:05:09 10/25/11 v,10,1,snapdrived:process_request(): started

15:05:09 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUCommandExecute: command to execute = snapdrive storage list -all

15:05:09 10/25/11 v,10,1,snapdrived :authenticate : start

15:05:09 10/25/11 F,10,1,snapdrived :authenticate: authentication done for root

15:05:09 10/25/11 v,10,1,snapdrived :authenticate : exit ret = 0

15:05:09 10/25/11 v,10,1,snapdrived:build_legacy_command_info(): started

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): noprompt 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): version 27

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): force 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): full 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): quiet 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): verbose 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): dfm 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): viadmin 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): mgmtpath 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): migratepath 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): shrink 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): addlun 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): autoexpand 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): autorename 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): all 1

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): devices 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): cli_debug 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): reserve 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): noreserve 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): persist 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): nopersist 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): nolvm 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): nofilerfence 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): readonly 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): unrelated 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): capabilities 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): onenode 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): status 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): split 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): dgsize 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): delta 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): lunsize 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): count 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): hba 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): targetid 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): pw_uid 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): pw_gid 0

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): userid root

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): cmd_line snapdrive storage list -all

15:05:09 10/25/11 v,10,1,build_legacy_command_info(): cwd /opt/NetApp/snapdrive

15:05:09 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUCommandExecute: JOB_INPUT_REQUIRE_SOAP | JOB_INPUT_CLI_REQUEST

15:05:09 10/25/11 v,10,0,Job::Job: construct job

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::Job: init thread condition var

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::Job: init lock job mutex begin

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::Job: lock job queue begin

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::Job: unlock job queue success

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::Job: created, total cmds: 1

15:05:09 10/25/11 v,10,0,Job::execute: entered

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: start

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: queue lock begin

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: preparing for fork

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: completed post fork processing

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: in execution

Job[H6Nx2CCTjz]::execute: Trying to bind

Job[H6Nx2CCTjz]::execute: Bind successful port:16000

Job[H6Nx2CCTjz]::execute: job Queue unlock success

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: child pid: 6191

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: JOB_INPUT_ASYNC_SERVER_REQUEST

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: completed post fork processing

15:05:09 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUCommandExecute: success H6Nx2CCTjz No Error 0

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: in execution

Job[H6Nx2CCTjz]::execute: Trying to bind

Job[H6Nx2CCTjz]::execute: Bind successful port:16000

Job[H6Nx2CCTjz]::execute: job Queue unlock success

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: start of child 

15:05:09 10/25/11 v,10,1,snapdrived:process_request(): exit

15:05:09 10/25/11 v,10,1,snapdrived:main(): make soap copy

15:05:09 10/25/11 v,10,1,snapdrived:main(): init thread

15:05:09 10/25/11 v,10,1,snapdrived:main(): create thread

15:05:09 10/25/11 v,10,1,snapdrived:main(): pthread_detach done with status: 0

15:05:09 10/25/11 v,10,1,snapdrived:process_request(): started

15:05:09 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUCommandStatus: stared

15:05:09 10/25/11 v,10,1,snapdrived :authenticate : start

15:05:09 10/25/11 F,10,1,snapdrived :authenticate: authentication done for root

15:05:09 10/25/11 v,10,1,snapdrived :authenticate : exit ret = 0

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: find started

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: lock job queue begin

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: unlock job queue success

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: found job

15:05:09 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUCommandStatus: rcved status request for job H6Nx2CCTjz

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::status: Entered

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: start of child accept init thread

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: start of child accept create thread

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: start cmdType 8

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::accept_soap_request started

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: find started

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: lock job queue begin

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::accept_soap_request make safe soap copy

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: unlock job queue success

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: found job

15:05:09 10/25/11 F,10,0,Job::executeScaleableThread started

15:05:09 10/25/11 F,10,0,Job::executeScaleableThread created thread id :-170722448

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::child_process_soap_request started

15:05:09 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUCommandStatus: stared

15:05:09 10/25/11 v,10,1,snapdrived :authenticate : start

15:05:09 10/25/11 F,10,1,snapdrived :authenticate: authentication done for root

15:05:09 10/25/11 v,10,1,snapdrived :authenticate : exit ret = 0

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: find started

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: lock job queue begin

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: unlock job queue success

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: found job

15:05:09 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUCommandStatus: rcved status request for job H6Nx2CCTjz

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::status: Entered

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::status: soap_call__SDUCLI__SDUCommandStatus rcvd by child job H6Nx2CCTjz

Job[H6Nx2CCTjz]::status: statusCode: 1 errorCode: 10 message:

error message:

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::status: Exit job

15:05:09 10/25/11 v,10,0,snapdrived:_SDUCLI__SDUCommandStatus: exit errorcode:0 ,statuscode: 1 ,output: 

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz::status: got user context

Job[H6Nx2CCTjz]::status: statusCode: 1 errorCode: 10 message:

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::child_process_soap_request exit

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::status: Exit job

15:05:09 10/25/11 v,10,0,snapdrived:_SDUCLI__SDUCommandStatus: exit errorcode:0 ,statuscode: 1 ,output: 

15:05:09 10/25/11 v,10,1,snapdrived:process_request(): exit

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::error: Exit message:

0001-185 Command error: storage show failed: no NETAPP devices to show or add the host to the trusted hosts (options trusted.hosts) and enable SSL on the storage system or retry after changing snapdrive.conf to use http for storage system communication and restarting snapdrive daemon.

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: failed

15:05:09 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: pthread join

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::accept_soap_request soap TCP error

15:05:10 10/25/11 v,10,1,snapdrived:main(): make soap copy

15:05:10 10/25/11 v,10,1,snapdrived:main(): init thread

15:05:10 10/25/11 v,10,1,snapdrived:main(): create thread

15:05:10 10/25/11 v,10,1,snapdrived:main(): pthread_detach done with status: 0

15:05:10 10/25/11 v,10,1,snapdrived:process_request(): started

15:05:10 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUCommandStatus: stared

15:05:10 10/25/11 v,10,1,snapdrived :authenticate : start

15:05:10 10/25/11 F,10,1,snapdrived :authenticate: authentication done for root

15:05:10 10/25/11 v,10,1,snapdrived :authenticate : exit ret = 0

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: find started

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: lock job queue begin

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: unlock job queue success

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: found job

15:05:10 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUCommandStatus: rcved status request for job H6Nx2CCTjz

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::status: Entered

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::accept_soap_request make safe soap copy

15:05:10 10/25/11 F,10,0,Job::executeScaleableThread started

15:05:10 10/25/11 F,10,0,Job::executeScaleableThread created thread id :-170722448

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::child_process_soap_request started

15:05:10 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUCommandStatus: stared

15:05:10 10/25/11 v,10,1,snapdrived :authenticate : start

15:05:10 10/25/11 F,10,1,snapdrived :authenticate: authentication done for root

15:05:10 10/25/11 v,10,1,snapdrived :authenticate : exit ret = 0

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: find started

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: lock job queue begin

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: unlock job queue success

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::find: found job

15:05:10 10/25/11 v,10,0,snapdrived:__SDUCLI__SDUCommandStatus: rcved status request for job H6Nx2CCTjz

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::status: Entered

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::status: soap_call__SDUCLI__SDUCommandStatus rcvd by child job H6Nx2CCTjz

Job[H6Nx2CCTjz]::status: statusCode: 3 errorCode: 3206006 message:

error message: 0001-185 Command error: storage show failed: no NETAPP devices to show or add the host to the trusted hosts (options trusted.hosts) and enable SSL on the storage system or retry after changing snapdrive.conf to use http for storage system communication and restarting snapdrive daemon.

Job[H6Nx2CCTjz]::status: soap_call__SDUCLI__SDUCommandStatus child job completeH6Nx2CCTjz

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::status: Exit job

15:05:10 10/25/11 v,10,0,snapdrived:_SDUCLI__SDUCommandStatus: exit errorcode:206006 ,statuscode: 3 ,output: 

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::child_process_soap_request exit

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::accept_soap_request make safe soap copy

15:05:10 10/25/11 F,10,0,Job::executeScaleableThread started

15:05:10 10/25/11 F,10,0,Job::executeScaleableThread created thread id :-170722448

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::child_process_soap_request started

15:05:10 10/25/11 w,10,0,snapdrived:__SDUCLI__SDUDaemonStop[Child]: rcved  daemon stop request (force)

15:05:10 10/25/11 v,10,1,snapdrived :authenticate : start

15:05:10 10/25/11 F,10,1,snapdrived :authenticate: authentication done for root

15:05:10 10/25/11 v,10,1,snapdrived :authenticate : exit ret = 0

15:05:10 10/25/11 w,10,0,snapdrived:__SDUCLI__SDUDaemonStop: daemonState = 3

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz::status: got user context

Job[H6Nx2CCTjz]::status: statusCode: 3 errorCode: 3206006 message:

Job[H6Nx2CCTjz]::status: Job completed calling stop

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::status: Exit job

15:05:10 10/25/11 v,10,0,snapdrived:_SDUCLI__SDUCommandStatus: exit errorcode:206006 ,statuscode: 3 ,output: 

15:05:10 10/25/11 v,10,0,Job[H6Nx2CCTjz]::child_process_soap_request exit

15:05:10 10/25/11 v,10,1,snapdrived:process_request(): exit

15:05:11 10/25/11 v,10,0,Job[H6Nx2CCTjz]::accept_soap_request soap TCP error

15:05:11 10/25/11 v,10,0,Job[H6Nx2CCTjz]::accept_soap_request daemon stopping

15:05:11 10/25/11 v,10,0,Job[H6Nx2CCTjz]:accept_soap_request exit

15:05:11 10/25/11 v,10,0,Job[H6Nx2CCTjz]::execute: after pthread join

15:05:11 10/25/11 F,10,0,snapdrived:__SDUCLI__SDUCommandExecute: child process exit

15:05:13 10/25/11 v,10,0,Job[H6Nx2CCTjz]::~Job: destroying job

15:05:13 10/25/11 v,10,1,snapdrived:free_command_info(): started

15:05:13 10/25/11 v,10,1,snapdrived:free_command_info(): cmd elements freed

15:05:13 10/25/11 v,10,1,snapdrived:free_command_info(): success ret

15:05:13 10/25/11 v,10,0,Job[H6Nx2CCTjz]::~Job exit
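
If we read the error message correctly, it suggests either adding this host to the trusted hosts on the storage system (options trusted.hosts) and enabling SSL there, or switching SnapDrive to http for the storage system communication and restarting the daemon. A rough sketch of what we would try for the http route, assuming the default config file under /opt/NetApp/snapdrive (the cwd shown in the trace) and assuming use-https-to-filer is the option that controls this:

# switch storage system communication from https to http
# (assumption: use-https-to-filer is the relevant snapdrive.conf option)
vi /opt/NetApp/snapdrive/snapdrive.conf
#   set: use-https-to-filer=off

# restart the SnapDrive daemon so the change takes effect
snapdrived stop
snapdrived start

# retry the failing command
snapdrive storage list -all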

Any ideas?

Thank you

arunacha

SLES RDM support is there in the version currently used by this client.

From the error message, it seems that no storage provisioning has happened through SDU from this VM.

Please try to create a LUN or file system with SnapDrive and then run this command again.
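
For example, something along these lines (the storage system name, volume path, LUN name, and size below are only placeholders; adjust them to your environment):

# provision a small test LUN through SDU (placeholder storage system and volume)
snapdrive storage create -lun filer1:/vol/vol1/sdu_test -lunsize 2g

# afterwards the listing should show the new storage
snapdrive storage list -all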

Or, if you have any issues with the storage provisioning, just post the sd-trace.log here.

LINUX43110

First, make sure that sanlun can see all of your HBAs.

sanlun fcp show adapter -v

If that reports okay, then make sure that you don't have iscsid running in the background with the various iSCSI modules loaded.

service iscsid stop
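
To double-check, something like the following should work on SLES (assuming the usual init-script setup; on some SLES releases the init script is called open-iscsi rather than iscsid):

# check whether any iSCSI initiator modules are currently loaded
lsmod | grep -i iscsi

# keep the initiator from starting again at boot
chkconfig iscsid off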
