Network and Storage Protocols

netapp ndmp backup via networker 12-14MB/s

dpeverley

Hi there, we have a Quantum i80 with two tape drives attached via fibre to our NetApp cluster (FAS3140 on 7.3.2).

We initially could not get the NetApp to see the tape drives and media changer when we directly attached the fibres to the tape drives, so we opted to use a couple of ports on our fibre switches, and it has been correctly detected since then.

We are backing up over NDMP using NetWorker 7.6.2, but I haven't seen speeds above 12-14MB/s when backing up CIFS shares containing user data (home folders, profiles etc.).

NDMP settings are as follows:

ndmpd version

ndmpd highest version set to: 4

options ndmpd

ndmpd.access                 all

ndmpd.authtype               challenge

ndmpd.connectlog.enabled     on

ndmpd.enable                 on

ndmpd.ignore_ctime.enabled   off

ndmpd.offset_map.enable      on

ndmpd.password_length        8

ndmpd.preferred_interface    disable    (value might be overwritten in takeover)

ndmpd.tcpnodelay.enable      on

The NetWorker settings are as follows:

nsrndmp_save -T dump

HIST=Y

UPDATE=Y

DIRECT=Y

EXTRACT_ACL=Y

NDMP_AUTO_BLOCK_SIZE=Y

What I would like to know is this: is this as good as the backup speed gets? Is there anything I can do to speed it up?

I'm not really an expert with NetWorker, but I've tried using 'nsrndmp_save -M -T dump'. From what I can tell this routes the backup through the backup server, so it probably expects the tape drives to be attached there; it didn't work in my testing.

Any help anyone can give would be much appreciated.

Thanks in advance

1 ACCEPTED SOLUTION

dpeverley

Hi there, thanks for everyone's help on this one.

I managed to get this working; a couple of changes were required to address the two separate issues.

Can't directly attach the tape drives to the NetApp - this was resolved by setting the topology for the tape drives on the i80 (web console > Setup > Drive Settings) to 'Loop (L)'. After rebooting the i80 and directly attaching it to the NAS, everything was picked up correctly.

Slow performance with the backups - I set NetWorker to use a 256KB block size for each drive and also disabled the HP network teaming on the server. It was set to aggregate 2x 1Gb links into one; when I set this to be a redundant failover pair instead, both the slow backup speeds and the failed backups were resolved.

Backup speed has been good with no failures since.


11 REPLIES

aborzenkov

That speed is too low, but there could be many contributing factors. For a start, try to estimate the theoretically possible backup speed by dumping to null (use the same volume/directory you normally back up): https://kb.netapp.com/support/index?page=content&id=1011894. Watch how long each phase of the dump takes.

dpeverley

Here is the backup log showing a dump to null and a dump to tape via NetWorker.

df -h BACKUP_TESTING

Filesystem               total       used      avail capacity  Mounted on

/vol/BACKUP_TESTING/        40GB       11GB       28GB      29%  /vol/BACKUP_TESTING/

/vol/BACKUP_TESTING/.snapshot       10GB       19GB        0KB     191%  /vol/BACKUP_TESTING/.snapshot

dump to null takes 0:00:57

dmp Wed Aug 17 10:52:26 BST [uninitialized](0) Log_msg (creating "/vol/BACKUP_TESTING/../snapshot_for_backup.7278" snapshot.)

dmp Wed Aug 17 10:52:28 BST /vol/BACKUP_TESTING/(0) Log_msg (Using Full Volume Dump )

dmp Wed Aug 17 10:52:34 BST /vol/BACKUP_TESTING/(0) Start (Level 0)

dmp Wed Aug 17 10:52:34 BST /vol/BACKUP_TESTING/(0) Options (b=63)

dmp Wed Aug 17 10:52:34 BST /vol/BACKUP_TESTING/(0) Snapshot (snapshot_for_backup.7278, Wed Aug 17 10:52:26 BST)

dmp Wed Aug 17 10:52:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Dumping tape file 1 on null)

dmp Wed Aug 17 10:52:34 BST /vol/BACKUP_TESTING/(0) Tape_open (null)

dmp Wed Aug 17 10:52:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Date of this level 0 dump: Wed Aug 17 10:52:26 2011.)

dmp Wed Aug 17 10:52:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Date of last level 0 dump: the epoch.)

dmp Wed Aug 17 10:52:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Dumping /vol/BACKUP_TESTING to null)

dmp Wed Aug 17 10:52:34 BST /vol/BACKUP_TESTING/(0) Log_msg (mapping (Pass I)[regular files] )

dmp Wed Aug 17 10:52:34 BST /vol/BACKUP_TESTING/(0) Phase_change (I)

dmp Wed Aug 17 10:52:39 BST /vol/BACKUP_TESTING/(0) Log_msg (mapping (Pass II)[directories])

dmp Wed Aug 17 10:52:39 BST /vol/BACKUP_TESTING/(0) Phase_change (II)

dmp Wed Aug 17 10:52:42 BST /vol/BACKUP_TESTING/(0) Log_msg (estimated 3370608 KB.)

dmp Wed Aug 17 10:52:42 BST /vol/BACKUP_TESTING/(0) Log_msg (dumping (Pass III) [directories])

dmp Wed Aug 17 10:52:42 BST /vol/BACKUP_TESTING/(0) Phase_change (III)

dmp Wed Aug 17 10:52:44 BST /vol/BACKUP_TESTING/(0) Log_msg (dumping (Pass IV) [regular files])

dmp Wed Aug 17 10:52:44 BST /vol/BACKUP_TESTING/(0) Phase_change (IV)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (dumping (Pass V) [ACLs])

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Phase_change (V)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Tape_close (null)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (3373119 KB)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) End (3294 MB)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (DUMP IS DONE)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (reg inodes: 649 other inodes: 0 dirs: 111 nt dirs: 182 nt inodes: 184 acls: 47)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 1 time: 5276)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: directories dumped: 294)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: wafl directory blocks read: 297)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: average wafl directory blocks per inode: 1)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: average tape blocks per inode: 2)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3 throughput (MB sec): read 1 write 0)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Percent of phase3 time spent for: reading inos 0% dumping ino 24%)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Percent of phase3 dump time spent for: convert-wafl-dirs 22% lev0-ra 0%)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3 averages (usec): wafl load buf time 760 level 0 ra time 10)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: inodes dumped: 833)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: wafl data blocks read: 840844)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: average wafl data blocks per inode: 1009)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: average tape data blocks per inode: 4035)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4 throughput (MB sec): read 94 write 94)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Percent of phase4 time spent for: reading inos 0% dumping inos 98%)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Tape write times (msec): average: 0 max: 9)

dmp Wed Aug 17 10:53:21 BST /vol/BACKUP_TESTING/(0) Log_msg (Tape changes: 1)

dmp Wed Aug 17 10:53:23 BST /vol/BACKUP_TESTING/(0) Log_msg (Deleting "/vol/BACKUP_TESTING/../snapshot_for_backup.7278" snapshot.)

Dump to tape via networker takes 0:05:10

dmp Wed Aug 17 10:59:28 BST [uninitialized](0) Log_msg (creating "/vol/BACKUP_TESTING/../snapshot_for_backup.7279" snapshot.)

dmp Wed Aug 17 10:59:31 BST /vol/BACKUP_TESTING/(0) Log_msg (Using Full Volume Dump )

dmp Wed Aug 17 10:59:35 BST /vol/BACKUP_TESTING/(0) Start (Level 0, NDMP)

dmp Wed Aug 17 10:59:35 BST /vol/BACKUP_TESTING/(0) Options (b=128, u)

dmp Wed Aug 17 10:59:35 BST /vol/BACKUP_TESTING/(0) Snapshot (snapshot_for_backup.7279, Wed Aug 17 10:59:28 BST)

dmp Wed Aug 17 10:59:35 BST /vol/BACKUP_TESTING/(0) Tape_open (ndmp)

dmp Wed Aug 17 10:59:36 BST /vol/BACKUP_TESTING/(0) Log_msg (Date of this level 0 dump: Wed Aug 17 10:59:28 2011.)

dmp Wed Aug 17 10:59:36 BST /vol/BACKUP_TESTING/(0) Log_msg (Date of last level 0 dump: the epoch.)

dmp Wed Aug 17 10:59:36 BST /vol/BACKUP_TESTING/(0) Log_msg (Dumping /vol/BACKUP_TESTING to NDMP connection)

dmp Wed Aug 17 10:59:36 BST /vol/BACKUP_TESTING/(0) Log_msg (mapping (Pass I)[regular files] )

dmp Wed Aug 17 10:59:36 BST /vol/BACKUP_TESTING/(0) Phase_change (I)

dmp Wed Aug 17 10:59:40 BST /vol/BACKUP_TESTING/(0) Log_msg (mapping (Pass II)[directories])

dmp Wed Aug 17 10:59:40 BST /vol/BACKUP_TESTING/(0) Phase_change (II)

dmp Wed Aug 17 10:59:42 BST /vol/BACKUP_TESTING/(0) Log_msg (estimated 3370608 KB.)

dmp Wed Aug 17 10:59:42 BST /vol/BACKUP_TESTING/(0) Log_msg (dumping (Pass III) [directories])

dmp Wed Aug 17 10:59:42 BST /vol/BACKUP_TESTING/(0) Phase_change (III)

dmp Wed Aug 17 10:59:49 BST /vol/BACKUP_TESTING/(0) Log_msg (dumping (Pass IV) [regular files])

dmp Wed Aug 17 10:59:49 BST /vol/BACKUP_TESTING/(0) Phase_change (IV)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (dumping (Pass V) [ACLs])

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Phase_change (V)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Tape_close (ndmp)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (3437246 KB)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) End (3356 MB)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (DUMP IS DONE)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (reg inodes: 649 other inodes: 0 dirs: 111 nt dirs: 182 nt inodes: 184 acls: 47)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 1 time: 3958)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: directories dumped: 294)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: wafl directory blocks read: 297)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: average wafl directory blocks per inode: 1)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: average tape blocks per inode: 2)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3 throughput (MB sec): read 1 write 0)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Percent of phase3 time spent for: reading inos 0% dumping ino 28%)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Percent of phase3 dump time spent for: convert-wafl-dirs 27% lev0-ra 0%)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3 averages (usec): wafl load buf time 808 level 0 ra time 10)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: inodes dumped: 833)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: wafl data blocks read: 840844)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: average wafl data blocks per inode: 1009)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: average tape data blocks per inode: 4035)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4 throughput (MB sec): read 12 write 12)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Percent of phase4 time spent for: reading inos 0% dumping inos 99%)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (# buffers of filehistory sent dir: 0 node: 0 mixed: 1)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (# times filehistory send was blocked dir: 0 node: 0)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (# filehistory flush operations dir: 0 node: 1)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (# filehistory entries dir: 984 node: 761 )

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Dir to FH entry time stats (msec) numEntries: 984 min: 0 max: 0 avg: 0 tot: 0)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Node to FH Entry time stats (msec) numEntries: 761 min: 0 max: 1 avg: <1 tot: 1)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Dir FH to NDMP Entry Time Stats (msec) numEntries: 1 min: 6 max: 6 avg: 6 tot: 6)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Node FH to NDMP Entry Time Stats (msec) numEntries: 1 min: 43 max: 43 avg: 43 tot: 43)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Tape write times (msec): average: 0 max: 46)

dmp Wed Aug 17 11:04:34 BST /vol/BACKUP_TESTING/(0) Log_msg (Tape changes: 1)

dmp Wed Aug 17 11:04:38 BST /vol/BACKUP_TESTING/(0) Log_msg (Deleting "/vol/BACKUP_TESTING/../snapshot_for_backup.7279" snapshot.)
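A quick sanity check on the two runs above, using the sizes and Pass IV timestamps straight from the logs, shows the filer can read this volume at roughly 90MB/s to null, so the ~12MB/s over NDMP points at the tape/network path rather than the disks:

```python
# Average Phase 4 (regular files) throughput for the two dump runs,
# computed from the log timestamps and reported dump sizes above.

def mb_per_sec(total_mb, seconds):
    """Average data rate over the file-dumping phase."""
    return total_mb / seconds

# Dump to null: Pass IV 10:52:44 -> Pass V 10:53:21 = 37 s for ~3294 MB
null_rate = mb_per_sec(3294, 37)

# Dump over NDMP: Pass IV 10:59:49 -> Pass V 11:04:34 = 285 s for ~3356 MB
ndmp_rate = mb_per_sec(3356, 285)

print(round(null_rate), round(ndmp_rate))  # 89 12, close to the logged
                                           # "read 94" and "read 12" figures
```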

aborzenkov

Have you checked whether your tape drive is supported by your Data ONTAP version?

dpeverley

It is supported:

http://now.netapp.com/NOW/download/tools/tape_config/#

Hewlett-Packard Ultrium 5 (HP LTO 5 FC, FH, HH)      7.3.2 and later

sysconfig -t

    Tape drive (FSW-DEV01:10.63)  HP      Ultrium 5-SCSI

    rst0l  -  rewind device,        format is: LTO-3(ro)/4 4/800GB

    nrst0l -  no rewind device,     format is: LTO-3(ro)/4 4/800GB

    urst0l -  unload/reload device, format is: LTO-3(ro)/4 4/800GB

    rst0m  -  rewind device,        format is: LTO-3(ro)/4 8/1600GB cmp

    nrst0m -  no rewind device,     format is: LTO-3(ro)/4 8/1600GB cmp

    urst0m -  unload/reload device, format is: LTO-3(ro)/4 8/1600GB cmp

    rst0h  -  rewind device,        format is: LTO-5 1600GB

    nrst0h -  no rewind device,     format is: LTO-5 1600GB

    urst0h -  unload/reload device, format is: LTO-5 1600GB

    rst0a  -  rewind device,        format is: LTO-5 3200GB cmp

    nrst0a -  no rewind device,     format is: LTO-5 3200GB cmp

    urst0a -  unload/reload device, format is: LTO-5 3200GB cmp

dpeverley

OK, so I've done some more testing using the dump command.

dump 0uf nrst0a /vol/BACKUP_TESTING

dmp Wed Aug 17 11:36:40 BST /vol/BACKUP_TESTING/(0) Options (b=63, u)

dmp Wed Aug 17 11:45:55 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4 throughput (MB sec): read 6 write 6)

dump 0ufb nrst0a 64 /vol/BACKUP_TESTING

dmp Wed Aug 17 12:06:26 BST /vol/BACKUP_TESTING/(0) Options (b=64, u)

dmp Wed Aug 17 12:16:08 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4 throughput (MB sec): read 6 write 6)

dump 0ufb nrst0a 128 /vol/BACKUP_TESTING

dmp Wed Aug 17 12:27:54 BST /vol/BACKUP_TESTING/(0) Options (b=128, u)

dmp Wed Aug 17 12:32:48 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4 throughput (MB sec): read 12 write 12)

dump 0ufb nrst0a 256 /vol/BACKUP_TESTING

dmp Wed Aug 17 12:44:41 BST /vol/BACKUP_TESTING/(0) Options (b=256, u)

dmp Wed Aug 17 12:47:11 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4 throughput (MB sec): read 23 write 23)

I'm getting better throughput using larger block sizes, so is it best to use a 256K block size?

Is 23MB/s the kind of speed other people are getting with LTO5 tape drives?
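Those four runs line up neatly: throughput roughly doubles each time the blocking factor doubles, which points at per-record overhead on the tape path, rather than disk reads, as the limit. A small sketch of the measured numbers (treating b as the block size in KB is my assumption; check the ONTAP dump documentation):

```python
# Measured Phase 4 throughput (MB/s) from the dump tests above,
# keyed by blocking factor b (assumed here to be the block size in KB).
throughput = {63: 6, 64: 6, 128: 12, 256: 23}

# Doubling the block size roughly doubles throughput, i.e. bigger
# blocks amortise a fixed per-record cost on the tape path.
assert throughput[128] == 2 * throughput[64]
assert throughput[256] >= 1.9 * throughput[128]

print(max(throughput.values()))  # 23 -> best observed, at b=256
```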

aborzenkov

Have you also tested without compression?

As for block size: usually, the larger the block size, the better the tape performance.

aborzenkov

Some things that come to mind:

- Check for errors on the SAN ports the filer/tapes are connected to.

- Try increasing the blocking factor. The maximum supported size is 256.

- Try using a non-compressing device node. LTO5 in compressed mode has quite high minimum data-speed requirements (I believe around 95MB/s); this is barely what your system is capable of. The drive could fall back to start/stop operation, leading to poor performance.

You could also try dumping to tape directly as a first step (using different blocking factors and/or device nodes), just to eliminate any possible NetWorker quirk.
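The start/stop (shoe-shining) point can be made concrete with a little arithmetic. A minimal sketch, assuming an LTO-5 native minimum streaming speed of around 47MB/s (my figure, not from the thread; check the drive's spec sheet):

```python
# Streaming-speed sanity check (illustrative; the ~47 MB/s native
# minimum for LTO-5 is an assumption, not a figure from the thread).
LTO5_MIN_NATIVE = 47       # MB/s, approx. slowest native streaming speed
LTO5_MIN_COMPRESSED = 95   # MB/s, figure quoted above for compressed mode

feed_rate = 23  # MB/s, best rate observed in the dump tests above

def shoe_shines(feed_mb_s, drive_min_mb_s):
    # If the host can't feed data at the drive's minimum streaming
    # speed, the drive repeatedly stops, repositions and restarts
    # ("shoe-shining"), collapsing effective throughput.
    return feed_mb_s < drive_min_mb_s

print(shoe_shines(feed_rate, LTO5_MIN_NATIVE),
      shoe_shines(feed_rate, LTO5_MIN_COMPRESSED))  # True True
```

So even the best observed feed rate sits below both thresholds, which is consistent with the drive stalling rather than streaming.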

dpeverley

OK, I have done as requested and run a tape dump on the NetApp with a 256KB block size.

nrst0h -  no rewind device,     format is: LTO-5 1600GB

Command line: dump 0ufb nrst0h 256 /vol/BACKUP_TESTING

Here's the backup log

dmp Fri Sep  2 09:16:16 BST [uninitialized](0) Log_msg (creating "/vol/BACKUP_TESTING/../snapshot_for_backup.9208" snapshot.)

dmp Fri Sep  2 09:16:18 BST /vol/BACKUP_TESTING/(0) Log_msg (Using Full Volume Dump )

dmp Fri Sep  2 09:16:22 BST /vol/BACKUP_TESTING/(0) Start (Level 0)

dmp Fri Sep  2 09:16:22 BST /vol/BACKUP_TESTING/(0) Options (b=256, u)

dmp Fri Sep  2 09:16:22 BST /vol/BACKUP_TESTING/(0) Snapshot (snapshot_for_backup.9208, Fri Sep  2 09:16:16 BST)

dmp Fri Sep  2 09:16:22 BST /vol/BACKUP_TESTING/(0) Log_msg (Dumping tape file 1 on nrst0h)

dmp Fri Sep  2 09:16:22 BST /vol/BACKUP_TESTING/(0) Tape_open (nrst0h)

dmp Fri Sep  2 09:16:22 BST /vol/BACKUP_TESTING/(0) Log_msg (Date of this level 0 dump: Fri Sep  2 09:16:16 2011.)

dmp Fri Sep  2 09:16:22 BST /vol/BACKUP_TESTING/(0) Log_msg (Date of last level 0 dump: the epoch.)

dmp Fri Sep  2 09:16:22 BST /vol/BACKUP_TESTING/(0) Log_msg (Dumping /vol/BACKUP_TESTING to nrst0h)

dmp Fri Sep  2 09:16:22 BST /vol/BACKUP_TESTING/(0) Log_msg (mapping (Pass I)[regular files] )

dmp Fri Sep  2 09:16:22 BST /vol/BACKUP_TESTING/(0) Phase_change (I)

dmp Fri Sep  2 09:16:27 BST /vol/BACKUP_TESTING/(0) Log_msg (mapping (Pass II)[directories])

dmp Fri Sep  2 09:16:27 BST /vol/BACKUP_TESTING/(0) Phase_change (II)

dmp Fri Sep  2 09:16:30 BST /vol/BACKUP_TESTING/(0) Log_msg (estimated 3372238 KB.)

dmp Fri Sep  2 09:16:30 BST /vol/BACKUP_TESTING/(0) Log_msg (dumping (Pass III) [directories])

dmp Fri Sep  2 09:16:30 BST /vol/BACKUP_TESTING/(0) Phase_change (III)

dmp Fri Sep  2 09:16:31 BST /vol/BACKUP_TESTING/(0) Log_msg (dumping (Pass IV) [regular files])

dmp Fri Sep  2 09:16:31 BST /vol/BACKUP_TESTING/(0) Phase_change (IV)

dmp Fri Sep  2 09:18:54 BST /vol/BACKUP_TESTING/(0) Log_msg (dumping (Pass V) [ACLs])

dmp Fri Sep  2 09:18:54 BST /vol/BACKUP_TESTING/(0) Phase_change (V)

dmp Fri Sep  2 09:18:54 BST /vol/BACKUP_TESTING/(0) Tape_close (nrst0h)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (3374751 KB)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (This dump has written to 1 tapefile(s).)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) End (3295 MB)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (DUMP IS DONE)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (reg inodes: 661 other inodes: 0 dirs: 111 nt dirs: 183 nt inodes: 185 acls: 52)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 1 time: 5151)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: directories dumped: 295)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: wafl directory blocks read: 298)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: average wafl directory blocks per inode: 1)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3: average tape blocks per inode: 2)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3 throughput (MB sec): read 1 write 0)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Percent of phase3 time spent for: reading inos 0% dumping ino 36%)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Percent of phase3 dump time spent for: convert-wafl-dirs 35% lev0-ra 0%)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 3 averages (usec): wafl load buf time 1161 level 0 ra time 10)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: inodes dumped: 846)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: wafl data blocks read: 841249)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: average wafl data blocks per inode: 994)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4: average tape data blocks per inode: 3975)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Phase 4 throughput (MB sec): read 23 write 23)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Percent of phase4 time spent for: reading inos 0% dumping inos 99%)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Tape write times (msec): average: 0 max: 124)

dmp Fri Sep  2 09:19:03 BST /vol/BACKUP_TESTING/(0) Log_msg (Tape changes: 1)

dmp Fri Sep  2 09:19:04 BST /vol/BACKUP_TESTING/(0) Log_msg (Deleting "/vol/BACKUP_TESTING/../snapshot_for_backup.9208" snapshot.)

I'm getting the same 23MB/sec whether compression is on or off, by the looks of it.

I will try to work out how I can see the logs for the SAN switch I'm using.

As an aside, I initially connected the tape drive directly to the NetApp, but it wouldn't pick up the media changer and the tape drive together; the only way I managed to get it detected was via a SAN switch port.

Could there be a setting I'm missing that allows me to directly fibre-attach the tapes so that both the tape drive and the media changer LUN are detected? I could then test without the switch involved.

aborzenkov

I'm afraid I've run out of ideas for the moment...

 Could there be a setting I'm missing that allows me to directly fibre-attach the tapes so that both the tape drive and the media changer LUN are detected? I could then test without the switch involved.

Does your library offer FC interface mode configuration (usually AL/Arbitrated Loop and Fabric/Point-to-Point)? If so, it has to be set to Arbitrated Loop for a direct connection.

mattmusgrove1

Hi,

Did you resolve this problem? I have a similar issue. When I directly connect a Quantum Scalar i40 with LTO5 drives to my FAS3210, the devices are not detected. When connected through a fibre switch they are detected, but my backup rate is a very slow 6000 KB/s.

thanks

Matt

dpeverley

Hi there, thanks for everyone's help on this one.

I managed to get this working; a couple of changes were required to address the two separate issues.

Can't directly attach the tape drives to the NetApp - this was resolved by setting the topology for the tape drives on the i80 (web console > Setup > Drive Settings) to 'Loop (L)'. After rebooting the i80 and directly attaching it to the NAS, everything was picked up correctly.

Slow performance with the backups - I set NetWorker to use a 256KB block size for each drive and also disabled the HP network teaming on the server. It was set to aggregate 2x 1Gb links into one; when I set this to be a redundant failover pair instead, both the slow backup speeds and the failed backups were resolved.

Backup speed has been good with no failures since.
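For what it's worth, the teaming half of that fix is easy to reason about: link aggregation hashes each flow onto a single member link, so one NDMP stream never sees more than one link's bandwidth, while a misbehaving team can drop or reorder packets. A toy sketch of the capacity arithmetic (illustrative numbers only):

```python
# Capacity seen by a single backup stream vs. the whole aggregate.
links = 2          # team members in the original aggregate
link_mb_s = 125    # ~1 Gb/s expressed in MB/s

single_stream_cap = link_mb_s        # one flow rides one member link
aggregate_cap = links * link_mb_s    # only multiple flows can reach this

# A lone NDMP stream gains nothing from aggregation...
assert single_stream_cap < aggregate_cap
# ...and 23 MB/s is far below even one link, so the team was not a
# bandwidth fix here; it was the source of the errors and failures.
print(single_stream_cap, aggregate_cap)  # 125 250
```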
