ONTAP Discussions

What is the max NDMP performance/transfer rate to be expected?

galchen

Hi !

I'm suggesting that a customer stop using TSM agents for his 300 VMs on HDS, move them to NetApp, and use SMVI + NDMP instead.

Can anyone share the maximum NDMP transfer rate to expect from a 6080 running 7.3 with LTO4 tapes? (The number of tape drives is not limited; I can get as many as I recommend.)

Thank you.

4 REPLIES

josef_radinger

I would be interested, too.

System: 3020

LTO4, FC-attached, via IBM TSM 5.3

My measurements when dumping to null:

dmp Wed Nov 18 07:44:22 CET /vol/medispace/(0) Start (Level 0)
dmp Wed Nov 18 07:44:22 CET /vol/medispace/(0) Options (b=63)
dmp Wed Nov 18 07:44:22 CET /vol/medispace/(0) Snapshot (snapshot_for_backup.397, Wed Nov 18 07:44:21 CET)
dmp Wed Nov 18 07:44:25 CET /vol/medispace/(0) Tape_open (null)
dmp Wed Nov 18 07:44:25 CET /vol/medispace/(0) Phase_change (I)
dmp Wed Nov 18 07:45:45 CET /vol/medispace/(0) Phase_change (II)
dmp Wed Nov 18 07:46:00 CET /vol/medispace/(0) Phase_change (III)
dmp Wed Nov 18 07:46:08 CET /vol/medispace/(0) Phase_change (IV)
dmp Wed Nov 18 08:16:45 CET /vol/medispace/(0) Phase_change (V)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Tape_close (null)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) End (71252 MB)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (reg inodes: 103615 other inodes: 0 dirs: 8798 nt dirs: 0 nt inodes: 0 acls: 0)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 3: directories dumped: 8799)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 3: wafl directory blocks read: 8995)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 3: average wafl directory blocks per inode: 1)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 3: average tape blocks per inode: 1)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 3 throughput (MB sec): read 4 write 1)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Percent of phase3 time spent for: reading inos 0% dumping ino 87%)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Percent of phase3 dump time spent for: convert-wafl-dirs 83% lev0-ra 5%)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 3 averages (usec): wafl load buf time 311 level 0 ra time 23)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 4: inodes dumped: 103615)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 4: wafl data blocks read: 18217761)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 4: average wafl data blocks per inode: 175)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 4: average tape data blocks per inode: 701)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 4 throughput (MB sec): read 40 write 40)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Percent of phase4 time spent for: reading inos 0% dumping inos 99%)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Percent of phase4 dump time spent for: wafl read iovec: 70% lev0-ra 2%)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Phase 4 averages (usec): wafl read iovec time 1063 level 0 ra time 365)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Tape write times (msec): average: 0 max: 21)
dmp Wed Nov 18 08:16:49 CET /vol/medispace/(0) Log_msg (Tape changes: 1)

Seems to be very slow (about 40 MB/s).
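As a rough sanity check on that figure (just simple arithmetic from the log above, so treat it as an estimate): the dump ran from 07:44:22 to 08:16:49, i.e. 1,947 seconds, and moved 71,252 MB.

71252 MB / 1947 s ≈ 37 MB/s overall

That lines up with the Phase 4 throughput of 40 MB/s read/write reported in the log.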

rgraves2572

Environment

NetApp 2050

Two 7-disk RAID-DP aggregates of 1 TB SATA drives

HP LTO-4, 4 Gb FC

NetBackup 6.5.4 with the Shared Storage Option

NDMP dumps of SnapVault volumes run at 70-90 MB/sec.

The 2050 can only handle sending a single NDMP dump to one drive at a time. If I send a second NDMP dump to another drive, I see high CPU loads. I assume the larger controllers would easily handle multiple drives at the same time.
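If you want to see where the limit is while a second stream is running, one simple check (just a suggestion, not something I measured on the 2050) is to watch the extended sysstat counters at a one-second interval:

sysstat -x 1

The CPU and Tape kB/s columns show whether adding the second drive actually increases tape throughput or just pushes the CPU toward 100%.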

-Robert

eric_barlier

Hi,

If you are after checking throughput on the controller, you can test this by doing a dump to null of a volume on the controller:

date; dump 0f null /vol/vol0

Running the command above will show you in detail how long it takes to dump vol0. I believe NDMP uses dump under the hood, so this isolates the controller from any other limitations such as network/SAN or tape issues. It should give you a good idea of the expected throughput.
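If it helps, the detailed per-phase statistics (the same format as the log posted above) should also end up in /etc/log/backup on the controller, which you can read with:

rdfile /etc/log/backup

The End line shows the total MB dumped, so dividing that by the elapsed time between the Start and End timestamps gives the overall controller-side rate.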

Eric

stevedegroat

Make sure the VM environment has no misalignment, as I believe that is one reason for my slowness when backing up VM volumes (from SnapMirror). I have a 3160 (8.0.2P3) with 15K disks trying to send 24 x 1.5 TB volumes out to LTO3 tapes. I have two FC ports configured as initiators, and we're seeing 50-70 MB/s peak from the system. I broke the volumes down to run 5-6 per day, but they still take 20-30 hours to run (these are deduped volumes, so rehydration is needed).

This was taken just now with 6 NDMP backups running and inbound mirrors from the production systems.

CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                                          in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
99%      0      0      0       3   90206   3282  120316 128762      0  47033     5s    98%   96%  Hf   54%       3      0      0       0      0       0      0
99%      0      0      0       0   73173   2589  107754  89935      0  48908     5s    98%   90%  Zs   55%       0      0      0       0      0       0      0
97%      0      0      0       0   71027   2441  113594 110577      0  56741     5s    98%  100%  Zs   48%       0      0      0       0      0       0      0
97%      0      0      0       0   65305   2402  113220  75327      0  53022     6s    98%  100%  Zs   53%       0      0      0       0      0       0      0
98%      0      0      0       0   77251   2503  122379  95750      0  48029     6s    98%  100%  Zf   50%       0      0      0       0      0       0      0
99%      0      0      0       7   66954   2356  139913 106198      0  57149     7s    98%  100%  Zv   49%       7      0      0       0      0       0      0
99%      0      0      0     339   33515   1115  143937  54100      0  86590     7s    98%   99%  Zf   47%     339      0      0       0      0       0      0
99%      0      0      0       0   58591   1968  139635  97258      0  55979     7s    99%  100%  Zf   56%       0      0      0       0      0       0      0
99%      0      0      0       0   70017   2413  136683  80636      0  61761     7s    99%  100%  Zv   53%       0      0      0       0      0       0      0
99%      0      0      0       0   86483   2798  136521  92080      0  68402     5s    99%  100%  Zv   53%       0      0      0       0      0       0      0
99%      0      0      0       7   87185   3089  145459 137559      0  65341     6s    98%   99%  Zs   51%       7      0      0       0      0       0      0
99%      0      0      0       0  118244   3633  138931 106383      0  87924     3s    98%  100%  Zs   49%       0      0      0       0      0       0      0
97%      0      0      0       0   80027   2943  139214 143770      0  41001     5s    98%  100%  Zf   50%       0      0      0       0      0       0      0
97%      0      0      0       0   79169   2614  118847 103037      0  75111     3s    98%  100%  Hf   46%       0      0      0       0      0       0      0
97%      0      0      0       0   99457   3742  124381 169562      0  39815     3s    98%   96%  Zs   51%       0      0      0       0      0       0      0
97%      0      0      0       3  137791   4244  109982 132099      0  60395     2s    98%  100%  Hs   52%       3      0      0       0      0       0      0
