First, I hope I'm in the right area - I don't do this too often.
I have a FAS2240 dual-head filer, and an EMC Networker backup system with an FC-attached Quantum i80 library with LTO-6 drives.
I run NDMP backups of the filer volumes via the Networker server, and cannot get backup rates above 150 MBytes/sec. I figure I should get at least that on each tape drive (there are two in the library).
To get around the LAN speed limit, I have run two cables directly from the filer to the backup server, and configured the system so that the NDMP backup is sent over this aggregate (which is running in round-robin mode on the filer side - the load is sharing correctly across the two interfaces).
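Roughly, the aggregate config looks like this (a sketch assuming 7-mode ONTAP; the ifgrp name, the e0a/e0b ports, the address, and the exact round-robin flag are placeholders and may differ on your ONTAP release):

```shell
# Create a multi-mode ifgrp with round-robin transmit balancing over two ports
ifgrp create multi ifgrp1 -b rr e0a e0b
# Bring it up with jumbo frames on the point-to-point backup network
ifconfig ifgrp1 192.168.100.10 netmask 255.255.255.0 mtusize 9000 up
```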
The FC connection from the (dedicated) Networker server to the library goes through a 4 Gbit HP/Brocade SAN switch. That device never shows utilisation above about 20%, so I figure it's not part of the problem.
Without the load-balanced link, the backups top out at about 140 MBytes/sec.
The NDMP client in Networker is configured for smtape-type volume backups; the 'dump' type also works and is a little slower.
During the backup, the NetApp CPU doesn't get above about 40-50% max.
I figure that with two Ethernet lines I should be able to back up at at least about 180 MBytes/sec. MTU on this link is set to 9000.
Anyone have any ideas? I think I've played with every possible setting by now.
To test the fastest you can read off disk with NDMP, you can run "dump 0uf null /vol/volname" and watch the speed, since it writes to null. Clean up the snapshot afterwards and delete it... or Ctrl-C once you have a good sample of the read speed. You can also run "snapmirror store volname null" for a similar null test with smtape (depending on your ONTAP version, the snapmirror-to-tape command may be different).
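Spelled out, that read-speed test looks something like this on the filer console (7-mode syntax; `volname` is a placeholder, and the name of the snapshot that dump creates will vary):

```shell
dump 0uf null /vol/volname        # baseline dump read speed, output discarded to null
snap list volname                 # find the snapshot the dump created
snap delete volname <snapshot>    # clean it up afterwards
snapmirror store volname null     # similar smtape-style read test to null
```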
The backup is a single stream per volume. On gigabit, a single stream is throttled at the speed of one gig link. Even with two backups running concurrently, each individual one can't go faster than one wire. Link aggregation won't carry a single backup stream over both interfaces concurrently, unfortunately.
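The arithmetic behind that per-stream ceiling, as a rough sketch (the ~6% protocol overhead figure is an illustrative assumption, not a measured value):

```shell
# Why one NDMP session tops out around one gigabit, regardless of aggregation.
link_mbit=1000                        # one GbE link
raw_mbytes=$(( link_mbit / 8 ))       # 125 MB/s raw line rate
usable=$(( raw_mbytes * 94 / 100 ))   # ~117 MB/s after assumed ~6% protocol overhead
echo "per-stream ceiling ~ ${usable} MB/s"
# Two links double the AGGREGATE, but each single stream still rides one wire.
```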
Not sure of the exact hardware config on the 2240. Currently I have 4 GbE ports and 2 x 8 Gbit FC ports on each filer head. In the backup room where this system sits, though, the SAN switch is only a 4 Gbit model.
I guess 10 Gbit Ethernet is an option, but I'm trying not to spend more than I have to, and upgrading to 10 Gbit is not going to give me 10x performance because the LTO-6 drives themselves don't do much more than 160 MBytes/sec (depending on compression).
NDMP direct to the drive limits me to one session per drive, and I wasn't getting great backup rates that way either. It also ties the drives to one filer, so I'd have no way to back up the few volumes on the primary system that don't get snapmirrored to the backup system (where the backups run from).
NDMP doesn't multiplex, so source to target is a single stream to the drive. Sounds like gigabit is the bottleneck. But it might be worth checking whether Networker can write to a null device on the media server, to check throughput before tape.
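As a quick sanity check outside Networker itself, raw throughput through a null device on the media server can be sketched with dd (this assumes a Unix-like media server; Networker's own null-device target is configured separately and is the more realistic test):

```shell
# Generate a 1 GiB synthetic stream and discard it, timing the transfer.
# This bounds what the media server's CPU/memory path can push,
# with no tape or network involved.
dd if=/dev/zero of=/dev/null bs=1M count=1024
```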