2011-03-10 02:08 PM
We have finally managed to get our NDMP backups through EMC NetWorker to an LTO-3 Ultrium jukebox doing something :-)
I would really appreciate some idea of what we can expect in terms of throughput or performance. We have no experience with this environment and are simply trying to set expectations based on other people's experience.
With a single group running against an IBM N-Series 6040, backing up a single volume, we are seeing the tape drive writing at a rate of ~7 MB/s. This seems a trifle slow, and at this rate a 1 TB volume is going to take a very long time to back up to tape.
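For a rough sense of the backup window at these rates, here is a back-of-the-envelope sketch (my own arithmetic, not from the thread), comparing the observed ~7 MB/s against the ~80 MB/s an LTO-3 drive can sustain natively:

```python
def backup_hours(volume_tb: float, rate_mb_per_s: float) -> float:
    """Wall-clock hours to stream volume_tb terabytes at rate_mb_per_s."""
    volume_mb = volume_tb * 1024 * 1024  # TB -> MB (binary units)
    return volume_mb / rate_mb_per_s / 3600

print(f"1 TB at  7 MB/s: {backup_hours(1, 7):.1f} h")   # ~41.6 h
print(f"1 TB at 80 MB/s: {backup_hours(1, 80):.1f} h")  # ~3.6 h
```

So at 7 MB/s a single 1 TB volume takes well over a day and a half of streaming.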
Any comments appreciated.
2011-03-29 02:38 PM
Surely others are using NDMP to back up volumes on their filers; the complete absence of comments is disappointing.
We ran a test backing up the same volume, first over NDMP and then over a CIFS share. In both cases the backup infrastructure was exactly the same, with no other activity on the filer or backup system.
NDMP -> 5-7 MB/s
CIFS -> 70-80 MB/s
Are we doing something wrong? Surely NDMP should perform better than this.
2011-03-29 11:02 PM
Can you please share the version of Data ONTAP and the kind of data set being backed up (i.e. lots of small files, large files, home directory files, etc.)? Also, have you been able to try this backup with an LTO4 tape drive?
2011-03-29 11:09 PM
Also, please let us know the version of the NetWorker software and whether the tape device is directly connected to the N Series (local backup) or to the NetWorker server (remote backup).
2011-03-30 12:07 AM
Backup performance is very much data-set specific.
7 MB/s is pretty slow even for an LTO3, but we do see numbers like this on volumes with high file counts.
Say we back up a volume of 20 million small files adding up to 80 GB; we see about 10 MB/s throughput.
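To illustrate why a high file count hurts a dump-based NDMP backup (my own arithmetic on the numbers above, not from the thread): with 20 million files totalling 80 GB, the average file is tiny, so 10 MB/s already means thousands of per-file inode/metadata operations per second.

```python
total_bytes = 80 * 2**30    # 80 GB data set
file_count = 20_000_000     # 20 million files
throughput = 10 * 2**20     # observed 10 MB/s

avg_file_bytes = total_bytes / file_count
files_per_sec = throughput / avg_file_bytes

print(f"average file size: {avg_file_bytes / 1024:.1f} KB")  # ~4.2 KB
print(f"files processed/s: {files_per_sec:.0f}")             # ~2441
```

At that average file size, the bottleneck is metadata handling per file rather than raw tape bandwidth.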
The following information would be helpful:
Version of ONTAP
Volume size
File count (number of inodes)
Average file size
The backup.log file from /etc/log (and the name of the volume being backed up)
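A sketch of how that information can be gathered from a 7-Mode filer console (standard Data ONTAP 7-Mode commands; exact output varies by version, and the volume name here is a placeholder):

```
filer> version                     # Data ONTAP version
filer> df -h /vol/volname          # volume size and usage
filer> df -i /vol/volname          # inodes used (file count)
filer> rdfile /etc/log/backup      # dump/NDMP backup log
```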
2011-03-30 04:39 AM
How is the library connected to the filer? Directly, via a switch, or via the backup server? Can you ensure that the NDMP traffic flows over FC?
5-7 MB/s is indeed very poor, but as the other posters mention, this depends on the data mix. I suggest you run some tests with big files (LUNs) for clear results.
80-100 MB/s on LTO3 should be normal.
2011-03-30 12:51 PM
OK, I suppose I should have put more information into the original post.
Filer: ONTAP 18.104.22.168, IBM N-Series 6040, 1 TB SATA drives with PAM II
NetWorker: 22.214.171.124, LTO3 connected to the storage node.
Data network: 10 Gbit NICs on the filer -> Cisco Nexus 5010 -> Cisco 4900M -> Cisco 4948 -> 1 Gbit -> storage node. We will shortly be building a 4x1 Gbit EtherChannel to the storage node. (NB: the CIFS backup did max out the network, and we got the expected throughput of around 70 MBytes/sec.)
Volumes: the volumes being backed up are all read-only SnapMirror targets. We have tried both a few large files and more, smaller files, but it doesn't seem to make any difference. None of the targets contain large numbers of small files.
2011-04-01 10:25 AM
Please open a support case with NetApp Global Support to get an expert's opinion on what is happening in your backup environment, since NDMP performance can be affected by a variety of factors, including topology (local, remote, etc.), directory structure, and file size.
2012-04-11 07:10 AM
I have the same concerns about the lack of information around NDMP performance. I currently have an open case for my issue: a 3160 cluster running 8.0.2P3 that is dedicated to SnapMirror destinations. Controller A runs strictly NFS volumes off to tape (2x FC ports through Cisco switches to LTO3) and sees a maximum throughput of 145 MB/s, which I am happy with. Controller B, on the other hand, writes strictly VMware mirrors to tape but only sees 50-75 MB/s using the same FC configuration. I have 24 x 1.5 TB deduplicated volumes I'm trying to send to tape, and I had to break the jobs up to run 4-5 volumes per day, Wed-Sun (fulls only). Each volume takes 20-30 hours to complete. We have been gathering perfstats, and we know there is misalignment within the VMware volumes (currently being corrected).
I am looking for any counters in Performance Advisor or DFM to measure tape writes (sysstat shows this, so it must be available somewhere). I can measure the throughput from the Cisco switches, but it would be nice to have data from the filer itself to show my management that NDMP is not a scalable solution. I would like to justify moving to SnapVault offsite to eliminate tape.
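On the filer-side counter mentioned here: in 7-Mode, `sysstat` run with the extended `-x` flag reports tape throughput columns alongside CPU, disk, and network, which can be logged from the console while a backup runs (a sketch; the interval is arbitrary):

```
filer> sysstat -x 5    # extended stats every 5 s, including Tape kB/s in/out
```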
2012-04-11 09:25 AM
Are there big differences in file counts between the volumes? That can make a big difference, as can the overall workload on each controller. With 8.0 and prior, NDMP runs out of one core, but with 8.1 people are going to see better performance since NDMP runs across multiple cores (it moved out of the Kahuna domain), so if there is a CPU bottleneck, that could help down the road once 8.1 is GA and you upgrade. I'm not sure about DFM counters for tape writes, but you could also dump to null to see the maximum speed at which the controller can read the volume (bypassing tape): dump 0f null /vol/volname