Problem: Backing up directories with ~100,000 small files fails. We use NetBackup, and backing them up as NFS mounts on the Linux box times out. I've tested NetBackup timeout settings extensively; the backups still fail. Listing an individual file is fine, but listing the entire directory contents using the ls command is very slow (7-10 mins). This explains why the backups fail. Backing up the same data using NDMP via NetBackup runs at a respectable ~100MB/s.
My understanding is that NetApp with 100,000 small files will always exhibit this slow behaviour - true? Are there any technotes on this?
I wouldn't expect this for only 100K files, but the behavior is pretty typical, and not at all unique to NetApp. If you put a bunch of small files anywhere you'll get similar results. It's an issue with the filesystem having to scan that many inodes.
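To illustrate the point above: a plain `ls` typically sorts its output and, with common aliases or flags, stat()s every entry, which means touching every inode over NFS. A raw directory scan that avoids per-file stat calls is much cheaper. This is a minimal sketch, not part of the original thread, using Python's `os.scandir` on a small sample directory (the real case was ~100,000 files):

```python
import os
import tempfile

def fast_list(path):
    """List entry names without stat()-ing each file (similar to `ls -f`).

    os.scandir reads raw directory entries, so no extra inode lookup
    per file is needed just to get the names.
    """
    with os.scandir(path) as it:
        return [entry.name for entry in it]

if __name__ == "__main__":
    # Create a throwaway directory with 1,000 empty files as a stand-in
    # for the 100K-file directory described in the question.
    tmp = tempfile.mkdtemp()
    for i in range(1000):
        open(os.path.join(tmp, f"f{i}"), "w").close()

    names = fast_list(tmp)
    print(len(names))
```

Over NFS the difference between this and a stat-heavy listing is far more dramatic, since every stat is a round trip to the filer; `ls -f` (no sort, no stat) is the shell-level equivalent.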