Dirs with large number of files (100k) too slow for backup - NDMP OK??
2013-12-03 03:37 PM
Problem: Backing up directories with 100,000 small files fails. Using NetBackup to back them up as NFS mounts on the Linux box times out. I've tested the NetBackup timeout settings extensively and the backups still fail. Listing an individual file is fine, but listing the entire directory contents using the ls command is very slow (7-10 minutes), which explains why the backups fail. Backing up the same data over NDMP via NetBackup runs at a respectable ~100 MB/s.
My understanding is that NetApp with 100,000 small files will always exhibit this slow behaviour - is that true? Are there any technotes on this?
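For reference, a quick way to confirm where the time goes is to compare a bare directory enumeration with a long listing: `ls -f` only reads the directory entries and skips sorting, while `ls -l` has to stat every file, which over NFS typically turns into per-file attribute requests on the wire. The mount path below is just a placeholder, not the real volume:

```
# Placeholder path - substitute the actual NFS mount of the NetApp volume
DIR=/mnt/netapp/smallfiles

# Bare enumeration: directory reads only, no per-file stat, no sorting
time ls -f "$DIR" > /dev/null

# Long listing: a stat (attribute lookup) for every file
time ls -l "$DIR" > /dev/null

# Optional: count the syscalls to see the per-file stat overhead directly
strace -c ls -l "$DIR" > /dev/null
```

If the `ls -f` run is fast and the `ls -l` run takes minutes, the time is going into per-file attribute lookups rather than the directory read itself.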
1 REPLY
I wouldn't expect this for only 100K files, but the behavior is pretty typical, and not at all unique to NetApp. If you put a large number of small files in one directory anywhere you'll get similar results. It's an issue with the filesystem having to scan that many inodes.
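As a rough illustration (the path and file count here are made up), you can reproduce the effect on any box by creating a directory with 100K tiny files and comparing a bare enumeration with a long listing; the gap gets much worse when every stat is a network round trip, as over an NFS mount:

```
# Throwaway test directory on a local filesystem
mkdir -p /tmp/manyfiles && cd /tmp/manyfiles

# Create 100,000 empty files (xargs batches the touch invocations)
seq 1 100000 | xargs -n 1000 touch

# Reading the directory entries is quick...
time ls -f . > /dev/null

# ...stat'ing every inode for the long listing is the slow part
time ls -l . > /dev/null
```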
Bill