2011-06-07 08:16 AM
We have recently upgraded from a FAS3020c to a FAS2040c. We have been backing up a volume that contains millions of files via NDMP, and this was already an issue on the old system. The volume is 2 TB, with 1739 GB in use, and backups have been almost impossible. We are using Backup Exec (latest version) and Data ONTAP 8.0.1P4. The backup is extremely slow (maximum 260 Mbyte/s). We back up several volumes, and it is really only this one that causes problems. I wonder if this is actually related to the millions of files?
Is there any command on the NetApp to list the number of files inside a volume, other than enabling quotas?
2011-06-07 12:50 PM
Yes, NDMP is not going to be your best bet on a file system like that. Even though it has been optimized over the last few years, NDMP is still essentially a simple filesystem dump. Walking through all 17M files and directories first (which is how an NDMP backup starts) takes a small eternity. After that the transfer speed is not bad, but you are probably digging through a lot more data than you need. You have a few options, but none of them is free:
1) Split the volume into smaller volumes and change how things are mounted on your servers so that everything looks the same to clients. You can then back up more quickly with multiple streams in parallel... up to a point, at least.
2) Use an NFS/CIFS client-based backup. This may be no improvement at all, however.
3) Use a snapshot-aware backup product like TSM or NetBackup. TSM can actually do CIFS (and NFS) based backups from just a list of file system changes that it gets from the filer... sort of an abbreviated NDMP backup.
4) Use an archiving solution that removes older, unused data and stores it on a separate system.
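As for counting the files: on Data ONTAP 7-mode (which 8.0.1 is for that platform) you can read the inode usage of a volume straight from the filer console, no quotas needed. A minimal sketch, where `vol_name` is a placeholder for your volume:

```
filer> df -i vol_name      # reports inodes used / inodes free for the volume
filer> maxfiles vol_name   # shows the current file count against the volume's inode limit
```

Since every file consumes an inode, the "iused" column of `df -i` is effectively your file count.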