One of our volumes is currently 16.7 TB. This strikes me as rather big and potentially dangerous down the line. I was wondering how we should go about breaking it up into more manageable sizes without continually using NDMP.
If the concern is the backup (NDMP), then a single 16 TB volume can indeed be a pain, especially if backups are taking longer and longer to finish. NDMP is by nature a lengthy process, particularly when file history is enabled and the volume is packed with millions of small files in a deep directory structure. In that situation, the usual advice for reducing NDMP backup time is to break up the volume, or rather to move the dense directories to another volume under qtrees. Distributing the data across qtrees lets NDMP run multiple streams (one per qtree) and use the inode file map for full/incremental backups instead of walking the tree file by file.
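As a rough sketch, on clustered ONTAP the restructuring might look something like the commands below. All the names here (svm1, aggr1, node1, the volume and qtree names) are placeholders, the 4 TB size is just an example, and the ndmpcopy path syntax differs between 7-Mode and clustered ONTAP, so verify everything against the docs for your release before running it:

    # Carve out a new, smaller volume on the same SVM
    volume create -vserver svm1 -volume vol_eng -aggregate aggr1 -size 4TB -junction-path /vol_eng

    # One qtree per dense top-level directory, so NDMP can back each up as its own stream
    volume qtree create -vserver svm1 -volume vol_eng -qtree projects
    volume qtree create -vserver svm1 -volume vol_eng -qtree archive

    # Copy the dense directories over, e.g. with ndmpcopy from the nodeshell
    # (path format shown is the clustered-ONTAP /vserver/volume style; 7-Mode uses /vol/volname)
    system node run -node node1 ndmpcopy /svm1/vol_big/projects /svm1/vol_eng/projects

NetApp's XCP tool is another common choice for the bulk copy and is usually much faster than ndmpcopy on millions of small files. Either way, verify the copied data and update exports/shares before deleting anything from the original 16.7 TB volume.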