ONTAP Discussions
Hi,
One of our volumes is currently 16.7TB. This strikes me as rather big and potentially dangerous down the line. I was wondering how we should go about breaking it up into more manageable pieces without continually relying on NDMP.
Hi,
What application is using the volumes?
16TB may sound big, but compared to FlexGroup volumes, which can grow beyond 100TB and into the PB range, it's nothing. So what exactly is your concern about the size?
Regards,
Pedro
The volumes are being used by clients and backed up by NDMP
If the concern is about the backup (NDMP), then a 16TB single volume can indeed be a pain, especially if it is taking longer and longer to finish. NDMP is by nature a lengthy process, particularly when file history is enabled and the volume is packed with millions of small files in a deep directory structure. In that situation, to reduce the NDMP backup time, it is advisable to break up the volume, or rather to move the dense directories to another volume (under qtrees). Distributing the data across qtrees lets NDMP run multiple streams, one per qtree, and use the inode file map for full/incremental backups instead of examining the tree file by file.
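As a rough sketch of the layout described above, the cluster CLI steps might look like this. All names here (SVM, volume, aggregate, qtree) are placeholders, and exact syntax can vary by ONTAP version:

```
::> volume create -vserver svm1 -volume vol_projects -aggregate aggr1
      -size 4TB -junction-path /vol_projects

::> volume qtree create -vserver svm1 -volume vol_projects -qtree team_a
::> volume qtree create -vserver svm1 -volume vol_projects -qtree team_b
```

Each qtree (e.g. /vol/vol_projects/team_a) can then be addressed as its own NDMP backup path, which is what allows the multi-stream approach.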
Yes, the volumes are being used by clients and backed up by NDMP
How would I go about moving the data? I fear the process will be extremely lengthy given the size of the client directories.
Then you should go with @Ontapforrum's suggestion (qtrees).
You could keep the same volume, create qtrees, and manually move the data into them. It would be a migration process, much like moving data to any other place...
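One hedged sketch of that manual move, done from an NFS client rather than on the controller (the mount points below are hypothetical, and moving data into a qtree is a full copy from the client's perspective, even within the same volume):

```
# /mnt/vol_big          -> existing 16.7TB volume, mounted over NFS
# /mnt/vol_big/team_a   -> newly created qtree in the same volume

# Copy preserving permissions/ACLs/hardlinks; resumable if interrupted
rsync -aHAX --partial /mnt/vol_big/projects/team_a/ /mnt/vol_big/team_a/

# Re-run with --dry-run --checksum to verify before deleting the source
rsync -aHAX --checksum --dry-run /mnt/vol_big/projects/team_a/ /mnt/vol_big/team_a/
```

For very large, file-dense trees, NetApp's XCP tool is generally much faster than rsync for this kind of migration and is worth evaluating first.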