ONTAP Discussions

creating volume best practice




Running ONTAP 9.5P6.


Right now we have one volume for our entire organization, about 15 TB. What I would like to do is break this volume apart by department, so in the end there will be about 20 volumes. Just wondering what the impact will be with more volumes. The only thing that comes to mind right now is that we run a nightly dedupe policy on the volume. Do I need to create another policy? Maybe split the volumes in half (10 and 10), each grouping with its own dedupe policy? Anything else that may impact performance?
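For illustration, the per-department split described above would just be a series of volume creates - a minimal sketch, where the vserver, aggregate, and volume names, sizes, and junction paths are all hypothetical:

```
volume create -vserver svm1 -volume dept_finance -aggregate aggr1 -size 1TB -junction-path /dept_finance
volume create -vserver svm1 -volume dept_hr -aggregate aggr1 -size 500GB -junction-path /dept_hr
```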



Hi there!


Flexible volumes can grow to 100TB, and with our FlexGroup technology, into multiple petabytes. Do you anticipate growth to beyond 100TB for your organisation in the near future? Might it just be easier to use group quotas? More details at https://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cm-vsmg%2FGUID-42718D7C-08DB-476B-A02E-1E223A3E383D.html
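If group quotas turn out to be the better fit, a rough sketch of what that looks like on the CLI (vserver, volume, and group names are hypothetical):

```
quota policy rule create -vserver svm1 -policy-name default -volume org_vol -type group -target "" -disk-limit 1TB
quota policy rule create -vserver svm1 -policy-name default -volume org_vol -type group -target finance -disk-limit 2TB
volume quota on -vserver svm1 -volume org_vol
```

The first rule is a default limit applied to any group without an explicit rule; the second overrides it for a specific group.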


Hope this helps!


100 TB is not happening in the near future.


I was more worried about the nightly tasks that will run on the volumes - whether too many tasks running at the same time would tax the system.


Regarding whether they should all be in one policy or not - they can all have the same policy but different schedules.


More volumes means more management, and in your context, responding to organisational change would be more difficult. You also lose cross-volume dedupe efficiency.


My take, as someone who professionally advises people on how to run their NetApp systems, is not to do it. Just use group quotas, in my opinion.


Hope this helps!


Speaking as a Perf TSE, more volumes can be helpful - it depends on the use case. I actually would disagree, as someone who sees performance problems all the time, but I suppose I'm jaded. 🙂


One option is that you could convert to a FlexGroup using ONTAP 9.7 (in-place conversion is a new feature there).
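For reference, the in-place FlexVol-to-FlexGroup conversion introduced in ONTAP 9.7 is a single command and supports a dry run first - a sketch, with hypothetical vserver and volume names:

```
volume conversion start -vserver svm1 -volume org_vol -check-only true
volume conversion start -vserver svm1 -volume org_vol
```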


Honestly, this is worth a good chat with your account team as there are advantages and disadvantages to both.


Regarding the original question, in theory your dedupe time should be the same, or maybe even faster, because it's multiple sets of data and ONTAP loves parallelism for getting things done faster compared to serial operations. What will likely happen is that if you have a single volume that takes 2 hours, it may only take 5 minutes per volume, adding up to roughly the same 2-hour total. You could even run one or two jobs in parallel, or more, depending on your available performance budget (Performance Capacity), and get it done even faster.
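One way to control that parallelism is to stagger the efficiency schedules so only a subset of volumes dedupes at any one time - a sketch, where all the schedule, policy, and volume names are hypothetical:

```
job schedule cron create -name dedupe-11pm -hour 23 -minute 0
job schedule cron create -name dedupe-1am -hour 1 -minute 0
volume efficiency policy create -vserver svm1 -policy nightly-11pm -schedule dedupe-11pm
volume efficiency policy create -vserver svm1 -policy nightly-1am -schedule dedupe-1am
volume efficiency modify -vserver svm1 -volume dept_finance -policy nightly-11pm
volume efficiency modify -vserver svm1 -volume dept_hr -policy nightly-1am
```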


Haha, yep, performance and parallelism is also a factor to consider.


If it's a system where this is your only workload, I assume it's a FAS2xxx - for the 25xx that means only two CPU cores per controller, while the 26xx has 6 and the 27xx has 12 (so 24 in an HA pair). Leaving a couple of cores for ONTAP, that's the point where parallelism stops.


So it comes back to individual admin choice - do you want to play off ease of management against potential performance issues? The impact of volume count on performance comes down to a number of factors too - how many people are using the system concurrently? How many disks have you got in the system?


There's no one right or wrong way to do it without considering the use case, as you've seen from the discussion between Paul and me here 🙂


Awesome!! Thank you all for responding!


Since I have you gurus in one spot, I would like to run this by you. This all started with a crypto attack that encrypted one share\folder of the big volume (the one in question) and one departmental volume. Restoring the departmental volume was an easy roll back to a previous snapshot. The share\folder of the bigger volume was a very long process: I couldn't roll back to a snapshot because that only works at the volume level, and going back to previous folder\file versions had issues due to long file names. So going forward, the easiest method seems to be creating volumes per department. Any recommendations or practices that would help out?
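For what it's worth, with per-department volumes that crypto recovery becomes a one-line rollback, and even on a big shared volume individual files can be pulled from a snapshot without rolling the whole volume back - a sketch, with hypothetical names, snapshot labels, and file paths:

```
volume snapshot restore -vserver svm1 -volume dept_hr -snapshot nightly.2020-01-01_0010
volume snapshot restore-file -vserver svm1 -volume org_vol -snapshot nightly.2020-01-01_0010 -path /vol/org_vol/share/file.docx
```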