2014-08-14 07:14 AM
So I've searched high and low but haven't been able to find anything that documents this.
In 7-Mode, the default vFiler limit is 11 (and you have to modify it and reboot the controllers for the change to take effect) — is there a similar limit on the number of SVMs per node or per cluster in cDOT?
The only result I get when searching for anything related to SVM limitations, maximums, etc. is that you can manually set the maximum number of volumes per SVM, but that's it.
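For reference, that per-SVM volume maximum can be viewed and changed from the cluster shell. A quick sketch, assuming a cluster named cluster1 and an SVM named vs1 (both hypothetical names):

```
cluster1::> vserver show -vserver vs1 -fields max-volumes
cluster1::> vserver modify -vserver vs1 -max-volumes 500
```

No reboot is needed for this setting, unlike the 7-Mode vFiler limit.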
My first guess is that perhaps there actually isn't a configured maximum for SVMs per node or cluster, or that the limit is so high that it isn't worth documenting or worrying about.
Any assistance is greatly appreciated, and if it's possible to cite a reference, that would be great as well.
Solved!
2014-08-14 09:53 AM
Small clusters (4 nodes):
- Maximum number of storage virtual machines (SVMs), NAS: 256 per node, 512 per cluster
- Maximum number of storage virtual machines (SVMs), SAN: 250 per node, 500 per cluster

For medium (8-node) and large (24-node) clusters, the limits appear to be double those sizes.
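If you want to check how many data SVMs a cluster currently has against these limits, a quick way from the cluster shell (cluster name here is hypothetical; exact output varies by ONTAP release):

```
cluster1::> vserver show -type data -fields vserver
```

The line count of the output is the number of data SVMs, excluding the admin and node vservers.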
2015-04-27 11:46 PM
So it seems there is no performance impact from creating, say, 50 SVMs on a node (for example, from resource reservations on the physical node), or anything like that?
So is it recommended to create individual SVMs, each hosting just a few small volumes, rather than one SVM hosting 100 volumes?
Are there any field experiences or best practices regarding the performance impact on the node as a whole?
Thank you very much for your answers!