Since you cannot properly multipath NFS datastores (you can for physical redundancy, but not for link/speed aggregation), you will always end up with a point-to-point speed of 1/10 GBit, so you have to scale with parallelism. I therefore put as many machines into a datastore as a single 1/10 GBit link can handle, then create the next datastore for the next 1/10 GBit link 😉
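The "scale with parallelism" reasoning above can be sketched as simple arithmetic. This is only an illustration; the per-VM throughput and headroom figures are my own assumptions, not vendor guidance:

```python
# Back-of-the-envelope sizing: how many VMs fit behind one NFS link.
# All figures here are illustrative assumptions, not official guidance.

def vms_per_datastore(link_gbit: float, avg_mb_per_vm: float, headroom: float = 0.7) -> int:
    """Estimate how many VMs one point-to-point NFS link can carry.

    link_gbit      -- link speed in GBit/s (1 or 10, per the post above)
    avg_mb_per_vm  -- assumed average sustained throughput per VM in MB/s
    headroom       -- fraction of the link you are willing to commit
    """
    link_mb = link_gbit * 1000 / 8  # GBit/s -> MB/s (decimal units)
    return int(link_mb * headroom / avg_mb_per_vm)

# With ~2 MB/s average per VM and 70% link commit:
print(vms_per_datastore(1, 2))    # -> 43 VMs on a 1 GBit datastore
print(vms_per_datastore(10, 2))   # -> 437 VMs on a 10 GBit datastore
```

Once a datastore's VM population saturates one link, the next datastore (on its own link) is the only way to add aggregate bandwidth, which is exactly the parallelism described above.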
Besides that, I try to group machines of the same type (e.g. all Win2k3 or all Red Hat Linux) for maximum deduplication results. Backups using VSC 2.1 seem quite painless nowadays, no matter if it's 2 or 200 machines.
I'm not a customer, obviously, but I can share what I see in the field. To Thomas's point, it's not about the size of the datastore but the number of VMs contained within. We have tested up to 16 TB datastores with VMware with great success. However, most datastores I see are much smaller than that: I see the full range in the field, from a few hundred GB to 4-6 TB, some even larger.
In terms of average VM count, most NFS customers I know are in the 50-100 VMs per datastore range, with 500 VMs being my high-water mark.
In addition to the link limitation Thomas mentioned and the VMware snapshot limitation you mentioned (if you use them with VSC; many customers do not), you also have to be aware of the maximum dedupable volume size on your controller (at least until we are all on ONTAP 8 and it is 16 TB).
Factors such as snapshot schedules and replication schedules may also dictate that you have more datastores, and thus smaller datastores.
I know, it seemed like such a simple question right?
Our customers are between 500 GB and roughly 4-5 TB, depending on dedup as well as the maximum single dataset their NDMP backup can handle (I wouldn't recommend 16 TB datastores if you back these up to tape).
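The tape concern is easy to quantify: an NDMP dump of one volume is a serial stream, so the backup window grows linearly with datastore size. A rough sketch, where the tape drive speed is an assumed figure (in the ballpark of LTO-4 native throughput), not a spec:

```python
# Why very large datastores hurt tape backup: one volume = one serial
# NDMP stream, so backup/restore time scales with volume size.
# The default tape speed below is an assumption for illustration only.

def backup_hours(volume_tb: float, tape_mb_s: float = 120.0) -> float:
    """Hours to stream one volume to a single tape drive at tape_mb_s MB/s."""
    volume_mb = volume_tb * 1_000_000  # TB -> MB, decimal units
    return volume_mb / tape_mb_s / 3600

print(round(backup_hours(4), 1))   # -> 9.3 hours for a 4 TB datastore
print(round(backup_hours(16), 1))  # -> 37.0 hours for a 16 TB datastore
```

A 4-5 TB datastore still fits a nightly or weekend window on one drive; a 16 TB one clearly does not, which is why the backup target ends up capping datastore size in practice.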