Hello @dormelchen,
The VMware best practices with ONTAP are documented in TR-4333. However, we don't generally make recommendations on datastore size because it depends on many factors in your particular deployment. For example:
- Are you using post-process deduplication or compression? If so, are those tasks able to complete in a timeframe that is acceptable for your environment?
- Are you backing up the entire datastore as a single entity (whether from the NetApp or from VMware)? If so, is the backup job completing within its assigned window?
- Are you replicating the volume (SnapMirror/SnapVault)? If so, are the replication task(s) completing in an acceptable timeframe?
Any of these may indicate that the volume is either a) too large, or b) experiencing too much data churn. Reducing the size of the volume helps if the issue is the sheer number of VMs creating/changing lots of data. If a small set of VMs is creating the churn, it may instead be necessary to spread those VMs across more datastores.
There are other considerations as well:
- At what granularity would you want to perform a restore operation? If the volumes are very large then restoring from tape or another source may take more time than your SLA allows.
- Are your volumes/datastores small enough that a volume move operation (to alleviate resource contention on a particular controller) completes in an acceptable timeframe?
This is really just scratching the surface. There is nothing inherently wrong with having a very large datastore (20, 50, 80+ TB); just be aware of the ramifications. I have often seen large NFS datastores used very effectively to provide storage for a small number of VMs with large capacity requirements but low IOPS requirements. Conversely, I've seen a VM with less than 100 GB of storage consume all available IOPS on a very large hybrid aggregate and negatively impact all of the other volumes hosted by that aggregate (SIOC and QoS are great tools for controlling this).
Regarding VAAI: if you're using a block protocol, it is most likely already enabled. If you're using NFS, you will need to install the NFS VAAI VIB on your hosts. To my knowledge, there are no risks to enabling and using VAAI.
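If you want to check from the ESXi CLI, here's a quick sketch (the plugin bundle path/name is just an example; use the actual package you download from the NetApp support site):

```
# Show VAAI (hardware acceleration) status for block devices --
# each primitive (ATS, Clone, Zero, Delete) is listed per device.
esxcli storage core device vaai status get

# For NFS, install the NetApp NAS VAAI plugin VIB on each host
# (path below is an example), then reboot the host.
esxcli software vib install -d /tmp/NetAppNasPlugin.zip

# Confirm the plugin is installed.
esxcli software vib list | grep -i netapp
```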
For block protocols, VAAI is critical because its atomic test-and-set (ATS) primitive significantly reduces the number of locks that must be acquired on the datastore, making VMFS scalability a very minor issue: the LUN spends more time serving data and less time locked for VMFS metadata operations. There are other benefits as well, for example offloaded clone and copy operations.
When using VAAI with NFS the primary benefit is the offloading of clone operations. A VMDK clone operation in the datastore becomes a single file FlexClone (a.k.a. SIS clone) on the ONTAP side, taking only a second or two regardless of the size of the VMDK.
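You can see the same mechanism directly from the ONTAP CLI; here's a hypothetical example (SVM, volume, and file names are placeholders):

```
# Single-file FlexClone (sis clone) of a VMDK within a volume --
# this is the operation the VAAI NFS clone offload performs under
# the covers. Names below are placeholders.
volume file clone create -vserver svm1 -volume ds_nfs01 \
    -source-path /vm1/vm1.vmdk -destination-path /vm1_clone/vm1.vmdk
```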
You can get more detailed information about what is and isn't supported for VAAI and each protocol here.
Hope that helps, please let me know if you have any other questions!
Andrew
If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO.