Hi. I have a question about the performance of accessing a large number of small files on NetApp storage.
I'm considering the best way to store circa 3 million files, most of which will be just over 1 KB, that are accessed by virtual machines. This will be a low-write, high-read dataset. The VMs are on VMware ESX datastores that sit on NetApp storage and are accessed over NFS. My alternatives are either to store all the small files inside a VMDK (on a filesystem with a 1 KB block size), or to put them on a dedicated NetApp volume accessed directly by the VM over NFS.
With WAFL using 4 KB blocks, there will be some overhead in both storage consumption and read speed if the files are accessed directly, but presumably the same overhead applies if they're packed inside a VMDK. Are there any NetApp recommendations for how best to handle this type of dataset?
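For what it's worth, here's the rough back-of-envelope arithmetic I've been using to compare the two layouts. The average file size and the assumption that files pack at the filesystem block size inside the VMDK are my own guesses, not measured figures:

```python
# Rough estimate of block-rounding overhead for ~3 million small files.
# All figures are illustrative assumptions, not measurements.

FILES = 3_000_000
AVG_FILE_SIZE = 1_100     # bytes; "just over 1k" per file (assumed average)
WAFL_BLOCK = 4_096        # WAFL allocates in 4 KB blocks
FS_BLOCK = 1_024          # candidate filesystem block size inside the VMDK

def allocated(size: int, block: int) -> int:
    """Bytes consumed once a file is rounded up to whole blocks."""
    return ((size + block - 1) // block) * block

logical = FILES * AVG_FILE_SIZE
on_wafl = FILES * allocated(AVG_FILE_SIZE, WAFL_BLOCK)   # one 4k block per file
in_vmdk = FILES * allocated(AVG_FILE_SIZE, FS_BLOCK)     # two 1k blocks per file

print(f"logical data:          {logical / 2**30:.2f} GiB")   # ~3.07 GiB
print(f"direct on WAFL (4k):   {on_wafl / 2**30:.2f} GiB")   # ~11.44 GiB
print(f"inside VMDK (1k fs):   {in_vmdk / 2**30:.2f} GiB")   # ~5.72 GiB
```

So even with the 4 KB rounding of the VMDK itself, packing the files into a small-block filesystem inside the VMDK looks like it should roughly halve the wasted space, assuming the files really do average just over 1 KB.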
Thanks in advance for any advice.
Jon