In my environment, I have multiple terabytes of file data which I access over NFS from the grid. The average file size is less than 16 KB. Just to give you an idea, about 80% of the files are under 16 KB and 50% are 4 KB or less, so my performance figures are not that great. The workload is write-intensive, and since NetApp does all writes sequentially, I'm wondering how I can improve my storage performance further. The bottleneck is the huge amount of metadata from so many files, and my inode counts are enormous. Network bandwidth is not an issue because of the small file sizes.
Can someone who has experienced a similar environment give me some insight into tuning parameters that would improve storage performance? Also, is there an upper limit on the NFS IOPS that 3200-series filers can handle without a PAM module? And would a PAM II module be of some help to me, given the huge amount of metadata the filer has to handle, even though read operations are much less frequent than writes?
In environments like this (another grid customer) we used Flash Cache (PAM) and turned off caching of normal data blocks, leaving it in metadata-only mode. Metadata-only mode made a big difference compared to caching both data and metadata.
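For reference, metadata-only mode is controlled through the flexscale options. This is a sketch from memory of the 7-mode syntax; double-check the option names against your Data ONTAP release:

filer> options flexscale.enable on
filer> options flexscale.normal_data_blocks off
filer> options flexscale.lopri_blocks off

With normal_data_blocks off, the card only caches metadata blocks, which is what you want for an inode-heavy workload like yours.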
You can run Predictive Cache Statistics (PCS) to see the benefit by simulating both modes (there is a third low-priority mode, but it doesn't apply here), and work with your NetApp team to analyze the results to see if Flash Cache makes sense... PCS does take some resources, but it's worth running to see if it helps, and I bet it does. You can also set the predictive cache size to simulate the amount of cache you would add (1 or 2x 512GB for example, depending on your model... or even the new 1TB Flash Cache, if supported on your model).
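A rough sketch of running PCS without any card installed (again 7-mode syntax from memory; verify the option names and supported pcs_size values for your platform before using):

filer> options flexscale.enable pcs
filer> options flexscale.pcs_size 512GB
filer> options flexscale.normal_data_blocks off
filer> stats show -p flexscale-access

The flexscale-access preset shows the projected hit/miss rates, so you can compare metadata-only against data+metadata before spending money on the hardware.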
With the workload you have (write-intensive) I'd be curious to see your disk utilization. You can view this in Performance Advisor or by running the stats commands on the filer. I'd specifically like to see the output of this command:
filer> stats show disk:*:disk_busy
Also, how many disks do you have in your aggregate? What aggregate RAID group size? What disk speed and type (FC, SAS, SATA)?
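If it helps, those aggregate details can be pulled straight from the filer (7-mode commands; substitute your own aggregate name for aggr0):

filer> aggr status -r aggr0
filer> sysconfig -r

Both show the RAID group layout and the disk type and speed per disk, which is enough to tell whether the aggregate is simply short on spindles for this many small writes.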