Hi Christofer, I use a combination of tools to get a feel for the general performance of a filer without getting too deep into the details.
Logging onto the console and running the command sysstat -usx 1 can give you a good feel for what is happening (beware, these are averages over each interval): things like disk utilization, CPU, CP type, ops etc.
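A minimal run, assuming you're on the filer console (the filer> prompt below is just illustrative; stop the output with Ctrl-C once you've seen enough one-second samples):

filer> sysstat -usx 1

Keep an eye on the CPU, Disk util and CP ty columns; sustained high disk utilization or back-to-back CPs (usually shown as a B in the CP ty column) are the first things worth chasing.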
Further to this you can run commands such as statit -b (start collecting stats) and statit -e (end collecting and print the stats). These give you a lot of granular info which requires a little more experience to read and understand (essentially what a perfstat collects).
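A typical capture looks something like this; on the 7-Mode systems I've used, statit wants advanced privilege first, so check that on your version before relying on it:

filer> priv set advanced
filer*> statit -b
(let the workload you're interested in run for a few minutes)
filer*> statit -e
filer*> priv set admin

The -e output includes per-disk utilization and a CPU breakdown, so copy it off somewhere you can read it properly.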
We use NFS a fair bit, so nfsstat -d is another good one (reset the stats first with nfsstat -z).
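Again, a minimal sequence, so the counters only cover the window you actually care about rather than everything since boot:

filer> nfsstat -z
(run or wait for the workload you're investigating)
filer> nfsstat -d

Zeroing first makes the per-operation mix much easier to interpret.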
Another good tool that can help you drill down is Performance Manager, which lets you get a little more granular across your filers, covering both physical and logical objects. It has a good graphical interface that can give you historical data, depending on how you set it up, and from there you can set up alerts for things like latency and high disk utilization to make sure you don't miss any underlying issues.
As you scale and have a lot more filers, you can't be expected to log onto every one to monitor performance; that's where PowerShell and Performance Manager become crucial in managing your infrastructure. If you can nail your alerting, it will really help your fault finding.
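As a very rough sketch of the scripting side (the filer names and the C:\perf output path are made up, and it assumes you can SSH to the controllers from your admin host; if you have the NetApp DataONTAP PowerShell Toolkit installed, its cmdlets such as Connect-NaController are the nicer way to do this):

# Hypothetical list of controllers - swap in your own
$filers = 'filer01', 'filer02', 'filer03'
foreach ($f in $filers) {
    # Pull a point-in-time NFS picture from each filer over SSH
    # and keep a copy so you can compare runs over time
    ssh "admin@$f" "nfsstat -d" | Out-File "C:\perf\$($f)-nfsstat.txt"
}

Something like that on a schedule, plus decent Performance Manager alerts, means you only log onto the filers that actually need attention.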
I could go ON and ON, but over time you will work out what is useful and what isn't.