I really hope someone has come across this in the past.
We have an ESX server located in Boston. This Tuesday, all three NFS volumes became inaccessible to the ESX host. To my knowledge there have been no changes to either the ESX host or the NetApp; nothing was changed.
I have checked the export list on the filer and it checks out fine. I am not quite sure what the cause of this could be. I checked the qtree and its security style is set to UNIX.
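For reference, this is roughly how I verified the qtree on the filer console (a sketch for 7-mode ONTAP; your volume and qtree names will differ):

```shell
# On the filer console -- confirm the security style of the exported qtree
qtree status
# Check that the Style column shows "unix" for the qtree backing the datastore
```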
From the ESX host I can see all the security permissions and they look okay:
[root@bosvi4blsrvxx BOSVMLUN2]$ less -SRi /var/log/vmkernel
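Rather than paging through the whole log, something like this (same log path as above; the search patterns are just a guess at the relevant keywords) narrows it down to NFS and mount messages:

```shell
# Pull recent NFS/mount-related lines out of the VMkernel log
grep -iE 'nfs|mount' /var/log/vmkernel | tail -n 50
```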
Double-check the output of 'exportfs' on the filer and compare it to the output you listed.
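As a sketch (7-mode ONTAP filer console), the point is to compare the export table currently in effect with the persistent exports file, since the two can drift apart:

```shell
# Exports currently in effect (in-memory table)
exportfs
# Persistent exports file that is applied at boot / by exportfs -a
rdfile /etc/exports
# If the two disagree, re-apply the persistent file:
# exportfs -a
```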
Also, check the VMkernel interfaces in ESX. Are there any other VMkernel interfaces on the 10.40.2.0 network? It's possible that ESX is trying to mount the export using an interface with an IP address other than 10.40.2.100.
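On the ESX service console, something like the following would show which VMkernel interfaces and routes are in play, and test reachability from the VMkernel stack rather than the console OS (`<filer-ip>` is a placeholder for your filer's address):

```shell
# List all VMkernel NICs -- look for extra vmknics on 10.40.2.0/24
esxcfg-vmknic -l
# Show the VMkernel routing table
esxcfg-route -l
# Ping the filer from the VMkernel stack (not the service console)
vmkping <filer-ip>
```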
Lastly, in my experience, most "strange" NFS behaviors (i.e., nothing has changed, but it's now broken) turn out to be name-service related. Are your datastores mounted using IP addresses or host names? Is it possible that DNS records have changed for the host or for the filer?
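A quick way to check this on the ESX service console (`<filer-hostname>` is a placeholder for whatever name the datastore was mounted with):

```shell
# List NFS datastores and the host/share each one points at
esxcfg-nas -l
# Does DNS still return the address you expect for the filer?
nslookup <filer-hostname>
# Any stale static entries that could override DNS?
cat /etc/hosts
```

If the datastores are mounted by host name and the record changed, remounting by IP address is a quick way to confirm the diagnosis.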