I have run into a roadblock of sorts attempting to create an NFS 4.1 datastore in vSphere 6.0 on a NetApp FAS2552 running Clustered Data ONTAP (cDOT) 8.3.2. NetApp seems to allude to NFS 4.1 in some documentation, but I have yet to find an actual configuration guide for setting up NFS 4.1 for vSphere/ESXi 6.0 datastores. Similarly, the VMware compatibility matrix states that the combination of vSphere 6.0 and cDOT 8.3.2 supports NFS 4.1, but there is no guide from VMware specific to NetApp filers and NFS 4.1.
Below is the relevant configuration on the NetApp cDOT 8.3.2 filer:
FAS2552::vserver nfs> show -vserver NFS41_SVM
Vserver: NFS41_SVM
General NFS Access: true
NFS v3: disabled
NFS v4.0: enabled
UDP Protocol: enabled
TCP Protocol: enabled
Default Windows User:
NFSv4.0 ACL Support: disabled
NFSv4.0 Read Delegation Support: disabled
NFSv4.0 Write Delegation Support: disabled
NFSv4 ID Mapping Domain: defaultv4iddomain.com
NFSv4 Grace Timeout Value (in secs): 45
Preserves and Modifies NFSv4 ACL (and NTFS File Permissions in Unified Security Style): enabled
NFSv4.1 Minor Version Support: enabled
Rquota Enable: disabled
NFSv4.1 Parallel NFS Support: disabled
NFSv4.1 ACL Support: disabled
NFS vStorage Support: disabled
NFSv4 Support for Numeric Owner IDs: enabled
Default Windows Group: -
NFSv4.1 Read Delegation Support: disabled
NFSv4.1 Write Delegation Support: disabled
NFS Mount Root Only: enabled
NFS Root Only: disabled
Permitted Kerberos Encryption Types: des, des3, aes-128, aes-256
Showmount Enabled: disabled
Set the Protocol Used for Name Services Lookups for Exports: udp
NFSv3 MS-DOS Client Support: disabled
FAS2552::vserver export-policy rule> show
             Policy            Rule  Access   Client                RO
Vserver      Name              Index Protocol Match                 Rule
------------ ----------------- ----- -------- --------------------- ---------
NFS41_SVM    ESXi_NFS41_Policy 1     nfs4     10.80.20.22           sys
NFS41_SVM    ESXi_NFS41_Policy 2     nfs4     10.80.20.21           sys
FAS2552::volume> show -vserver NFS41_SVM -volume vol3_nfs
Vserver Name: NFS41_SVM
Volume Name: vol3_nfs
Aggregate Name: aggr1_sas_10K
Volume Size: 125GB
Volume Data Set ID: 1037
Volume Master Data Set ID: 2159429313
Volume State: online
Volume Type: RW
Volume Style: flex
Is Cluster-Mode Volume: true
Is Constituent Volume: false
Export Policy: ESXi_NFS41_Policy
User ID: 0
Group ID: 0
Security Style: unix
UNIX Permissions: ---rwxr-xr-x
Junction Path: /vol3_nfs
Junction Path Source: RW_volume
Junction Active: true
Junction Parent Volume: NFS41_SVM_root
Comment:
Available Size: 100.00GB
Filesystem Size: 125GB
FAS2552::network interface> show -vserver NFS41_SVM
            Logical            Status     Network            Current   Current Is
Vserver     Interface          Admin/Oper Address/Mask       Node      Port    Home
----------- ------------------ ---------- ------------------ --------- ------- ----
NFS41_SVM   NFS41_SVM_nfs_lif1 up/up      10.80.20.251/24    FAS2552A  a1a-20  true
And I see the following in the vmkernel.log on the VMware ESXi 6.0 host when attempting to mount the datastore via NFS 4.1 (without Kerberos):
Also, the ESXi NFS vmkernel port is on the same subnet and the same VLAN as the NetApp SVM LIF. There are no IP connectivity issues, and I have no problem mounting NFSv3 datastores from another SVM on the same FAS2552 filer. Here is a vmkping from the ESXi 6.0 host confirming IP connectivity to the SVM:
[root@esxihost:/tmp] vmkping 10.80.20.251
PING 10.80.20.251 (10.80.20.251): 56 data bytes
64 bytes from 10.80.20.251: icmp_seq=0 ttl=255 time=0.164 ms
64 bytes from 10.80.20.251: icmp_seq=1 ttl=255 time=0.169 ms
64 bytes from 10.80.20.251: icmp_seq=2 ttl=255 time=0.166 ms

--- 10.80.20.251 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.164/0.166/0.169 ms
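For reference, in case anyone wants to reproduce the mount attempt, an NFS 4.1 datastore mount on an ESXi 6.0 host looks like the following (the datastore name "nfs41_ds" here is just a placeholder, not a name from my environment):

```shell
# Mount an NFS 4.1 datastore on ESXi 6.0 with AUTH_SYS (no Kerberos).
esxcli storage nfs41 add -H 10.80.20.251 -s /vol3_nfs -v nfs41_ds

# Check the resulting mount state (Accessible / Mounted columns):
esxcli storage nfs41 list
```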
Would anyone happen to have any tips for resolving the "Permission denied" warnings, or be able to point me to NetApp documentation that specifically details the NFS 4.1 configuration required in cDOT to allow connectivity from an ESXi 6.0 host?
Please find the requested command output from the FAS2552 below:
FAS2552::> vol show -fields junction-path, policy
vserver volume policy junction-path
--------- ------ ------ -------------
FAS25521A vol0 - -
FAS25521B vol0 - -
7 entries were displayed.
Note that I have two storage virtual machines set up. NFS_SVM is currently serving NFSv3 volumes to the VMware ESXi 6.0 hosts with no issues. NFS41_SVM is the SVM I created in an attempt to serve NFSv4.1 volumes to the ESXi 6.0 hosts.
I'm not attempting to use VAAI yet; I'm simply trying to connect to NFSv4.1 exports to use as VMware ESXi 6.0 datastores.
I actually copied the same policy that I am using without issue for the NFSv3 exports the ESXi 6.0 hosts connect to. If the export policy works for NFSv3, shouldn't it also work for exporting an NFSv4.1 volume?
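One way to test this from the filer side (the command exists in cDOT 8.3) is `vserver export-policy check-access`, which walks the junction path and shows which policy grants or denies access at each level. A sketch, substituting one of my host IPs:

```shell
FAS2552::> vserver export-policy check-access -vserver NFS41_SVM \
    -client-ip 10.80.20.22 -volume vol3_nfs \
    -authentication-method sys -protocol nfs4 -access-type read-write
```

This evaluates every volume along the path, including the SVM root volume, so it should reveal exactly where access is being denied.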
Also, the NetApp KB article you referenced appears to apply only to IPv6, whereas I am attempting to connect the VMware ESXi hosts to NFSv4.1 exports via IPv4, as shown in the config output in my original post.
I'm still somewhat surprised at the apparent lack of documentation from NetApp on configuring NFSv4.1 exports for use as VMware vSphere/ESXi 6.0 datastores. I can't even find any mention of NetApp supporting NFSv4.1 session trunking.
Is anyone aware of any NetApp documentation that specifically details the NFS 4.1 configuration required in cDOT to allow connectivity from an ESXi 6.0 host?
I also think the problem is your default export policy, which is assigned to the root volume of your NFS41_SVM. If you compare the default export policy of your NFS_SVM with the one on your NFS41_SVM, I am pretty sure you will find the missing piece yourself. 🙂
Your volume vol3_nfs is mounted under the root volume NFS41_SVM_root.
/            (your root volume, with the default policy)
└── vol3_nfs (your NFS volume, with the ESXi_NFS41_Policy)
Now your ESX server wants to mount vol3_nfs ... and yes, you gave your ESX hosts everything they need for that volume.
BUT they first have to look (read) inside the root volume / to see the volume vol3_nfs. At this point your default policy steps in, acts like Gandalf, and tells your ESX host: "You shall not pass!" So you have to change your default policy so that the ESX hosts have read permission on the root volume. Hope that's the solution. If not, please let me know.
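As a sketch of that fix: a rule like the following on the policy attached to NFS41_SVM_root (here assumed to be named "default" — adjust to whatever your root volume's policy is actually called) gives the hosts read-only traversal access without granting write access to the root volume:

```shell
FAS2552::> vserver export-policy rule create -vserver NFS41_SVM \
    -policyname default -clientmatch 10.80.20.0/24 \
    -rorule sys -rwrule never -protocol nfs4 -superuser none
```

The `-rwrule never` part is deliberate: the ESX hosts only need to read the root volume to reach the vol3_nfs junction; they write only to vol3_nfs itself, which keeps its own ESXi_NFS41_Policy.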
I have one customer who is using NFS 4.1 with ESXi 6. Please keep in mind that some features are not available with NFS 4.1 (Storage DRS, Storage I/O Control, SRM, and Virtual Volumes). NFS 4.1 also generates higher CPU utilization on the storage controller.