Network and Storage Protocols

NFS 4.1 (aka NFS4.1 aka NFSv4.1) Configuration for VMware vSphere 6.0 / ESXi 6.0

RCProAm
18,694 Views

I have run into a roadblock of sorts attempting to create an NFS 4.1 datastore in vSphere 6.0 on a NetApp FAS2552 running Clustered Data ONTAP (cDOT) 8.3.2. NetApp seems to allude to NFS 4.1 in some documentation, but I have yet to find an actual configuration guide on how to configure NFS 4.1 for vSphere/ESXi 6.0 datastores. Similarly, the VMware compatibility matrix states that the combination of vSphere 6.0 and cDOT 8.3.2 supports NFS 4.1, but there is no guide from VMware specific to NetApp filers and NFS 4.1.

 

Below is the relevant configuration I have on the NetApp cDOT 8.3.2 filer:

 

 

FAS2552::vserver nfs> show -vserver NFS41_SVM

                            Vserver: NFS41_SVM
                 General NFS Access: true
                             NFS v3: disabled
                           NFS v4.0: enabled
                       UDP Protocol: enabled
                       TCP Protocol: enabled
               Default Windows User:
                NFSv4.0 ACL Support: disabled
    NFSv4.0 Read Delegation Support: disabled
   NFSv4.0 Write Delegation Support: disabled
            NFSv4 ID Mapping Domain: defaultv4iddomain.com
NFSv4 Grace Timeout Value (in secs): 45
Preserves and Modifies NFSv4 ACL (and NTFS File Permissions in Unified Security Style): enabled
      NFSv4.1 Minor Version Support: enabled
                      Rquota Enable: disabled
       NFSv4.1 Parallel NFS Support: disabled
                NFSv4.1 ACL Support: disabled
               NFS vStorage Support: disabled
NFSv4 Support for Numeric Owner IDs: enabled
              Default Windows Group: -
    NFSv4.1 Read Delegation Support: disabled
   NFSv4.1 Write Delegation Support: disabled
                NFS Mount Root Only: enabled
                      NFS Root Only: disabled
Permitted Kerberos Encryption Types: des, des3, aes-128, aes-256
                  Showmount Enabled: disabled
Set the Protocol Used for Name Services Lookups for Exports: udp
        NFSv3 MS-DOS Client Support: disabled

 

 

FAS2552::vserver export-policy rule> show

             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
NFS41_SVM    ESXi_NFS41_Policy
                             1       nfs4     10.80.20.22           sys
NFS41_SVM    ESXi_NFS41_Policy
                             2       nfs4     10.80.20.21           sys

 

FAS2552::volume> show -vserver NFS41_SVM -volume vol3_nfs


                                   Vserver Name: NFS41_SVM
                                    Volume Name: vol3_nfs
                                 Aggregate Name: aggr1_sas_10K
                                    Volume Size: 125GB
                             Volume Data Set ID: 1037
                      Volume Master Data Set ID: 2159429313
                                   Volume State: online
                                    Volume Type: RW
                                   Volume Style: flex
                         Is Cluster-Mode Volume: true
                          Is Constituent Volume: false
                                  Export Policy: ESXi_NFS41_Policy
                                        User ID: 0
                                       Group ID: 0
                                 Security Style: unix
                               UNIX Permissions: ---rwxr-xr-x
                                  Junction Path: /vol3_nfs
                           Junction Path Source: RW_volume
                                Junction Active: true
                         Junction Parent Volume: NFS41_SVM_root
                                        Comment:
                                 Available Size: 100.00GB
                                Filesystem Size: 125GB

FAS2552::network interface> show -vserver NFS41_SVM
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
NFS41_SVM
            NFS41_SVM_nfs_lif1
                         up/up    10.80.20.251/24    FAS2552A     a1a-20  true

 

 

And I see the following in the vmkernel.log on the VMware ESXi 6.0 host when attempting to mount the datastore via NFS 4.1 (without Kerberos):

 

2016-10-27T22:07:19.207Z cpu14:34284 opID=25ab9a7c)NFS41: NFS41_VSIMountSet:402: Mount server: 10.80.20.251, port: 2049, path: /vol3_nfs, label: NFS41-Datastore, security: 1 user: , options: <none>
2016-10-27T22:07:19.207Z cpu14:34284 opID=25ab9a7c)StorageApdHandler: 982: APD Handle  Created with lock[StorageApd-0x4306cb687140]
2016-10-27T22:07:19.208Z cpu6:33546)NFS41: NFS41ProcessClusterProbeResult:3865: Reclaiming state, cluster 0x4306cb688340 [13]
2016-10-27T22:07:19.208Z cpu14:34284 opID=25ab9a7c)WARNING: NFS41: NFS41FSGetRootFH:4030: Lookup vol3_nfs failed for volume NFS41-Datastore: Permission denied
2016-10-27T22:07:19.208Z cpu14:34284 opID=25ab9a7c)WARNING: NFS41: NFS41FSCompleteMount:3558: NFS41FSGetRootFH failed: Permission denied
2016-10-27T22:07:19.208Z cpu14:34284 opID=25ab9a7c)WARNING: NFS41: NFS41FSDoMount:4168: First attempt to mount the filesystem failed: Permission denied
2016-10-27T22:07:19.208Z cpu14:34284 opID=25ab9a7c)WARNING: NFS41: NFS41_FSMount:4412: NFS41FSDoMount failed: Permission denied
2016-10-27T22:07:19.208Z cpu14:34284 opID=25ab9a7c)StorageApdHandler: 1066: Freeing APD handle 0x4306cb687140 []
2016-10-27T22:07:19.208Z cpu14:34284 opID=25ab9a7c)StorageApdHandler: 1150: APD Handle freed!
2016-10-27T22:07:19.208Z cpu14:34284 opID=25ab9a7c)WARNING: NFS41: NFS41_VSIMountSet:410: NFS41_FSMount failed: Permission denied

 

Also, the ESXi NFS VMkernel port is on the same subnet and the same VLAN as the NetApp SVM LIF. There are no IP connectivity issues, and I have no problem mounting NFSv3 datastores from another SVM on the same FAS2552 filer. Here is a vmkping from the ESXi 6.0 host confirming there is no IP connectivity issue to the SVM:

 

[root@esxihost:/tmp] vmkping 10.80.20.251
PING 10.80.20.251 (10.80.20.251): 56 data bytes
64 bytes from 10.80.20.251: icmp_seq=0 ttl=255 time=0.164 ms
64 bytes from 10.80.20.251: icmp_seq=1 ttl=255 time=0.169 ms
64 bytes from 10.80.20.251: icmp_seq=2 ttl=255 time=0.166 ms

--- 10.80.20.251 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.164/0.166/0.169 ms
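
 

For reference, the same mount can also be attempted from the ESXi shell rather than the vSphere Web Client; this is just a sketch of the command form (the host, share, and datastore label below are taken from the log above, but verify the option names against your build):

[root@esxihost:/tmp] esxcli storage nfs41 add -H 10.80.20.251 -s /vol3_nfs -v NFS41-Datastore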

 

 

Would anyone happen to have any helpful tips for resolving the "Permission denied" warnings, or be able to point me to any NetApp documentation that specifically details the NFS 4.1 configuration required in cDOT to allow connectivity from an ESXi 6.0 host?

 

Thank you in advance,

 

Ryan

14 REPLIES

dennis_von_eulenburg
18,477 Views

Hi Ryan,

 

Can you please post the output of the command "vol show -fields junction-path, policy"?

Naveenpusuluru
18,472 Views

Hi @RCProAm

 

Please change the rorule and rwrule to any and the superuser setting to sys. Also, I have one question: have you created a default policy and default rule with a clientmatch of 0.0.0.0/0? And have you enabled VAAI?
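
 

Something like this for the existing policy (just a sketch; adjust the policy name and rule indexes to match your setup):

FAS2552::> vserver export-policy rule modify -vserver NFS41_SVM -policyname ESXi_NFS41_Policy -ruleindex 1 -rorule any -rwrule any -superuser sys
FAS2552::> vserver export-policy rule modify -vserver NFS41_SVM -policyname ESXi_NFS41_Policy -ruleindex 2 -rorule any -rwrule any -superuser sys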

Naveenpusuluru
18,463 Views

Hi @RCProAm

 

Please go through the links below for the best practices.

 

7962: For ESXi NFSv4.1 known issues, workarounds, and best practices, please refer to the following KB article: https://kb.netapp.com/support/index?page=content&id=3014621&actp=LIST
7966: NFSv4.0 is not a supported version of NFS with any ESXi version.
 
 

RCProAm
18,429 Views

@Naveenpusuluru Thank you for your response.

 

Not attempting to use VAAI yet. I'm simply trying to connect to NFSv4.1 exports to use as VMware ESXi 6.0 datastores.

 

I actually copied the same policy that I am using without issue for the NFSv3 exports that the ESXi 6.0 hosts are connecting to. If the export policy is working for NFSv3, shouldn't it also work for exporting an NFSv4.1 volume?
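
 

For reference, the full rule settings on that copied policy can be listed with:

FAS2552::> vserver export-policy rule show -vserver NFS41_SVM -policyname ESXi_NFS41_Policy -instance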

 

Also, the NetApp KB article you referenced appears to apply only to IPv6; however, I am attempting to connect VMware ESXi hosts to NFSv4.1 exports via IPv4, as shown in the config outputs in my original post.

Naveenpusuluru
18,426 Views

Hi @RCProAm

 

Can you please post the output of below commands

 

export-policy show

 

export-policy rule show

RCProAm
18,422 Views

@Naveenpusuluru  I included the export policy output in my original post.

 

Sorry, the formatting is poor. I can't seem to get the NetApp community message posts to use a fixed-width font 😕

Naveenpusuluru
18,419 Views

Hi @RCProAm

 

For each and every SVM you need to create a default policy and a default rule with a clientmatch of 0.0.0.0/0. Please try this; it should work for you.
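
 

A sketch of what I mean (the default policy already exists on the SVM, so you only need to add the rule; adjust the vserver name to yours):

FAS2552::> vserver export-policy rule create -vserver NFS41_SVM -policyname default -clientmatch 0.0.0.0/0 -rorule sys -rwrule sys -superuser sys -protocol any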

Naveenpusuluru
11,935 Views

Please create a default export policy and a default export-policy rule for NFS41_SVM with a clientmatch of 0.0.0.0/0. It may resolve your issue; I have faced the same issue before.

RCProAm
11,929 Views

@Naveenpusuluru  Thanks for the tip about the default policy and default rule with a clientmatch of 0.0.0.0/0. I will check to make sure this exists on the NFSv4.1 storage virtual machine.

 

 

Just curious, have you successfully connected a VMware ESXi 6.0 host to a NetApp NFSv4.1 export?

Naveenpusuluru
11,909 Views

Sorry, I have never tested that. But it is standard practice to maintain a default policy for each and every vserver.

RCProAm
18,432 Views

@dennis_von_eulenburg Thank you for the response, Dennis.

 

Please find the requested command output from the FAS2552 below:

 

FAS2552::> vol show -fields junction-path, policy

vserver   volume         policy            junction-path
--------- -------------- ----------------- -------------
FAS25521A vol0           -                 -
FAS25521B vol0           -                 -
NFS41_SVM NFS41_SVM_root default           /
NFS41_SVM vol3_nfs       ESXi_NFS41_Policy /vol3_nfs
NFS_SVM   NFS_SVM_root   default           /
NFS_SVM   vol1_nfs       vmware_esxi_hosts /vol1_nfs
NFS_SVM   vol2_nfs       vmware_esxi_hosts /vol2_nfs
7 entries were displayed.

 

Note that I have two storage virtual machines set up. NFS_SVM is currently serving NFSv3 volumes to the VMware ESXi 6.0 hosts with no issues. NFS41_SVM is the SVM I created in an attempt to serve NFSv4.1 volumes to the ESXi 6.0 hosts.

RCProAm
18,427 Views

I'm still somewhat surprised at the apparent lack of documentation from NetApp with respect to configuring NFSv4.1 exports for use as VMware vSphere/ESXi 6.0 datastores. I can't even find any mention of NetApp supporting NFSv4.1 session trunking.

 

Is anyone aware of any NetApp documentation that specifically details the NFS 4.1 configuration required in cDOT to allow connectivity from an ESXi 6.0 host?

dennis_von_eulenburg
11,892 Views

Sorry for being late to the party.

 

I also think the problem is your default export policy, which is assigned to your NFS41_SVM root volume. If you compare the default export policy on your NFS_SVM with the one on your NFS41_SVM, I am pretty sure you will find the missing part yourself. 🙂

 

Your volume vol3_nfs is mounted under the root volume NFS41_SVM_root.

 

/              (your root volume with the default policy)
  vol3_nfs     (your NFS volume with the ESXi_NFS41_Policy)

 

Now your ESX server wants to mount vol3_nfs... and yes, you gave your ESX hosts everything they need for that volume.

BUT they first have to look (read) inside the root volume / to see the volume vol3_nfs. At this point your default policy appears, acts like Gandalf, and tells your ESX host "You shall not pass!" So you have to change your default policy so that the ESX hosts have read permission on the root volume. Hope that's the solution. If not, please let me know.
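
 

A sketch of what that could look like; I'm assuming the 10.80.20.0/24 subnet from your LIF and host IPs, and read-only on the root volume is enough:

FAS2552::> vserver export-policy rule create -vserver NFS41_SVM -policyname default -clientmatch 10.80.20.0/24 -rorule sys -rwrule never -superuser sys -protocol nfs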

 

 

I have one customer who is using NFS4.1 with ESXi 6. Please keep in mind that there are some features that are not available with NFS4.1 (Storage DRS, Storage I/O Control, SRM and Virtual Volumes). Also, NFS4.1 generates higher CPU utilisation on the storage controller.

 

 

 

aricade
11,566 Views

Had a similar problem. The answer to my issue was to set up a policy for the SVM's root volume as well as the SVM's data volume (which is the volume I wanted to add all my VMs to).

 

 

 

san1::> volume modify -vserver nfs1 -policy nfs1pol -volume nfs1_root -user 0 -group 1 -security-style unix -unix-permissions ---rwxr-xr-x -comment "nfs policy"
Volume modify successful on volume nfs1_root of Vserver nfs1.

san1::> volume modify -vserver nfs1 -policy nfs1pol -volume vol1_NFS_volume -user 0 -group 1 -security-style unix -unix-permissions ---rwxr-xr-x -comment "nfs policy"
Volume modify successful on volume vol1_NFS_volume of Vserver nfs1.

 

You need to give permission for the ESXi host to mount the root "/" and not just the volume "/vol1_NFS_volume".
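
 

A quick way to confirm that both volumes now carry the policy, and that the policy actually contains a rule matching your hosts, is something like this (names are from my environment, adjust to yours):

san1::> volume show -vserver nfs1 -fields policy -volume nfs1_root,vol1_NFS_volume
san1::> vserver export-policy rule show -vserver nfs1 -policyname nfs1pol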

 

That fixed it for me.
