ONTAP Discussions

NFS share not mounting on VMware ESXi for ONTAP 9.3/9.4

Rinku02Bansal
13,995 Views

Hi All,

 

We have created an NFS share volume on a data Vserver, but we are not able to mount it on ESXi as a datastore and are getting an Access Denied error.

 

"NFS mount 192.168.33.81:/share1 failed: Unable to connect to NFS server."

 

However, when we run check-access in ONTAP we get the output below, yet we are still not able to mount.

Cluster-Node-1::> vserver export-policy check-access -vserver vserv1 -volume share1 -client-ip 192.168.20.249 -authentication-method sys -protocol nfs3 -access-type read-write
                         Policy     Policy      Rule
Path         Policy      Owner      Owner Type  Index   Access
------------ ----------  ---------  ----------  ------  ----------
/            default     vserVol    volume      1       read
/share1      nfsPolicy   share1     volume      1       read-write
2 entries were displayed.

 

<--Export Policy--> 

Cluster-Node-1::> export-policy show
Vserver        Policy Name
--------------- -------------------
vserv1          default
vserv1          nfsPolicy
vserv1          qa

 

 

Cluster-Node-1::> export-policy rule show
             Policy           Rule   Access    Client               RO
Vserver      Name             Index  Protocol  Match                Rule
------------ ---------------- -----  --------  -------------------  ---------
vserv1       default          1      any       192.168.20.249       any
vserv1       nfsPolicy        1      any       192.168.20.249       any
2 entries were displayed.

 

<VOLUME INFO>

 

Cluster-Node-1::> vol show -vserver vserv1 -volume share1 -policy nfsPolicy

Vserver Name: vserv1
Volume Name: share1
Aggregate Name: aggr1
List of Aggregates for FlexGroup Constituents: aggr1
Volume Size: 1GB
Volume Data Set ID: 1026
Volume Master Data Set ID: 2149576388
Volume State: online
Volume Style: flex
Extended Volume Style: flexvol
Is Cluster-Mode Volume: true
Is Constituent Volume: false
Export Policy: nfsPolicy
User ID: 0
Group ID: 0
Security Style: unix
UNIX Permissions: ---rwxr-xr-x
Junction Path: /share1
Junction Path Source: RW_volume
Junction Active: true
Junction Parent Volume: vserVol

 

 

Note: We are able to mount in ONTAP 9.0 but not in ONTAP 9.3/9.4.

 

Regards,

Rinku Bansal

 

 

 


12 REPLIES

Damien_Queen
13,951 Views

Rinku02Bansal
13,893 Views

Hello Damien_Queen,

 

We have referred to the link and followed the steps, but we are still not able to mount.

 

Regards,

Rinku Bansal

naveens17
13,942 Views

1. Check the export policy rules to see whether there is a rule that matches the client.

2. Check the policy that is associated with the volume (to confirm it has the correct export-policy rule); example commands are shown below.
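For example, both checks can be done with the following commands (using the Vserver, policy, and volume names from this thread):

Cluster-Node-1::> vserver export-policy rule show -vserver vserv1 -policyname nfsPolicy
Cluster-Node-1::> volume show -vserver vserv1 -volume share1 -fields policy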

Rinku02Bansal
13,898 Views

Hello Naveen,

 

We have checked the export policy rule and also checked the policy on the volume, but we are still not able to figure out what is going wrong. Are there any changes between ONTAP 9.0 and the latest 9.3/9.4? We ask because we are able to mount on ONTAP 9.0.

 

Some outputs are given below:

 

 

Cluster-Node-1::> export-policy rule show -vserver vserv1 -policyname nfsPolicy -ruleindex 1                  <<--- Rule in nfspolicy

Vserver: vserv1
Policy Name: nfsPolicy
Rule Index: 1
Access Protocol: nfs
List of Client Match Hostnames, IP Addresses, Netgroups, or Domains: 192.168.20.247
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any
Honor SetUID Bits in SETATTR: true
Allow Creation of Devices: true

 

Cluster-Node-1::> vol show -vserver vserv1 -volume share1 -field policy                                                  <<-- nfspolicy applied on volume 
vserver volume policy
------- ------ ---------
vserv1 share1 nfsPolicy

 

Cluster-Node-1::> export-policy check-access -vserver vserv1 -volume share1 -client-ip 192.168.20.247 -authentication-method sys -protocol nfs3 -access-type read-write
                         Policy     Policy      Rule
Path         Policy      Owner      Owner Type  Index   Access
------------ ----------  ---------  ----------  ------  ----------
/            default     vserVol    volume      1       read
/share1      nfsPolicy   share1     volume      1       read-write
2 entries were displayed.

 

 

 

 

 

Damien_Queen
13,937 Views
Check access from your host:

cluster1::*> vserver export-policy check-access -vserver vs1 -client-ip 1.2.3.4 -volume flex_vol -authentication-method sys -protocol nfs3 -access-type read

AlexDawson
13,651 Views
Hi there,

Have you checked the firewall policy on the LIF (“network interface show -vserver ... -instance”)? I’ve seen it set to mgmt instead of data a few times.
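For example, the check and (if needed) the fix would look roughly like this, assuming the Vserver and LIF names shown later in the thread (vserver1 / datalif); data is the standard firewall policy for NFS data LIFs:

Cluster-Node-1::> network interface show -vserver vserver1 -lif datalif -fields firewall-policy
Cluster-Node-1::> network interface modify -vserver vserver1 -lif datalif -firewall-policy data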

AlexDawson
13,649 Views
Oops - can you check if the client can ping the LIF?
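From the ESXi host this can be tested with vmkping, for example (vmk0 is an assumed VMkernel interface; substitute the vmkernel port that carries NFS traffic, and the data LIF address from the output below):

vmkping -I vmk0 192.168.32.81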

Rinku02Bansal
13,602 Views

Hi Alex ,

 

I changed the firewall policy on the data LIF to mgmt-nfs as shown below, but I am still facing the same issue.

 

Cluster-Node-1::> net interface show -vserver vserver1 -lif datalif -fields firewall-policy
  (network interface show)
vserver   lif      firewall-policy
--------  -------  ---------------
vserver1  datalif  mgmt-nfs

 

 

Moreover, I am not able to ping the data LIF:

Cluster-Node-1::> net int show
  (network interface show)
            Logical    Status     Network            Current            Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node               Port    Home
----------- ---------- ---------- ------------------ ------------------ ------- ----
Cluster
            Cluster-Node-1-01_clus1
                       up/up      169.254.30.48/16   Cluster-Node-1-01  e0a     true
            Cluster-Node-1-01_clus2
                       up/up      169.254.30.58/16   Cluster-Node-1-01  e0b     true
            Cluster-Node-1-02_clus1
                       up/up      169.254.93.59/16   Cluster-Node-1-02  e0a     true
            Cluster-Node-1-02_clus2
                       up/up      169.254.105.37/16  Cluster-Node-1-02  e0b     true
Cluster-Node-1
            Cluster-Node-1-01_mgmt1                                                    <-- Node 1
                       up/up      192.168.32.78/20   Cluster-Node-1-01  e0c     true
            Cluster-Node-1-02_mgmt1                                                    <-- Node 2
                       up/up      192.168.32.80/20   Cluster-Node-1-02  e0c     true
            cluster_mgmt                                                               <-- Mgmt Node
                       up/up      192.168.32.79/20   Cluster-Node-1-01  e0c     true
vserver1
            datalif    up/up      192.168.32.81/20   Cluster-Node-1-02  e0d     true   <<- DataLif is up but not able to ping this
8 entries were displayed.

 

With ONTAP 9.0 we are able to mount NFS even though we are not able to ping the data LIF there either.

 

Regards,

Rinku Bansal

AlexDawson
13,574 Views

Hi there,

 

Looks like you might have a network problem. If you are using the simulator, please ensure the network interfaces are assigned to the correct VLANs/VMware port groups.

 

If you are using a physical system, please consult your network team for design validation. 

 

Thanks!

Damien_Queen
10,209 Views

Try simply deleting the old LIF and creating a new one.
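A rough sketch of that, reusing the address, home node, and port from the net int show output above (the /20 netmask becomes 255.255.240.0; adjust the values to your environment):

Cluster-Node-1::> network interface modify -vserver vserver1 -lif datalif -status-admin down
Cluster-Node-1::> network interface delete -vserver vserver1 -lif datalif
Cluster-Node-1::> network interface create -vserver vserver1 -lif datalif -role data -data-protocol nfs -home-node Cluster-Node-1-02 -home-port e0d -address 192.168.32.81 -netmask 255.255.240.0 -firewall-policy data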

moep
10,225 Views

Try to change the access rules from "any" to "sys". Change it for the default policy as well. There is some bug in the NFS auth style presentation to the client. I ran into a similar issue with a RHEL 7 client.
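For example, a minimal sketch using the policy names from this thread:

Cluster-Node-1::> vserver export-policy rule modify -vserver vserv1 -policyname default -ruleindex 1 -rorule sys -rwrule sys
Cluster-Node-1::> vserver export-policy rule modify -vserver vserv1 -policyname nfsPolicy -ruleindex 1 -rorule sys -rwrule sys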

Shanewilliams (Accepted Solution)
9,773 Views

I just ran into a problem (after upgrading from 9.3P6 to 9.4P6) where I could mount an NFS volume using NFS 4.0 but would get an access denied error when I tried with 4.1. Changing the default export policy from RO "any" to RO "UNIX" in OCSM (which is equivalent to setting it to "sys" in the CLI) makes NFS 4.1 mounts work again. Interestingly, this seems to work even if the export policy on the specific volume I'm mounting is still "any".

 

I tried to test this in ESXi 6.7 by changing the default export policy back to "any" and then adding an NFS datastore (as 4.1), and I can't recreate the problem. On an Ubuntu Xenial system, though, reverting the export policy change once again results in access denied errors.

 

I should note that explicitly passing the sec=sys option to the NFS mount command also solves the issue if your default export policy uses "any" instead of UNIX.
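For example, on a Linux client the explicit sec=sys mount looks roughly like this (the mount point /mnt/share1 is only a placeholder), and the equivalent NFS 4.1 datastore add from the ESXi CLI is shown for reference (the datastore name share1_ds is also a placeholder):

sudo mount -t nfs -o vers=4.1,sec=sys 192.168.32.81:/share1 /mnt/share1
esxcli storage nfs41 add -H 192.168.32.81 -s /share1 -v share1_ds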
