Cluster-Node-1::> export-policy show
Vserver          Policy Name
---------------  -------------------
vserv1           default
vserv1           nfsPolicy
vserv1           qa
Cluster-Node-1::> export-policy rule show
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
vserv1       default         1       any      192.168.20.249        any
vserv1       nfsPolicy       1       any      192.168.20.249        any
2 entries were displayed.
Cluster-Node-1::> vol show -vserver vserv1 -volume share1 -policy nfsPolicy
                                 Vserver Name: vserv1
                                  Volume Name: share1
                               Aggregate Name: aggr1
List of Aggregates for FlexGroup Constituents: aggr1
                                  Volume Size: 1GB
                           Volume Data Set ID: 1026
                    Volume Master Data Set ID: 2149576388
                                 Volume State: online
                                 Volume Style: flex
                        Extended Volume Style: flexvol
                       Is Cluster-Mode Volume: true
                        Is Constituent Volume: false
                                Export Policy: nfsPolicy
                                      User ID: 0
                                     Group ID: 0
                               Security Style: unix
                             UNIX Permissions: ---rwxr-xr-x
                                Junction Path: /share1
                         Junction Path Source: RW_volume
                              Junction Active: true
                       Junction Parent Volume: vserVol
Note: We are able to mount on ONTAP 9.0 but not on ONTAP 9.3/9.4.

We have checked the export policy rule and also checked the policy on the volume, but still cannot figure out what is going wrong. Were there any changes between ONTAP 9.0 and the later 9.3/9.4 releases? We are able to mount on ONTAP 9.0 without issue.

Some outputs are given below:
Cluster-Node-1::> export-policy rule show -vserver vserv1 -policyname nfsPolicy -ruleindex 1 <<--- Rule in nfspolicy
                                     Vserver: vserv1
                                 Policy Name: nfsPolicy
                                  Rule Index: 1
                             Access Protocol: nfs
List of Client Match Hostnames, IP Addresses, Netgroups, or Domains: 192.168.20.247
                              RO Access Rule: any
                              RW Access Rule: any
 User ID To Which Anonymous Users Are Mapped: 65534
                    Superuser Security Types: any
                Honor SetUID Bits in SETATTR: true
                   Allow Creation of Devices: true
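One way to narrow this down before changing anything is to have ONTAP simulate the client's access directly (available since 8.3). This is a sketch assuming the client IP and volume from the outputs above; adjust `-client-ip` and `-protocol` to match the failing mount:

```
Cluster-Node-1::> vserver export-policy check-access -vserver vserv1 -volume share1 -client-ip 192.168.20.247 -authentication-method sys -protocol nfs4 -access-type read-write
```

The output walks the junction path and shows which policy and rule index granted or denied access at each level, which quickly reveals whether the default policy on the parent volume is blocking the traversal.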
Try changing the access rules from "any" to "sys". Change them for the default policy as well. There is a bug in how the NFS auth flavor is presented to the client; I ran into a similar issue with a RHEL 7 client.
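For reference, the suggestion above can be applied from the clustershell roughly like this (a sketch assuming rule index 1 in each policy, as shown in the earlier outputs; verify the index with `export-policy rule show` first):

```
Cluster-Node-1::> vserver export-policy rule modify -vserver vserv1 -policyname nfsPolicy -ruleindex 1 -rorule sys -rwrule sys
Cluster-Node-1::> vserver export-policy rule modify -vserver vserv1 -policyname default -ruleindex 1 -rorule sys -rwrule sys
```

No remount is needed on the cluster side, but clients may cache the old export response for a short time.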
I just ran into a problem (after upgrading from 9.3P6 to 9.4P6) where I could mount an NFS volume using 4.0, but would get an access denied error when I tried with 4.1. Changing the default export policy from RO any to RO UNIX in OCSM (which is equivalent to setting it to "sys" in the CLI) makes NFS 4.1 mounts work again. Interestingly, this seems to work even if the export policy on the specific volume I'm mounting is still set to any.
I tried to test this in ESXi 6.7 by changing the default export policy back to any and then adding an NFS datastore (as 4.1), and I can't recreate the problem. On an Ubuntu Xenial system, though, reverting the export policy change once again results in access denied errors.
I should note that explicitly passing the sec=sys option to the NFS mount command also solves the issue if your default export policy uses any instead of UNIX.
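For anyone who wants to try the client-side workaround, a minimal example from a Linux client (the data LIF address 192.168.20.10 and mount point are hypothetical; substitute your own):

```shell
# Force AUTH_SYS explicitly instead of letting the client negotiate,
# which works around the auth-flavor issue with "any" export policies.
sudo mkdir -p /mnt/share1
sudo mount -t nfs -o vers=4.1,sec=sys 192.168.20.10:/share1 /mnt/share1
```

If this mounts cleanly while the same command without sec=sys fails, that points at the export-policy auth-flavor behavior rather than a network or permissions problem.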