
cDOT NFS export failed, reason given by server: No such file or directory

PMSBoeblingen

I have a single FAS2650 running with clustered data ontap.

I created a vserver svmtest, a volume NFStest, and an export policy pol_test:

vserver export-policy create -vserver svmtest -policyname pol_test

Then I added two rules to pol_test:

vserver export-policy rule create -vserver svmtest -policyname pol_test -ruleindex 1 -clientmatch @testhosts,192.168.1.0/24 -protocol nfs -rorule sys -rwrule sys   -superuser sys  -anon 65534 -allow-suid true  -allow-dev true

vserver export-policy rule create -vserver svmtest -policyname pol_test -ruleindex 2 -clientmatch hostA,hostB,hostC -protocol nfs -rorule sys -rwrule sys   -superuser none  -anon 65534 -allow-suid false  -allow-dev false

and assigned the policy to my NFStest volume:

volume modify -vserver svmtest -volume NFStest -policy pol_test
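I believe the same can be checked from the CLI with something like:

volume show -vserver svmtest -volume NFStest -fields policy
vserver export-policy rule show -vserver svmtest -policyname pol_test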

The web GUI shows everything as configured above, but when I try to mount, either on hostA, on a host from @testhosts, or on a host from the 192.168.1.0/24 range, I get the following error:

mkdir -p /tmp/XXX
mount -vv -t nfs svmtest:/NFStest /tmp/XXX
mount.nfs: timeout set for Wed Aug  2 07:34:35 2017
mount.nfs: trying text-based options 'vers=4,addr=192.168.1.178,clientaddr=192.168.1.79'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'addr=192.168.1.178'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.1.178 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.1.178 prog 100005 vers 3 prot UDP port 635
mount.nfs: mount(2): No such file or directory
mount.nfs: mounting svmtest:/NFStest failed, reason given by server: No such file or directory

How can I troubleshoot the issue?

1 ACCEPTED SOLUTION

PMSBoeblingen

I finally found the solution myself.

volume show -vserver svmtest -volume NFStest -junction

Error: show failed: Volume NFStest in Vserver svmtest is not mounted in the namespace

So the error was that the junction was missing in the namespace.

I had been struggling with the export policies while using a volume I had created some days earlier, not being aware that the volume also had to be mounted in the SVM's namespace.

After creating the junction, the NFS mounts worked as desired:

volume mount -vserver svmtest -volume NFStest -junction-path /NFStest
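For anyone hitting the same problem, the junction can be verified afterwards with the same show command as above, and the client mount from my first post then succeeds:

volume show -vserver svmtest -volume NFStest -junction
mount -t nfs svmtest:/NFStest /tmp/XXX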


mbeattie

Hi,

I'd start by ensuring your data volume is mounted within the vserver's namespace:

cluster1::> volume show -vserver vserver2 -fields volume,junction-path
vserver  volume       junction-path
-------- ------------ -------------
vserver2 nfs_data_001 /nfs_data_001
vserver2 vserver2_root
                      /

Note: your data volume should be mounted within the namespace.

If your data volume is not mounted, use the "volume mount" command to mount it, e.g.:

cluster1::> volume mount -vserver vserver2 -volume nfs_data_001 -junction-path /nfs_data_001

Then ensure that the export policy and policy rules applied to the vserver's root volume enable read access for clients:

cluster1::> volume show -vserver vserver2 -volume nfs_data_001 -fields volume,policy
vserver  volume       policy
-------- ------------ -------
vserver2 nfs_data_001 default

cluster1::> export-policy rule show -vserver vserver2 -policyname default
             Policy          Rule   Access   Client                RO
Vserver      Name            Index  Protocol Match                 Rule
------------ --------------- ------ -------- --------------------- ---------
vserver2     default         1      any      0.0.0.0/0             any

Note: as the "default" policy applies to the vserver's root volume, ensure that an export policy rule exists to enable clients read access to volumes mounted in the namespace.

Use the "export-policy rule create" command to create a default rule for the vserver's root volume if no rule exists (which I think is the default), e.g.:

cluster1::> export-policy rule create -vserver vserver2 -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule none
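If mounts still fail once the volume is junctioned and the rules are in place, I believe newer ONTAP releases also let you test a specific client's access along the junction path with "vserver export-policy check-access" (the client IP below is just an example):

cluster1::> vserver export-policy check-access -vserver vserver2 -volume nfs_data_001 -client-ip 192.168.1.79 -authentication-method sys -protocol nfs3 -access-type read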

Hope that helps.

/Matt
