Data Backup and Recovery

Mounting a FlexClone volume fails in an NFS environment (export.auto.update.disabled)

James_Pohl

SMO latest
SnapDrive latest
NFSv3
Oracle Linux 6.5
Data ONTAP 8.1.4 7-Mode

 

In the SMO 3.3 documentation I found the following:

 

Mounting a FlexClone volume fails in an NFS environment
When SnapManager creates a FlexClone of a volume in an NFS environment, an entry is added to the /etc/exports file. The clone or backup fails to mount on the SnapManager host with an error message.

 

The error message is:
0001-034 Command error: mount failed: mount: filer1:/vol/SnapManager_20090914112850837_vol14 on /opt/NTAPsmo/mnt/-ora_data02-20090914112850735_1 - WARNING unknown option "zone=vol14"
nfs mount: filer1:/vol/SnapManager_20090914112850837_vol14: Permission denied.

 

At the same time, the following message is generated at the storage system console:

 

Mon Sep 14 23:58:37 PDT [filer1: export.auto.update.disabled: warning]: /etc/exports was not updated for vol14 when the vol clone create command was run. Please either manually update /etc/exports or copy /etc/exports.new to it.

 

This message might not be captured in the AutoSupport messages.
Note: You might encounter a similar issue while cloning FlexVol volumes on NFS. You can follow the same steps to enable the nfs.export.auto-update option.

 

What to do

 

1. Set the nfs.export.auto-update option to on so that the /etc/exports file is updated automatically:

options nfs.export.auto-update on

Note: In a configuration with two storage systems, ensure you set the NFS exports option on for both storage systems.
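
For reference, checking and changing this from the 7-Mode console is straightforward; the output layout below is a sketch from memory rather than a capture from our system:

sangw1> options nfs.export.auto-update
nfs.export.auto-update       off
sangw1> options nfs.export.auto-update on
sangw1> options nfs.export.auto-update
nfs.export.auto-update       on

With the option on, ONTAP rewrites /etc/exports itself whenever a volume is created or cloned.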

 

 

The issue is that I can't have new volumes automatically exported to all hosts. Does anyone have a workaround that would allow SMO to verify/restore FlexClone backups without turning on:

 

options nfs.export.auto-update on
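
To make the ask concrete: I'd be perfectly happy to export each clone by hand to just the hosts that need it, something along these lines, where dbhost1 is only a placeholder for the SMO host:

sangw1> exportfs -io sec=sys,rw=dbhost1,root=dbhost1 /vol/SnapManager_20090914112850837_vol14
sangw1> exportfs -q /vol/SnapManager_20090914112850837_vol14

As I understand it, exportfs -io exports the path in memory without writing to /etc/exports, and exportfs -q shows the rule currently in effect for that path. What I don't know is whether SMO will tolerate a clone exported this way, which is really what I'm asking.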

 

 

4 REPLIES

rwelshman

Not an answer to your specific question, but regarding the default export for a new volume: it is taken from the export on vol0 (the root volume). So if you restrict vol0, new volumes would be restricted as well until you had a chance to remove or adjust the export accordingly.

For the FlexClone process, if auto-update is enabled, I believe it sets the export to the same as that of the parent volume.
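
A quick way to see what a new volume or clone actually picked up is to compare the rule in effect with what is on disk; the volume names here are just placeholders:

filer> exportfs -q /vol/parent_vol
filer> exportfs -q /vol/parent_vol_clone
filer> rdfile /etc/exports

exportfs -q shows the rule the filer is enforcing right now for each path, while rdfile shows what is persisted in /etc/exports and will be re-applied by exportfs -a.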

georgevj

To clarify:

 

1. This option not only affects the behaviour of newly created volumes; it also affects operations such as volume rename, destroy, and so on. If the option is set, ONTAP updates the /etc/exports file during any of these operations.

2. Newly created volumes do not necessarily inherit their export options from the root volume, especially if the root volume's options were manually modified after its creation. In fact, if an admin host was specified during initial setup, a newly created volume will have export options such as rw=adminhost,root=adminhost, and so on.

3. The admin host entry can be found in the /etc/hosts.equiv file. If no admin host is configured, the default is to export the volume to all hosts with an entry like "rw", without any host or network specification.

4. Most importantly, when a clone is created, it inherits its parent volume's export options. These can be changed later (see the example below).

So it should be perfectly okay to enable this option, as it won't adversely affect a newly created clone's export options.
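
As an example of point 4, tightening a clone's export after it has been created is one command; the host and volume names below are placeholders:

filer> exportfs -p sec=sys,rw=dbhost1,root=dbhost1,nosuid /vol/parent_vol_clone
filer> exportfs -q /vol/parent_vol_clone

exportfs -p rewrites the clone's entry in /etc/exports and re-exports the path with the new options, so the clone does not have to stay exported to all hosts.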


James_Pohl

Thanks for the update.

 

What you describe is not what I am experiencing.

 

 

I have:

sancontrol-01 etc]# grep vol0 exports

 

/vol/vol0 -sec=sys,rw=192.168.252.10:192.168.145.255:sancontrol-01.example.ca,anon=0,nosuid

 

sancontrol-01 etc]# cat hosts.equiv
#Auto-generated by setup Sat Dec 19 00:30:03 GMT 2009
sancontrol-01

 

 

When any volume was created, including clones, we would see a corresponding export like:

 

 

/vol/vol_23042015_085125 -sec=sys,rw,nosuid

 

and for a clone of the same volume:

 

/vol/vol_23042015_085125_clone_23042015_085227 -sec=sys,rw,nosuid

 

 

Note that vol0 isn't on the same aggregate as the new volume.

 

 

sangw1> vol status vol_23042015_085125
Volume State Status Options
vol_23042015_085125 online raid0, flex create_ucode=on, convert_ucode=on,
64-bit guarantee=none, fractional_reserve=0
Volume has clones: vol_23042015_085125_clone_23042015_085227
Volume UUID: 9ef2ca3d-e9d0-11e4-bf57-123478563412
Containing aggregate: 'aggr3_sata'

 

sangw1> vol status vol_23042015_085125_clone_23042015_085227
Volume State Status Options
vol_23042015_085125_clone_23042015_085227 online raid0, flex create_ucode=on, convert_ucode=on,
64-bit guarantee=none, fractional_reserve=0
Clone, backed by volume 'vol_23042015_085125', snapshot 'clone_vol_23042015_085.1'
Volume UUID: c1a68f55-e9d0-11e4-bf57-123478563412
Containing aggregate: 'aggr3_sata'
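
For what it's worth, this is roughly how those wide-open entries can be confirmed and cleaned up from the console afterwards; as I understand it, exportfs -z both unexports the path and removes its line from /etc/exports:

sangw1> exportfs -q /vol/vol_23042015_085125_clone_23042015_085227
sangw1> exportfs -z /vol/vol_23042015_085125_clone_23042015_085227
sangw1> rdfile /etc/exports

That is manageable for the odd clone, but it doesn't scale to every backup SMO creates, which is why I was hoping for a cleaner workaround.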

 

 

UNBC

After talking with support, the only option is to turn on "options nfs.export.auto-update".

 

Thanks to all who replied.
