Does anyone have an implementation guide for dNFS on RHEL?
Specifically, are mount options required on the Linux host in /etc/fstab or /etc/mtab?
From what I have read in the Oracle documents so far, these mount options are not relevant.
From TR 3633:
Direct NFS Configuration
The first step to using the Direct NFS client is to make sure that all of the Oracle Database files residing on the NetApp storage system volumes are mounted using kernel NFS mounts. Direct NFS does not require any special NFS mount options. However, it needs the rsize and wsize NFS mount options to be set to 32768 (32K) as the max value of DB_BLOCK_SIZE can be 32K.
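As a rough sketch of what that looks like in practice (the filer name, volume, and mount point below are made up, and the full recommended option list is in the NetApp KB/TR for your release), an /etc/fstab entry with the 32K rsize/wsize would resemble the sample here; the awk line is a quick self-contained check that the 32K options are present:

```shell
# Hypothetical /etc/fstab entry for an Oracle data volume
# (filer1, /vol/oradata1 and /mnt/oradata1 are example names).
# Written to a temp file so the option check below is self-contained.
cat > /tmp/fstab.sample <<'EOF'
filer1:/vol/oradata1  /mnt/oradata1  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768  0 0
EOF

# Print each NFS mount point whose options include the 32K rsize/wsize.
awk '$3 == "nfs" && /rsize=32768/ && /wsize=32768/ {print $2, "OK"}' /tmp/fstab.sample
```

This prints `/mnt/oradata1 OK` for the sample entry; on a real host you would run the same check against /etc/fstab itself.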
Yes, you must follow the KB article to set up the mount points and mount the volumes with the required options. Whether you use dNFS or not, the mount points should be created and the volumes mounted with OS commands. dNFS uses oranfstab to start operating on the database files, and its activity can be checked by querying the v$dnfs views (v$dnfs_servers, v$dnfs_channels, v$dnfs_files, v$dnfs_stats). However, you still require the mount points at all times.
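For example, once the instance is up with dNFS enabled, a quick check from SQL*Plus against two of those views could look like this (a sketch; it obviously requires a running 11g instance):

```sql
-- Which NFS servers the instance is talking to over dNFS
SELECT svrname, dirname FROM v$dnfs_servers;

-- Which database files are currently being served through dNFS
SELECT filename, filesize FROM v$dnfs_files;
```

If dNFS is not in use, these views simply return no rows.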
For implementation I would suggest you follow this general practice:
1. Create a file 'oranfstab' in one of the following locations: $ORACLE_HOME/dbs/oranfstab or /etc/oranfstab.
The instance searches them in that order and picks up the first one it finds for its use.
2. Then modify the above file to something like this:
server: FASController1 <-- Just a name; you can use anything
local: 10.238.162.233 <-- Local IP of the first Ethernet interface to be used for NFS; find it with ifconfig -a
path: 10.238.162.234 <-- IP of the NFS server (the NetApp controller); find it in System Manager or via SSH to the storage
local: 10.239.162.233 <-- Local IP of the second Ethernet interface to be used for NFS; find it with ifconfig -a
path: 10.239.162.234 <-- IP of the NFS server (the NetApp controller); find it in System Manager or via SSH to the storage
export: /vol/oradata1 mount: /mnt/oradata1
3. Then run the following commands to enable the ODM library:
$ cd $ORACLE_HOME/lib
$ cp libodm11.so libodm11.so_stub
$ ln -s libnfsodm11.so libodm11.so
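As an alternative to swapping the library by hand, on 11gR2 you can let the Oracle makefile do the same switch (this assumes $ORACLE_HOME is set and the instance is down; check your release's install guide before relying on it):

```shell
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk dnfs_on     # dnfs_off reverts to the stub library
```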
That's all you have to do. Bounce the instance and you should see from the v$dnfs views that it is now using dNFS.
ADR refers to the Automatic Diagnostic Repository, where all the diagnostic logs are stored. You can safely ignore this if you are not running 11gR1. That release had a special requirement of a different mount option for storing diagnostic logs on the NFS share, but it does not apply to any other release. Does that help? Please don't forget to mark this question as "Answered" if all your queries are resolved to your satisfaction.
No. Oranfstab would still be required even if you don't plan to use multiple paths. As for the second part of your question, for Linux with RAC the answer is yes, you should be good to go with the above options.
A short question about your last post: is there a specific reason why an oranfstab is necessary even when no multipath is configured? We are thinking about removing the oranfstab completely and only creating one if we need to configure multipathing.
I am actually trying to create a new volume/mountpoint structure for SMO, and so far I don't see any problem. It seems to be enough when the (d)NFS mount points of the DB are configured in /etc/fstab (OS: OEL 6.2; DB: > 184.108.40.206.6; dNFS).