ONTAP Discussions
Hi,
Updated: Red Hat Enterprise Linux 5 x86_64...
What is the correct filesystemio_options setting for 11gR2 running on NFS? Right now we are using "setall" and find that we run out of RAM as the memory cache balloons out of control, especially during RMAN operations. Some processes fail to spawn due to memory pressure. Should this be set to directio?
Mount options are (rw,bg,hard,rsize=32768,wsize=32768,nfsvers=3,actimeo=0,nointr,timeo=600,tcp). All of our kernel tunings reflect Oracle recommendations and NetApp TR-3633.
SGA is correctly sized for our environment.
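For reference, this is roughly how the parameter can be checked and changed for testing; a minimal sketch assuming a local "/ as sysdba" sqlplus login, and not a recommendation of any particular value:

sqlplus -s / as sysdba <<'EOF'
-- current setting ("setall" enables both async and direct I/O)
SHOW PARAMETER filesystemio_options
-- filesystemio_options is static, so a test change only takes effect after an instance restart
ALTER SYSTEM SET filesystemio_options=DIRECTIO SCOPE=SPFILE;
EOF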
sysctl.conf:
             total       used       free     shared    buffers     cached
Mem:         32187      31967        220          0         50      18736
-/+ buffers/cache:      13180      19007
Swap:        16063          0      16063
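For anyone comparing, here is a rough way to watch the page cache and dirty pages grow during an RMAN run; the interval and fields are just illustrative choices:

# watch page cache and dirty-page growth during an RMAN run (Ctrl-C to stop)
while true; do
  date
  grep -E '^(Cached|Dirty|Writeback):' /proc/meminfo
  free -m | grep '^Mem:'
  echo
  sleep 10
done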
Hi,
TR-3633 was last updated for 11gR1. For the environment you have described here, please refer to MetaLink document ID 880989.1.
To answer your question: setting filesystemio_options=DIRECTIO by itself is not enough to get better performance. Keep in mind that with direct I/O enabled, the database bypasses the large amount of memory previously used by the filesystem cache, and that caching must then be handled by the Oracle buffer cache. In such a scenario the DB_CACHE_SIZE parameter must be evaluated with care.
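One way to evaluate it is the buffer cache advisory; a minimal sketch, assuming db_cache_advice is left at its default of ON and a local "/ as sysdba" sqlplus login:

sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 120 PAGESIZE 100
-- estimated physical reads at candidate buffer cache sizes
-- (v$db_cache_advice is populated when db_cache_advice=ON, the default)
SELECT size_for_estimate AS cache_mb, size_factor, estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;
EOF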
I would also encourage you to re-evaluate the values you have assigned to:
1. kernel.shmall (the total number of shared memory pages that can be used system-wide)
2. kernel.shmmax (please make sure the value for shmmax is in bytes)
In addition to this, I can see you have listed the following parameters twice with entirely different values (see the sketch after this list for one way to spot such duplicates):
1. net.ipv4.ip_local_port_range
2. net.core.rmem_max
3. net.core.wmem_max
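A rough way to spot keys defined more than once and to compare them against what the kernel is actually using (assumes the stock /etc/sysctl.conf path):

# keys defined more than once in /etc/sysctl.conf
awk -F= '/^[^#[:space:]]/ { key=$1; gsub(/[ \t]/, "", key); count[key]++ }
         END { for (k in count) if (count[k] > 1) print k }' /etc/sysctl.conf

# values the kernel is actually using right now
sysctl kernel.shmmax kernel.shmall net.ipv4.ip_local_port_range net.core.rmem_max net.core.wmem_max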
Are you using Automatic Shared Memory Management?
Thanks, Naveen
naveenh wrote:
I would also encourage you to re-evaluate the values you have assigned to kernel.shmall and kernel.shmmax... In addition to this, I can see you have listed the following parameters twice with entirely different values: net.ipv4.ip_local_port_range, net.core.rmem_max, net.core.wmem_max.
Thank you. The double net.core* and net.ipv4.ip_local_port_range values were erroneously copied from another system. I have set them to the larger of the two values.
Am I understanding kernel.shmall and kernel.shmmax correctly?
shmmax = 64 GB, which is larger than physical RAM (32 GB) plus swap (15.5 GB).
shmall = 4294967296 pages, which at getconf PAGE_SIZE (4096 bytes) works out to 16 TB, far larger than anything available on the system.
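Spelled out with the numbers above (values are the ones quoted in this thread, shown only to make the units explicit):

PAGE_SIZE=$(getconf PAGE_SIZE)          # 4096 on this system
SHMMAX=68719476736                      # 64 GB expressed in bytes
SHMALL=4294967296                       # expressed in pages
echo "shmmax = $((SHMMAX / 1024**3)) GB"
echo "shmall * page size = $((SHMALL * PAGE_SIZE / 1024**4)) TB"   # 16 TB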
What would you recommend I re-evaluate them to? Please see my excessive cache value and low free memory...
Hi,
Is the problem specific to RMAN operations (backup, restore and recovery) in your environment?
Some background: within the SGA, the large pool is the memory area used for RMAN-specific operations, so RMAN and large-pool tuning are worth exploring here.
Suggestion:
LARGE_POOL_SIZE = number_of_allocated_channels * (16 MB + (4 * size_of_tape_buffer)); for backup to disk, size_of_tape_buffer is 0.
Regards,
Karthikeyan.N
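As a worked example of the formula above, assuming a hypothetical four channels backing up to disk (so the tape buffer term is 0):

CHANNELS=4
TAPE_BUFFER_MB=0
echo "LARGE_POOL_SIZE >= $((CHANNELS * (16 + 4 * TAPE_BUFFER_MB))) MB"   # 64 MB

# large_pool_size is dynamic; with ASMM in use it acts as a minimum for the pool
echo "ALTER SYSTEM SET large_pool_size=64M SCOPE=BOTH;" | sqlplus -s / as sysdba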
We eventually went with dNFS and are experiencing good-to-great performance. A Red Hat kernel update with some I/O-specific fixes resolved about 30% of our throughput issues during continued kernel NFS testing, but not the issue of slow cache flushes...
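For anyone following the same path, this is roughly how Direct NFS is enabled and verified on 11.2 (relink target and oranfstab layout per the 11.2 documentation; the server, path, export, and mount values below are placeholders, not a real config):

# enable the Direct NFS ODM library (dnfs_off reverts)
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

# optional oranfstab entry ($ORACLE_HOME/dbs/oranfstab or /etc/oranfstab), placeholder values:
# server: filer1
# path:   192.168.1.10
# export: /vol/oradata  mount: /u02/oradata

# after restarting the instance, confirm dNFS is actually in use
echo "SELECT svrname, dirname FROM v\$dnfs_servers;" | sqlplus -s / as sysdba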