
Oracle 11gR2 NetApp NFS


Hi,

 

Updated: Red Hat Enterprise Linux 5 x86_64...

 

What is the correct filesystemio_options setting for 11gR2 running on NFS?  Right now we are using "setall" and find that we run out of RAM as the filesystem cache balloons out of control, especially during RMAN operations.  Some processes fail to spawn due to memory pressure.  Should this be set to directio instead?

 

Mount options are (rw,bg,hard,rsize=32768,wsize=32768,nfsvers=3,actimeo=0,nointr,timeo=600,tcp).  All of our kernel tunings reflect Oracle recommendations and NetApp TR-3633.
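For reference, the same options expressed as an /etc/fstab entry (the filer hostname, export, and mount point here are hypothetical placeholders):

```
# Hypothetical filer name, export, and mount point; the options match those above
filer01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,rsize=32768,wsize=32768,nfsvers=3,actimeo=0,nointr,timeo=600,tcp  0 0
```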

 

The SGA is correctly sized for our environment.

 

sysctl.conf:

 

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
net.ipv4.ip_local_port_range = 9000 65500
fs.aio-max-nr = 1048576
vm.nr_hugepages = 6000
sunrpc.tcp_slot_table_entries = 128
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 262144 16777216
net.ipv4.tcp_wmem = 4096 262144 16777216
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_syncookies = 0
 
free -m
           total       used       free     shared    buffers     cached
Mem:         32187      31967        220          0         50      18736
-/+ buffers/cache:      13180      19007
Swap:        16063          0      16063

 

Re: Oracle 11gR2 NetApp NFS

Hi,

TR-3633 was last updated for 11gR1. For the environment you describe, please refer to the metalink document ID 880989.1.

To answer your query: setting filesystemio_options=DIRECTIO is not by itself sufficient to achieve better performance. Keep in mind that with direct I/O enabled, you bypass the large amount of memory previously used by the filesystem cache, so that caching must instead be handled by the Oracle buffer cache. In such a scenario the DB_CACHE_SIZE parameter must be evaluated with care.
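As a sketch only, the relevant pfile parameters might look like this; the cache size shown is purely an illustrative assumption, not a recommendation:

```
# Illustrative pfile fragment -- sizes are assumptions and must be tuned
filesystemio_options=DIRECTIO   # bypass the OS page cache on NFS
db_cache_size=8G                # enlarged to absorb the caching the
                                # filesystem cache used to provide
```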

I would also encourage you to re-evaluate the values you have assigned to:

1. kernel.shmall (the total amount of shared memory, in pages, that can be used system-wide)

2. kernel.shmmax (please make sure the value for shmmax is in bytes)

In addition, I can see you have set the following parameters twice, with altogether different values:

1. net.ipv4.ip_local_port_range

2. net.core.rmem_max

3. net.core.wmem_max
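Duplicates like these are easy to find mechanically; a minimal sketch, assuming the standard /etc/sysctl.conf path:

```shell
# Print any sysctl key that appears more than once in the file.
# Note: when "sysctl -p" is run, the last occurrence of a key wins,
# so earlier duplicates are silently ignored.
awk -F= '/^[a-z]/ { gsub(/ /, "", $1); seen[$1]++ }
         END { for (k in seen) if (seen[k] > 1) print k }' /etc/sysctl.conf
```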

Are you using Automatic Shared Memory Management?

Thanks, Naveen

Re: Oracle 11gR2 NetApp NFS


Thank you.  The duplicate net.core.* and net.ipv4.ip_local_port_range values were erroneously copied from another system.  I have set them to the larger of the two values.

Am I understanding kernel.shmall and kernel.shmmax correctly:

so shmmax = 64 GB, which is larger than physical RAM (32 GB) + swap (15.5 GB).

    shmall = 4294967296 pages * PAGE_SIZE (4096, from getconf PAGE_SIZE) = 16 TB, which is larger than anything available on the system.
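The arithmetic can be checked directly in the shell, using the values from sysctl.conf above:

```shell
# shmmax is expressed in bytes; shmall is expressed in PAGE_SIZE pages
page_size=4096          # from: getconf PAGE_SIZE
shmmax=68719476736
shmall=4294967296
echo "shmmax = $((shmmax / 1024 / 1024 / 1024)) GB"                      # prints: shmmax = 64 GB
echo "shmall covers $((shmall * page_size / 1024 / 1024 / 1024 / 1024)) TB"  # prints: shmall covers 16 TB
```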

What values would you recommend instead?  Please note the excessive cache value and low free memory...

# cat /proc/meminfo
MemTotal:     32959788 kB
MemFree:        260096 kB
Buffers:         30424 kB
Cached:       19188524 kB
SwapCached:          0 kB
Active:        1124972 kB
Inactive:     19100944 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:     32959788 kB
LowFree:        260096 kB
SwapTotal:    16449528 kB
SwapFree:     16449212 kB
Dirty:              64 kB
Writeback:           0 kB
AnonPages:     1023076 kB
Mapped:          67416 kB
Slab:            94804 kB
PageTables:      48164 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:  26785420 kB
Committed_AS:  3420744 kB
VmallocTotal: 34359738367 kB
VmallocUsed:    270868 kB
VmallocChunk: 34359466839 kB
HugePages_Total:  6000
HugePages_Free:   1975
HugePages_Rsvd:    916
Hugepagesize:     2048 kB
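One detail worth noting in the output above: the huge-page pool is pinned whether or not the SGA uses it, and its size can be computed from the HugePages_Total and Hugepagesize lines:

```shell
# Memory pinned by the huge-page pool, using the meminfo values above
hp_total=6000        # HugePages_Total
hp_size_kb=2048      # Hugepagesize, in kB
echo "$((hp_total * hp_size_kb / 1024)) MB pinned for huge pages"   # prints: 12000 MB pinned for huge pages
```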

Re: Oracle 11gR2 NetApp NFS

Hi,

It sounds like the problem is specific to RMAN operations (backup, restore, and recovery) in your environment.

Some background: within the SGA, the large pool serves RMAN-specific operations, so we can look into RMAN and large-pool tuning.

Suggestions

  1. If Automatic Shared Memory Management is enabled, the large pool is sized automatically based on system workload.
  2. Check the RATE channel parameter, which limits the maximum number of bytes RMAN reads each second on a channel, so that RMAN does not degrade performance by saturating disk bandwidth.
  3. Check the DBWR_IO_SLAVES value. If the large pool is not configured, the I/O buffers for the slaves are obtained from the shared pool in the SGA.
  4. Also check that the PROCESSES initialization parameter is high enough, because when DBWR_IO_SLAVES is configured, DBWR also uses slaves to move dirty data to disk.
  5. It is recommended to configure LARGE_POOL_SIZE for RMAN operations; otherwise the memory allocated from the shared pool is small, such as 5 KB in size.
  6. Check the POOL column of the V$SGASTAT view, which shows how much memory the large pool uses, and make sure the value is based on this formula:

           LARGE_POOL_SIZE = number_of_allocated_channels * (16 MB + (4 * size_of_tape_buffer)); for backup to disk, size_of_tape_buffer is 0
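As a worked example of the formula, with a hypothetical four channels backing up to disk (so size_of_tape_buffer = 0):

```shell
# LARGE_POOL_SIZE = channels * (16 MB + 4 * tape_buffer)
channels=4                       # hypothetical channel count
tape_buffer=0                    # 0 when backing up to disk
mb=$((1024 * 1024))
large_pool=$((channels * (16 * mb + 4 * tape_buffer)))
echo "LARGE_POOL_SIZE >= $((large_pool / mb)) MB"   # prints: LARGE_POOL_SIZE >= 64 MB
```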

Regards,

Karthikeyan.N

Re: Oracle 11gR2 NetApp NFS

We eventually went with dNFS and are seeing good-to-great performance.  A Red Hat kernel update with some I/O-specific fixes resolved about 30% of our throughput issues during continued kernel-NFS testing, but not the issue of slow cache flushes...