ONTAP Discussions

wsize and rsize settings in /etc/fstab file not honored at mount time

netappmagic
11,107 Views

Linux Release 6.

 

In the /etc/fstab file, we set both the wsize and rsize NFS mount options to 131072. However, once the server is up and running, both values are reduced to 65536, as shown by "nfsstat -m".

 

How come? What can we do to make the settings in /etc/fstab stick?
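
For illustration, the entry looks something like this (the filer name and paths here are placeholders, not our real ones):

filer01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,vers=3,proto=tcp,rsize=131072,wsize=131072  0 0

But after mounting, "nfsstat -m" reports something like:

/u02/oradata from filer01:/vol/oradata
 Flags: rw,vers=3,rsize=65536,wsize=65536,proto=tcp,hard,...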

 

Thank you!

14 REPLIES

TMACMD
11,066 Views

What are the settings in ONTAP? The default is 64K.

There may have been a (display) bug in RHEL that was graciously showing you 128K, but underneath it may really have been 64K all along.

netappmagic
11,049 Views

Thanks, but my question is about the NFS mount options on Linux: the server doesn't honor 131072 in /etc/fstab and reduces the parameters to 65536 as the actual mount options. Why?

 

 

TMACMD
11,040 Views

I'll ask one more time...

What is the setting on the NFS server?

You're telling me the client setting; I don't care about the client right now. What is the server setting? It should be a NetApp, since that is the forum you're in.

netappmagic
11,036 Views

There are no such settings on the NetApp NFS vserver. wsize and rsize exist only on Linux.

TMACMD
10,983 Views

Oh, but you are so wrong! The options have *always* been there, even going back to 7-Mode.

 

They are not visible at the standard (admin) privilege level; you must use advanced. Then "vserver nfs show -instance" exposes them:

UDP Maximum Transfer Size (bytes): 32768
TCP Maximum Transfer Size (bytes): 65536
NFSv3 TCP Maximum Read Size (bytes): 65536
NFSv3 TCP Maximum Write Size (bytes): 65536

And to set them:

vserver nfs modify -tcp-max-xfer-size 65536 -v3-tcp-max-read-size 65536 -v3-tcp-max-write-size 65536
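
If you have not used advanced privilege before, the sequence is roughly this (the vserver name here is just an example):

set -privilege advanced
vserver nfs show -vserver svm1 -instance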

 

Like I said earlier:

What are the settings in ONTAP? The default is 64K.

There may have been a (display) bug in RHEL that was graciously showing you 128K, but underneath it may really have been 64K all along.

 

 

netappmagic
10,957 Views

Yes, you are correct. I missed your point.

 

On the NetApp, the setting is 64K. Does that mean I have to change it to 128K on the NetApp first, in order for 128K to take effect on Linux? And do I have to change all three parameters to 128K?

 

Thank you!

TMACMD
10,932 Views

As I mentioned earlier, even though you set it to 128K on the client, the client may have been pacifying you by displaying 128K (likely an RHEL display bug).

 

I remember, a long time ago, playing with 7-Mode and the TCP/UDP rsize/wsize options on both the client and the NetApp, from 1K all the way up to 64K.

In most cases 64K was ideal. In a handful of cases it was not, but as a recommended setting, 64K was it.

 

Believe me when I say a lot of testing has been done to determine that 64K is, in almost every case, the best setting.

Look at your own environment: how long do you think you have been using 128K when really it was only 64K? (The NetApp is set to 64K, so there is no way to go higher from the client.)

 

Anyway, if you want to experiment, I suspect you would need to change three options: TCP, v3-read, and v3-write. If you do change them, let us know in this thread how it went.
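
If you do try it, I would expect the NetApp side to mirror the modify command above with the larger values, something like this (the vserver name is an example, and I have not verified that every option accepts 131072, so check the allowed ranges first):

vserver nfs modify -vserver svm1 -tcp-max-xfer-size 131072 -v3-tcp-max-read-size 131072 -v3-tcp-max-write-size 131072

Then remount on the client with rsize=131072,wsize=131072 and check "nfsstat -m" again.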

 

Thanks

netappmagic
10,853 Views

Well, in an Oracle situation, wsize and rsize should be set to 128K: https://www.netapp.com/us/media/tr-3633.pdf

 

So, per the Oracle recommendation, we should change the values to 128K on both the NetApp and the client side. Right?

TMACMD
10,847 Views

Read that TR again. 

Nowhere in there does it even mention using anything other than 65536 for rsize/wsize with NFS.

 

The 128K would be 131072, which is mentioned only once, and only for a Solaris LUN mounted using ZFS.

 

128 is mentioned a number of times for slot tables. Absolutely set that and reboot! In the testing I used to do, that made the single biggest improvement if it was not already set.

You can try setting the numbers higher; you have guidance on how. If you do, report back.

 

I do not think any NFS best-practice TR uses 131072 as the rsize/wsize for NFS.

 

netappmagic
10,687 Views

Yes, you are correct again.

 

What about this one, then? https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/configuring-nfs-buffer-size-parameters-for-oracle-database.html#GUID-5F14F816-445B...

 

It indicates 32K is recommended for Oracle?

TMACMD
10,679 Views

Notice what it says there:

 

Set the values for the NFS buffer size parameters rsize and wsize to at least 32768.

I do not have access to the Oracle note mentioned.

 

You are also looking at Oracle (8) Grid Infrastructure with Database. That is OLD.

At this point, I think you may be grasping at straws.

 

The NetApp TRs out there clearly state to use 64K as a best practice. Time-tested and approved.

If you really, really want to try 128K (131072), then try it in a test environment and see how it goes. When I was experimenting many years ago, the 128K option was not available.

netappmagic
10,645 Views

I would now like to ask you about slot tables. According to TR-3633, both slot tables and max slot tables should be set to 128k, since the default of 16k is too small. Obviously, this applies only to RedHat 6.3 or below. In 6.3+, the default value is 2, which means it is adjusted dynamically according to need. It is not clear to me: do we still need to increase them to 128k in 6.3+?

TMACMD
10,641 Views

Another time-tested option.

 

Always set it to 128.

**It is not 128K -> set it to 128!**
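
On RHEL, that typically means lines like these in /etc/modprobe.d/sunrpc.conf (the exact file path can vary by release), followed by a reboot:

options sunrpc tcp_slot_table_entries=128
options sunrpc tcp_max_slot_table_entries=128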

 

netappmagic
10,546 Views

Thanks for the correction!

 

Are there any official NetApp docs that recommend changing the values to 128 in RedHat 6.3+, even though by default they are already dynamic?
