Hi
I'm trying to understand what's going on in my infrastructure.
I have an AFF200 with 10 x 960 GB SSDs, and the NFS LIF sits on a 10 Gigabit Ethernet link.
When I run a test I get the result below, and it is very hard to get past 230-300 MB/s over a 10 Gbit link:
1 (f=0): [f(1)][100.0%][r=142MiB/s,w=0KiB/s][r=36.4k,w=0 IOPS][eta 00m:00s]
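For context, those numbers look like a 4 KiB random-read job: 142 MiB/s at 36.4k IOPS works out to roughly 4 KiB per I/O. A job along these lines produces output in that shape (the parameters here are illustrative, not necessarily the exact ones from the run above):

    # illustrative fio job; file path, size, and queue depth are assumptions
    fio --name=randread --filename=/mnt/nfs/testfile --rw=randread --bs=4k \
        --size=4G --ioengine=libaio --iodepth=32 --direct=1 \
        --time_based --runtime=60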
I checked the network throughput on the filer side and it is not reaching 10 Gbit.
iperf3 between the NetApp and the node shows 9.5 Gbit/s.
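For reference, that was a plain TCP test of the same path, something like the following (server on one endpoint, client on the other; the address comes from the mount options below):

    iperf3 -s                        # on the server end
    iperf3 -c 10.10.10.253 -t 30     # on the client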
Notes: the client is running Ubuntu 18.04.
Mount options: rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.10.10.253,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.10.10.253
The real symptom is that some VMs start showing high "waiting for I/O", and when I go to the node and try to run ls on the mount point, the machine freezes for around 3-5 seconds just to list 60 files.
One thing I did was create another LIF with a different IP/port and remount on a second mount point. When the issue happens on the normal mount point, listing the second mount point works fine (I can time the comparison as shown below).
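To put numbers on that comparison while the stall is happening, something like this (the two paths are placeholders for the mount points on each LIF):

    time ls /mnt/lif1 > /dev/null    # stalls for 3-5 s during the issue
    time ls /mnt/lif2 > /dev/null    # returns immediately via the second LIF
    mountstats /mnt/lif1             # per-op NFS RTT (GETATTR/READDIR) from nfs-utils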
What can I check in this case?
Thanks
1 REPLY
It sounds like the only difference between the good and bad setups is the physical path the connection traverses. I would recommend looking at the ifstat counters on the two different ports. Packet loss can have a profound impact on TCP performance because of congestion-control algorithms. A packet trace collected against the bad port would allow you to rule the network in or out.
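For example, assuming e0e is the data port behind the problem LIF and ens3 is the client NIC (both names are placeholders for your environment):

    # on the ONTAP cluster shell: link error/drop counters for the filer port
    ::> node run -node <nodename> ifstat e0e
    # on the Linux client:
    ethtool -S ens3 | grep -iE 'drop|err|disc'        # NIC-level drops/errors
    netstat -s | grep -i retrans                      # TCP retransmissions seen by the client
    tcpdump -i ens3 host 10.10.10.253 -w stall.pcap   # capture while reproducing the slow ls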
Labels: NFS, Networking, FlexCache
