Network and Storage Protocols

How do you verify you can transmit Jumbo frames between two filers?

russ_witt
14,443 Views

How do you test for the MTU between two filers to verify they can pass jumbo frames for SnapVault?

A standard method of determining whether your Ethernet communications channel can transmit a particular MTU is to "ping for the MTU."

The syntax to do this with Windows is shown below. Linux has similar capabilities to set a "do not fragment" bit, but I could find no such option in ONTAP.

What is the test method for ONTAP?


Ping for the MTU
================

Windows Ping Syntax:

ping -f -l 1472 4.2.2.2

-f = do not fragment (set the DF bit)
-l = ICMP payload size in bytes

1472 = the largest ICMP payload for a standard 1500-byte MTU
(1472 + 8-byte ICMP header + 20-byte IP header = 1500), a common
test size on DSL/Ethernet links
4.2.2.2 = a well-known public DNS server
(example only, use your destination instead)

Use this format of ping to determine the MTU
for the path between your source and destination.
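For reference, the Linux form I mentioned above is, if I recall correctly, along these lines (iputils ping; option names may vary by distribution):

ping -M do -s 1472 4.2.2.2

-M do = prohibit fragmentation (set the DF bit)
-s    = ICMP payload size in bytes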

When I try pinging with large packet sizes in ONTAP, the latency increases as shown below, which suggests the ICMP packets are being fragmented by the intermediate nodes.

Also, does anyone have any idea why 65456 bytes is the ONTAP limit in this case?

And yes, I know that I do not have jumbo frames configured on my 3050 below; I was just trying to work out the ping syntax for "do not fragment" and had not set up the intermediate nodes yet.

--- 10.10.5.103 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
3050B> ping -s -v 10.10.5.103 65457
ping: wrote 10.10.5.103 65476 chars, error=Message too long
ping: wrote 10.10.5.103 65476 chars, error=Message too long

--- 10.10.5.103 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
3050B> ping -s -v 10.10.5.103 65456
65472 bytes from 10.10.5.103 (10.10.5.103): icmp_seq=0 ttl=127 time=12.577 ms
65472 bytes from 10.10.5.103 (10.10.5.103): icmp_seq=1 ttl=127 time=12.360 ms
65472 bytes from 10.10.5.103 (10.10.5.103): icmp_seq=2 ttl=127 time=12.223 ms

3050B> ping -s -v 10.10.5.103
64 bytes from 10.10.5.103 (10.10.5.103): icmp_seq=0 ttl=127 time=0.506 ms
64 bytes from 10.10.5.103 (10.10.5.103): icmp_seq=1 ttl=127 time=0.632 ms
64 bytes from 10.10.5.103 (10.10.5.103): icmp_seq=2 ttl=127 time=0.509 ms

3050B> ifconfig -a
e0a: flags=0x2d48867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
        inet 10.10.211.7 netmask-or-prefix 0xffffff00 broadcast 10.10.211.255
        partner inet 10.10.211.6 (not in use)
        ether 00:a0:98:03:71:0a (auto-1000t-fd-up) flowcontrol full
e0b: flags=0xad48867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
        ether 02:a0:98:03:71:0a (auto-1000t-fd-up) flowcontrol full
        trunked jumbo
e0c: flags=0xad48867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
        ether 02:a0:98:03:71:0a (auto-1000t-fd-up) flowcontrol full
        trunked jumbo
e0d: flags=0x2508866<BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
        ether 00:a0:98:03:71:09 (auto-unknown-cfg_down) flowcontrol full
lo: flags=0x1948049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 8160
        inet 127.0.0.1 netmask-or-prefix 0xff000000 broadcast 127.0.0.1
        ether 00:00:00:00:00:00 (VIA Provider)
jumbo: flags=0x22d48863<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
        inet 10.10.211.207 netmask-or-prefix 0xffffff00 broadcast 10.10.211.255
        partner jumbo (not in use)
        ether 02:a0:98:03:71:0a (Enabled virtual interface)
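For reference, all of the data interfaces above are still at mtu 1500. When I do enable jumbo frames, I believe the 7-Mode syntax is along these lines (e0b is just an example interface):

ifconfig e0b mtusize 9000

with matching MTU settings on the switch ports and the far-end host, of course.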

3 REPLIES

brentlund
14,299 Views

Ever get the correct answer?

IGOR_KRSTIC
14,299 Views

Example: ESXi server, NFS, 9000 bytes MTU

SSH to the ESXi host, then from ESXi ping the NFS server X.Y.Z.Q:

~ # ping -4 -c 4 -d -s 8972 -v X.Y.Z.Q
PING X.Y.Z.Q (X.Y.Z.Q): 8972 data bytes
8980 bytes from X.Y.Z.Q: icmp_seq=0 ttl=255 time=0.655 ms
8980 bytes from X.Y.Z.Q: icmp_seq=1 ttl=255 time=0.616 ms
8980 bytes from X.Y.Z.Q: icmp_seq=2 ttl=255 time=0.605 ms
8980 bytes from X.Y.Z.Q: icmp_seq=3 ttl=255 time=0.659 ms

--- X.Y.Z.Q ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.605/0.634/0.659 ms

Then, you could try...

~ # ping -4 -c 4 -d -s 8973 -v X.Y.Z.Q
PING X.Y.Z.Q (X.Y.Z.Q): 8973 data bytes
sendto() failed (Message too long)
sendto() failed (Message too long)
sendto() failed (Message too long)
sendto() failed (Message too long)

--- X.Y.Z.Q ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
*** vmkping failed! (status 4) ***

ping arguments used in this example:

-4           use IPv4
-c <count>   set packet count
-d           set the DF (do not fragment) bit on IPv4 packets
-s <size>    set the number of ICMP data bytes to be sent. The default is 56,
             which translates to a 64-byte ICMP frame once the 8-byte ICMP
             header is added. (Note: these sizes do not include the IP header.)
-v           verbose

So...

8972 + 8 (ICMP header) + 20 (IP header) = 9000 => O.K.

8973 + 8 (ICMP header) + 20 (IP header) = 9001 => not O.K.
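As a general rule for IPv4, the ICMP payload to send is the MTU minus 28 bytes (20-byte IP header + 8-byte ICMP header):

9000 - 28 = 8972 (jumbo frames)
1500 - 28 = 1472 (standard MTU, which matches the Windows example earlier in this thread)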

Hope this helps.

BoredSysAdmin
10,956 Views

IGOR_KRSTIC had a good idea, but it doesn't help me, as we have a dedicated VLAN for storage. I would have to change the VMkernel default gateway (which I could, but I'm afraid it might also change it on the management interface).
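One thing I may try instead of touching the gateway (just an idea, I have not verified the exact options on our build): newer ESXi releases are supposed to let vmkping source from a specific vmkernel port, so the storage VLAN could be tested directly, something like:

vmkping -I vmk2 -d -s 8972 X.Y.Z.Q

where vmk2 is only a placeholder for whatever vmkernel port sits on the storage VLAN.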

 

I know the command to test jumbo frames from a filer running cDot firmware, but we are still running 8.2 7-Mode and I am struggling to find a ping with both a packet-size option and a fragmentation-disable flag; you need both to test it.
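(For reference, the cDot command I am thinking of is, from memory, something along the lines of network ping -vserver <svm> -lif <lif> -destination <ip> -packet-size 8972 -disallow-fragmentation true, so double-check the exact option names on your cluster; I cannot find an equivalent in 8.2 7-Mode.)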

 

Any suggestions/ideas?
