SolidFire and HCI

How do I fix the jumbo frame problem when deploying a 2+2-node NetApp HCI (H410C) with NDE?

Timous

I enabled jumbo frames on Switch 1, which connects directly to the compute node, but NDE still reports a jumbo frame problem.

What could be causing it?

My simple setup is shown in the attached screenshots below.

Thank you!

 

(Attached screenshots: WeChat 圖片_20220220104714.png, WeChat 圖片_20220220113732.png)


5 REPLIES

elementx

The iSCSI client and server should be able to communicate on the same VLAN. It's not clear from your post whether your compute network allows VLAN 112 and whether those interfaces came up with MTU 9000.
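To check the MTU from a Linux host, you can read it directly from sysfs (a sketch; `lo` is used here only so the command runs anywhere, so substitute your actual 10G interface or its VLAN 112 sub-interface):

```shell
# Print the current MTU of an interface via sysfs.
# Replace "lo" with your real interface (e.g. a Bond10G member NIC
# or its VLAN 112 sub-interface) and confirm it reports 9000.
IFACE=lo
cat /sys/class/net/$IFACE/mtu
```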

 

If the NetApp Deployment Engine fails because your network does not support jumbo frames, you can perform one of the following workarounds:

  • Use a static IP address and manually set a maximum transmission unit (MTU) of 9000 bytes on the Bond10G network.

  • Configure the Dynamic Host Configuration Protocol to advertise an interface MTU of 9000 bytes on the Bond10G network.

From: https://docs.netapp.com/us-en/hci18/docs/hci_prereqs_network_configuration.html#network-configuration-and-cabling-options
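For the second workaround, a DHCP server can advertise the interface MTU to clients via DHCP option 26. A minimal ISC dhcpd sketch (the subnet and range values below are placeholders for illustration, not your actual addressing):

```
# /etc/dhcp/dhcpd.conf (fragment) -- subnet and range are illustrative only
subnet 10.1.10.0 netmask 255.255.255.0 {
  range 10.1.10.100 10.1.10.200;
  option interface-mtu 9000;  # DHCP option 26: clients bring the interface up with MTU 9000
}
```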

 

You can monitor port traffic on the storage nodes' iSCSI ports to see whether traffic is arriving from the compute nodes' iSCSI interfaces and whether the VLAN and MTU are correct.
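One quick check from a Linux host is a don't-fragment ping sized for a 9000-byte MTU (a sketch; use your storage node's iSCSI IP as the target):

```shell
# A 9000-byte MTU leaves 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes of payload.
PAYLOAD=$((9000 - 20 - 8))
echo "jumbo ping payload: $PAYLOAD"
# From a Linux host, ping with the don't-fragment bit set so oversized frames fail loudly:
#   ping -M do -s "$PAYLOAD" -c 3 <storage-node-iSCSI-IP>
# If this fails while a default-size ping works, a link in the path
# is not passing jumbo frames end to end.
```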

Timous

On Switch 1, the two ports that connect to the compute node trunk an additional VLAN, but the other two ports, which connect to the storage ports, are access ports on VLAN 112 only.

Here is more detail from the log that NDE shows during deployment:

HCI-T00022-JUMBO-PING-10.1.10.11:0 | fabrixtools:1062 | WARNING | Failed to Jumbo Ping 10.1.10.1: with errors: 10.1.10.11 failed to respond to a jumbo frame ICMP request - exhausted all retries

elementx

So the ping with MTU 9000 can't get through.

Maybe the VLANs of the source and destination differ, maybe jumbo packets can't pass through a link, maybe one of the interfaces isn't up, etc.

NDE cannot know why it doesn't work, so you need to perform network troubleshooting and validation, then retry.

Timous

*Correction to the log quoted above. Here is what NDE actually shows during deployment:

HCI-T00022-JUMBO-PING-10.1.10.11:0 | fabrixtools:1062 | WARNING | Failed to Jumbo Ping 10.1.10.11: with errors: 10.1.10.11 failed to respond to a jumbo frame ICMP request - exhausted all retries

elementx

You didn't note what your storage node's IP address is. NDE runs on a storage node.

 

My first reply contains a suggested workaround, which is to set the IPs manually. The second suggestion is to log in via SSH and see how pings with jumbo frames travel (at L2 or via the management gateway); that is also recommended in the KB:

 

https://kb.netapp.com/Advice_and_Troubleshooting/Hybrid_Cloud_Infrastructure/NetApp_HCI/NDE_version_1.6_fails_during_pre-tests_on_Jumbo_frames

 

There are other workarounds in the KB, such as the one related to routing and network separation between iSCSI and management:

 

https://kb.netapp.com/Advice_and_Troubleshooting/Hybrid_Cloud_Infrastructure/NetApp_HCI/NDE_fails_network_validation_when_management_and_iSCSI_network...

 
