ESXi host directly attached to 2240 via 10Gb port somehow messing up network connectivity?

Hello all,

I ran into a network connectivity issue while configuring 2240 HA pair with 10GbE mezzanine cards.

Namely, once its initial configuration was finished and its basic (1Gb) network parameters were set up, both controllers were accessible to the servers (pinging in and out, LUN mappings, CIFS shares) as well as to the old storage system, replication between all of them was running well, DNS servers were reachable - all was well.

We then directly attached two ESXi hosts via 10Gb Ethernet to both controllers and enabled their 10Gb Ethernet ports (e1a, e1b). What happened next was that both controllers instantly lost their 1Gb network connectivity - they couldn't ping the gateway or the DNS servers, replication was broken, they couldn't even ping each other - while on the other hand the ESXi servers could see their LUNs (datastores) without a problem, so that bit was working fine. And sure enough, as soon as the 10Gb Ethernet ports were disabled, network connectivity returned.

It's worth mentioning that both the 1Gb and 10Gb ports draw IP addresses from the same subnet - I don't know whether this is an issue... Did I miss something?



Re: ESXi host directly attached to 2240 via 10Gb port somehow messing up network connectivity?

Apparently, once enabled, e1a - the port the ESX host is directly attached to - became the controller's default interface for that subnet, so traffic for the rest of the network went out a port that leads only to the ESX host, and the controller could no longer see the network.
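For anyone hitting the same thing, a quick way to confirm it on the controller is to check where the routing table points while e1a is enabled (a sketch assuming the 7-Mode CLI; `netapp>` is just the console prompt):

```shell
netapp> ifconfig -a      # list all interfaces with their IPs and subnet masks
netapp> route -s         # print the routing table; check which interface the default route uses
```

If the default route (or the route for the shared subnet) shows e1a instead of e0a, outbound traffic is being sent down the direct link to the ESX host.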

The resolution was to create a separate IP subnet for each ESX<->NetApp direct 10Gb connection, making e0a once again the default network interface. That also means that during a controller failover there may be no surviving 10Gb connection, so another path over the regular 1Gb subnet should be added on the ESX host (Configuration -> Storage Adapters -> iSCSI Software Adapter -> Dynamic Discovery -> Add), plus port binding. In the end all paths were visible through Configuration -> Storage -> right-click the datastore -> Properties -> Manage Paths.
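The same ESXi-side steps (extra 1Gb discovery target, port binding, rescan) can also be done from the CLI - a sketch assuming the software iSCSI adapter is vmhba33, the controller's 1Gb iSCSI address is 192.168.1.10, and the VMkernel port on the 1Gb subnet is vmk1 (all hypothetical values, substitute your own):

```shell
# Add the controller's 1Gb address as a dynamic-discovery (Send Targets) entry
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.10:3260

# Bind the VMkernel port on the 1Gb subnet to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1

# Rescan the adapter so the new path shows up under Manage Paths
esxcli storage core adapter rescan -A vmhba33
```

After the rescan, the extra path should appear alongside the 10Gb paths in Manage Paths.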