Lots of good questions.
The FCP adapters aren't Ethernet ports, so they won't show in the "network port show" output. Try "fcp adapter show" instead.
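For example, from the cluster shell (command path assumed from clustered ONTAP; exact syntax varies by version, so check with "?" tab completion on your release):

```
::> network fcp adapter show
(lists each node's FC target adapters with their status and speed;
 actual output depends on your version and hardware)
```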
There are internal connections between the nodes in that chassis, but they carry the HA interconnect, not the cluster network. Using redundant external 10GbE ports for the cluster network is consistent across all of the platforms; it lets a cluster scale nondisruptively by adding HA pairs.
To be in a supported config, you need the mezzanine cards. Whoever had it previously was running cluster mode, so they probably ignored the supported topologies and used two 1GbE ports for the cluster network at the CLI.
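You can confirm which ports the previous owner assigned to the cluster network from the cluster shell. A sketch, assuming a clustered ONTAP 8.x-style CLI (on newer releases the filter is an IPspace rather than a role):

```
::> network port show -role cluster
(shows which physical ports carry the cluster network on each node;
 if this lists 1GbE ports instead of the 10GbE mezzanine ports,
 that confirms the unsupported layout)
```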
There is more to wiping the nodes than running option 4. As you've noticed, some of the config is preserved elsewhere; you also need to run a wipeconfig. See this KB:
https://kb.netapp.com/support/index?page=content&id=1014631&actp=search&viewlocale=en_US&searchid=1472570906636
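The general shape of the procedure is below, but follow the KB for your exact release — the prompts and menu numbering are assumptions here, not gospel:

```
(interrupt the boot with Ctrl-C when prompted to reach the boot menu)
Selection (1-8)? wipeconfig
(the node reboots and clears the preserved cluster configuration,
 then you run option 4 to zero the disks and get a clean setup prompt)
```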
The HA errors you are seeing are probably transient during boot. Once both nodes are joined to the cluster you should be able to enable HA, or troubleshoot the interconnect.
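Once both nodes are in the cluster, you can check and enable HA from the cluster shell. A minimal sketch, assuming standard clustered ONTAP commands (verify the option names on your version):

```
::> cluster show
(both nodes should show as healthy cluster members)

::> storage failover show
(shows whether takeover is possible and why not, if it isn't)

::> storage failover modify -node <nodename> -enabled true
(enables storage failover; run for each node as needed)
```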
You said earlier you have the wrench ports cross connected. There are two types:
The "locked wrench port" connects internally to the e0P port (private network), also called the "ACP" port. It is used as an "Alternate Control Path" when external disk shelves are connected. If you look closely, the icon by the port has a padlock in the middle. When there are no external shelves, those ports are cross-connected between the nodes to close that loop.
The "wrench port" is a shared management port used by the onboard e0M interface (management network) and the internal Service Processor (SP). This port should be connected to your management network, or to your data network if you don't use a separate management network. On the 2240, it's a 10/100 port.
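Once the wrench port is cabled, you can confirm the SP picked up an address from the cluster shell. Assuming the standard clustered ONTAP command set (check availability on your release):

```
::> system service-processor network show
(shows each node's SP IP configuration and link status;
 useful for verifying the shared management port is actually up)
```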
Note that when you see SP in a NetApp context, it is referring to the out-of-band Service Processor on the node. Another vendor uses that acronym to refer to the Storage Processor, which we call a Node. Different vocabulary, overloaded acronyms.
By the way, which version of ONTAP is it running? It should print the version early in the boot process, or you can run 'version' at the cluster shell command line.
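For example:

```
::> version
(prints the NetApp Release string for the running ONTAP version;
 post that string back here and we can check supported configs against it)
```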
If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.