I am new here so I am quite lost in this overwhelming world.
I want to install two nodes of the NetApp simulator on different physical hosts (so that I can work with a remote colleague).
Following the documentation, I created a bridged vmnet0 and three custom extra networks (vmnet1, vmnet2, and vmnet3). Each of those three has its own subnet (172.16.180.0/24 for one of them, and so on).
Now, by trial and error, I was able to assign two management interfaces (why are there two? They apparently show the same ports and behavior, at least when I ssh into them): cluster_mgmt and mgmt1. These are bound to port e0c (which 'leans' onto vmnet2) and port e0d (vmnet3), respectively.
That leads me to think that the remaining ports the setup wizard assigned to the cluster (e0a and e0b) must be one on the bridged vmnet0 interface and the other on the custom vmnet1.
What should I do with those?
Since my goal is to be reachable from the outside world, how can I change the address of the bridged port?
The two cluster network ports are used for the back-end cluster networking between nodes. Most clusters have at least two nodes, but single-node configurations are supported.
The mgmt1 interface is specific to that node; each node has its own mgmt1 interface. It is used when you must manage a particular node but, for whatever reason, can't do it through the cluster management interface. cluster_mgmt is a cluster-wide management interface that can move to any node within the cluster. Generally, this is the interface you would use for CLI and GUI access to the cluster.
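If it helps to see which interface is which, you can list the logical interfaces (LIFs) from the ONTAP clustershell. A rough sketch (the exact columns and role names can vary by ONTAP version, so treat this as illustrative, not verbatim):

```shell
# From the clustershell: show every LIF, its home node/port, and address.
network interface show

# Narrow it down to just the management LIFs by role.
network interface show -role cluster-mgmt,node-mgmt
```

The output should show cluster_mgmt and each node's mgmt LIF alongside the cluster LIFs sitting on e0a/e0b.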
Are you trying to build 2 single node clusters, or are you trying to build a single 2 node cluster?
To run the two nodes on different physical hosts, the e0a/e0b interfaces would need to be bridged, and those hosts would need to sit in the same L2 domain. The IPs will be auto-assigned from the 169.254.x.x link-local range, and that traffic can't route.
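To make the "can't route" point concrete: 169.254.0.0/16 is the IPv4 link-local block (RFC 3927), which hosts self-assign and routers never forward. A small shell sketch (the addresses are illustrative):

```shell
# Routers never forward 169.254.x.x traffic, which is why both hosts'
# cluster ports must share a single L2 broadcast domain.
is_link_local() {
  case "$1" in
    169.254.*) return 0 ;;   # link-local: reachable only on the local segment
    *)         return 1 ;;   # anything else: potentially routable
  esac
}

is_link_local "169.254.10.5"  && echo "cluster LIF address: link-local, not routable"
is_link_local "172.16.180.10" || echo "mgmt LIF address: routable"
```

This is exactly why the cluster interconnect works when bridged within one LAN but not across a router.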
The e0c/e0d management/data interfaces would also need to be bridged, but given valid IPs on that L2 subnet so you can manage the cluster and pass data to it.
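As a sketch of how you would put cluster_mgmt on a routable address once the port is bridged (the vserver name, address, netmask, and gateway below are placeholders; check what `network interface show` reports on your simulator first):

```shell
# Placeholder values: substitute your admin vserver name and your LAN's
# addressing.  Run from the clustershell.
network interface modify -vserver <cluster_name> -lif cluster_mgmt \
    -address 172.16.180.50 -netmask 255.255.255.0

# If the default route needs to change as well (placeholder gateway):
network route create -vserver <cluster_name> -destination 0.0.0.0/0 \
    -gateway 172.16.180.1
```

After that, the cluster management address is reachable from anything on that subnet (or beyond it, if the gateway routes for you).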
It's a bit easier to do on ESX, where you can split the traffic out onto different VLANs, but it should work on Workstation if you bridge all the adapters and have the hosts hardwired into the same network.