I finally set up the cluster with 1 node, adding 2 extra network adapters as instructed by the tutorial video.
When I set up the 2nd node (added 2 extra network adapters as well), at the end it prompted me to enter the cluster name, which I did. It then showed it trying to join, went through network setup, and then looped back and asked me again whether to create or join a cluster. Please see the attachment. So I am stuck in a loop.
Is there any solution for this? This is the 2nd time it has happened. The first time, I thought I had done something wrong, so I started over from scratch.
What is the cluster IP, and can I find it from the existing single-node cluster?
If I stop the prompted questionnaire, at one point it asks me to enter a cluster IP to recognize. Which cluster IP should I use? The cluster with the other node is running fine.
Also, based on the video, I was supposed to change the default subnet after adding the 2 extra adapters; however, since VMware Player doesn't have the Virtual Network Editor, I skipped this step.
Odds are that's not OK. Since you can't change the subnet of the NAT network in Player, you have to figure out what subnet it's actually configured for and adjust your IP settings accordingly during the cluster setup script.
The "NAT" network is really VMnet8, so check your host machine's ipconfig /all for the IP range/netmask/gateway of VMnet8. Use IPs in that range instead of what's called out in the simulator documentation, and you won't need the Virtual Network Editor. Just assign nic0 and nic1 to "host-only" and the rest to "NAT", and it should all fall into place.
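If you want to sanity-check your addresses before running cluster setup, a quick sketch like this works. The subnet (192.168.14.0/24) and IPs below are just examples; substitute whatever ipconfig /all reports for VMnet8 on your host.

```python
# Hypothetical check: confirm the management IPs you plan to use fall
# inside the subnet that `ipconfig /all` reports for VMnet8.
import ipaddress

# Example subnet - replace with your actual VMnet8 range.
vmnet8 = ipaddress.ip_network("192.168.14.0/24")

candidates = {
    "cluster_mgmt": "192.168.14.101",
    "node1_mgmt": "192.168.14.91",
}

for name, ip in candidates.items():
    ok = ipaddress.ip_address(ip) in vmnet8
    print(f"{name}: {ip} {'OK' if ok else 'NOT in VMnet8 subnet'}")
```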
I have already run ipconfig /all; the subnet on my Windows host is 192.168.14.x, so I used 192.168.14.101 for the cluster management IP and 192.168.14.91 for the node1 management IP, as suggested by the simulator doc.
However, the IPs for e0a and e0b on node1 in the already-created cluster were configured by the setup process with default values; the same goes for node2.
Please find attached the output of "net int show" on node1 in the cluster.
That looks fine; the cluster LIFs on e0a/e0b are auto-assigned in the 169.254.x.x link-local network. If the join fails, on subsequent attempts those IPs have already been generated, so you can just accept the previous values.
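If you want to confirm that an address you see on e0a/e0b is one of those auto-assigned link-local addresses (rather than something you need to supply yourself), a one-liner does it. The address below is a made-up example:

```python
import ipaddress

# 169.254.0.0/16 is the link-local range the cluster LIFs are
# auto-assigned from; is_link_local checks membership in that range.
addr = ipaddress.ip_address("169.254.77.10")  # example value
print(addr.is_link_local)  # True
```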
Changing the serial needs to be done during the first power on. Adding nics should be done before the first power on, so it sounds like you've done everything correctly.
You'll likely have problems booting since all the PCI devices now have slot assignments. So after adding the bridge code, when you boot you'll get an error about not enough pci slots. Find the device(s) referenced in the error and change their slot ID to -1 by editing the vmx file. Then you will probably be able to boot.
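For reference, the .vmx edit might look something like the fragment below. The device name (ethernet4 here) is only an example; use whichever device the boot error actually names. Edit the file while the VM is powered off.

```
ethernet4.pciSlotNumber = "-1"
```

Setting the slot number to -1 lets VMware pick a free PCI slot on the next power-on instead of failing on the stale assignment.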
It seems alright now. I added 2 extra NICs on node2 and was able to bring it and the cluster up.
The problem now is that I cannot PuTTY in to, or even ping, the cluster mgmt IP, 192.168.14.101. Please see my screenshots of "net int show" on both node1 and node2; they both show 192.168.14.101 as THE IP for cluster mgmt. I can ping the node mgmt IPs, 192.168.14.91 and .92, as well as .1.
The only possible cause I can think of is that the same IP was used for an already-removed cluster. However, the old cluster has already been deleted; plus, I already ran "putty -cleanup" and removed all hosts from the Registry.
Could you please share detailed commands to fix the issue? Otherwise, I am thinking of rebuilding.
Based on the "Step by Step Installation" doc, the gateway should be 192.168.x.1 (x = 14 here) when it prompts me to enter the cluster mgmt IP. So I am not sure whether, next time I rebuild, I should use .2 or .1, or if it matters.