Assigning IPs to the new NICs should be done within ONTAP, either from System Manager or the CLI. Assigning the NICs to a virtual network should be doable from within Player, or by editing the VMX directly. Managing the virtual network may be a problem: they've apparently removed the virtual network editor in recent versions of Player. There may be a workaround, but it involves harvesting some components from a VMware Workstation install: https://communities.vmware.com/message/2290748
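For the ONTAP side, a rough sketch (the vserver, port, and address values here are made up; adjust them to your environment). On a clustered ONTAP vsim, from the clustershell:

network interface create -vserver svm1 -lif data_lif2 -role data -home-node vsim-01 -home-port e0e -address 192.168.0.150 -netmask 255.255.255.0
network interface show -vserver svm1

On a 7-mode vsim the equivalent would be ifconfig plus an /etc/rc entry, e.g. ifconfig e0e 192.168.0.150 netmask 255.255.255.0.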
The 2nd cluster will need 3 additional management IPs. Assuming it's connected to the same virtual network as the first, just pick additional unused addresses on that subnet.
Just like the first one. Only one cluster base key is published, so they'll have the same cluster serial, but that shouldn't impact any functionality. When you build node 1, pick create and use a different cluster name. When you build node 2, pick join, and make sure you type in the name of the 2nd cluster.
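Roughly, the cluster setup wizard walks you through it like this (the cluster name is just a placeholder and the exact prompt wording varies a bit between releases):

On node 1:
Do you want to create a new cluster or join an existing cluster? {create, join}: create
Enter the cluster name: cluster2

On node 2:
Do you want to create a new cluster or join an existing cluster? {create, join}: join
Enter the name of the cluster you would like to join: cluster2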
The differences are subtle:
- They have different default serial numbers in the loader
- The ESX version has no serial ports
- The NICs in the ESX version are all bridged and have networkName populated
Were you expecting to see something different?
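For example, in the ESX flavor's VMX each NIC is tied to a named port group, along the lines of (values are just illustrative):

ethernet0.networkName = "VM Network"

while the Workstation/Player flavor uses connectionType entries instead, e.g.:

ethernet0.connectionType = "hostonly"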
Neither of those links is specific to your version of ONTAP. The 2013 link is for ONTAP 8.2, the 2010 link is for ONTAP 8.0. If you check the docs for 8.1.4: https://library.netapp.com/ecmdocs/ECMP1136382/html/html/210-05627/GUID-09E9890C-EFD4-47EA-9539-A658FD251C93.html you will see this note: "Beginning in Data ONTAP 8.1.1, you can move a volume from a 32-bit aggregate to a 64-bit aggregate. However, you cannot move a volume from a 64-bit aggregate to a 32-bit aggregate." Hope that helps.
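Once you're on 8.1.1 or later, the move itself is just a vol move; a sketch with made-up volume and aggregate names:

7-mode: vol move start vol1 aggr64
clustered ONTAP: volume move start -vserver svm1 -volume vol1 -destination-aggregate aggr64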
I've seen that symptom in the 8.2.1 7-mode sim, and the fix in that instance is "options httpd.admin.enable on" and "options httpd.admin.ssl.enable on". Haven't seen it in cDOT, but it sounds similar. Check options at the cluster shell; httpd.admin.enable should be on. Also check these:
system services web show
system services web node show
vserver services web show
Also check the IP settings of the host interface on VMnet8. Usually .1 and .2 go to the host and the vmnet gateway service. It could simply be an IP conflict on that subnet.
Posts are searchable, but when I look at the simulator discussions page I see posts from the last couple of days, then posts from 2013, and nothing in between. http://community.netapp.com/t5/Simulator-Discussions/bd-p/simulator-discussions
Neat. Is that in Fusion? Best guess is Fusion can't cope with seeing duplicate MACs. I know single-mode vifs work on ESX; I have one configured that way now. In this example from a sim on ESX, I have e0c & e0d in vif1. Both ports now have the virtual MAC of the vif:

vsim101> ifconfig -a
e0a: flags=0xe48867<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.126.101 netmask 0xffffff00 broadcast 192.168.126.255
        partner e0a (not in use)
        ether 00:50:56:91:cc:65 (auto-1000t-fd-up) flowcontrol full
e0b: flags=0xe08866<BROADCAST,RUNNING,MULTICAST> mtu 1500
        ether 00:50:56:91:d1:ea (auto-1000t-fd-up) flowcontrol full
e0c: flags=0x8e48867<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        ether 02:50:56:91:cc:65 (auto-1000t-fd-up) flowcontrol full
        trunked vif1
e0d: flags=0x8e08867<BROADCAST,RUNNING,MULTICAST> mtu 1500
        ether 02:50:56:91:cc:65 (auto-1000t-fd-up) flowcontrol full
        trunked vif1
e0e: flags=0x4e48867<UP,BROADCAST,RUNNING,MULTICAST,NOWINS> mtu 1500
        inet 192.168.122.101 netmask 0xffffff00 broadcast 192.168.122.255
        partner e0e (not in use)
        ether 00:50:56:91:83:69 (auto-1000t-fd-up) flowcontrol full
e0f: flags=0xe48867<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.122.103 netmask 0xffffff00 broadcast 192.168.122.255
        partner e0f (not in use)
        ether 00:50:56:91:c7:73 (auto-1000t-fd-up) flowcontrol full
e0g: flags=0xe48867<UP,BROADCAST,RUNNING,MULTICAST,ACP_PORT> mtu 1500 PRIVATE
        inet 192.168.2.237 netmask 0xfffffc00 broadcast 192.168.3.255 noddns
        ether 00:50:56:91:b2:01 (auto-1000t-fd-up) flowcontrol full
e0h: flags=0xe00864<RUNNING> mtu 1500 PRIVATE
        ether 00:50:56:0e:50:9a (auto-1000t-fd-up) flowcontrol full
lo: flags=0x1b48049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 9188
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.0.0.1
losk: flags=0x40a400c9<UP,LOOPBACK,RUNNING> mtu 9188
        inet 127.0.20.1 netmask 0xff000000 broadcast 127.0.20.1
vif1: flags=0x20e48863<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.121.11 netmask 0xffffff00 broadcast 192.168.121.255
        partner vif1 (not in use)
        ether 02:50:56:91:cc:65 (Enabled interface groups)
vsim101>
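For reference, a single-mode ifgrp like that one can be built from the node shell with something along these lines (ports and address taken from the example above; adjust to your sim):

ifgrp create single vif1 e0c e0d
ifconfig vif1 192.168.121.11 netmask 255.255.255.0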
Not sure what you're trying to accomplish, but ls has very limited utility. As you found, it's available in the node shell, but all you can really do with it is ls /etc.
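If you just want a quick listing without dropping into the node shell, something like this works from the clustershell on a cDOT sim:

system node run -node local ls /etc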
Yes, before first boot I just delete IDE1:1 and create a new blank virtual disk on IDE1:1 (careful not to make a SCSI disk). Thin provisioned is usually fine. There were a couple of builds where it didn't work (8.2.1RC?), but usually that's all there is to it. Option 4 zeros all disks and creates a root aggregate. 44/4a zeros all disks if required and creates a root aggregate. Since vsim_makedisks marks all the sim disks as prezeroed, it saves you a wall of dots on a new sim install. Note it's not particularly useful in the real world, because ONTAP is factory installed if the system goes through manufacturing, and new shelves ship in a non-zeroed state if they come from distribution.
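If you prefer to create the replacement disk from the command line rather than the Player UI, something like this should do it, assuming you have vmware-vdiskmanager available (it ships with Workstation); the size and file name are just examples, -a ide keeps it off the SCSI bus, and -t 0 makes it a growable (thin) single file:

vmware-vdiskmanager -c -s 250GB -a ide -t 0 DataONTAP-sim-new.vmdk

Then attach the new vmdk as IDE1:1 in the VM settings or the VMX.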
Glad you finally got it.

1. I typically just delete the sim vmdk that comes with the simulator. In most cases, a blank vmdk in IDE1:1 will be partitioned and formatted during that initial boot. I do have a test host where I just leave multiextent enabled, along with some other VSA-related optimizations, but a blank vmdk is my preferred way of dealing with it. Most of my sim scenarios don't need 250 GB anyway, and this lets me use a smaller disk where appropriate.

2. Here are the places the sim stores persistent data after initial boot:
On IDE0:0 (virtual CF card): loader environment
On IDE0:1 (/var): the var file system
On IDE1:0 (VNVRAM/misc): nvram, swap, core
On IDE1:1 (/sim): sim disks, sim tapes, lock files

You could scrub it and reuse a sim, but it's really not worth the effort. It would be something like:
boot menu: systemshell: rm everything under /sim
boot menu: wipeconfig (to clear /var)
boot cycle once for the wipeconfig
loader prompt: setenv a bunch of stuff back to defaults (no set-defaults in sim loader)
Then boot and run option 4

If I'm running a lot of scenarios on a particular build I'll make a custom OVF that has autoboot off and everything else virgin, so I can crank out sim instances without all the hassles. I'm also a big fan of 44/4a in the simulator.
You installed the 7-mode vsim, so that's the expected behavior. If you want to create a cluster, use the cluster-mode vsim (file name ends in -Cm).
If it's mailbox related, try destroying the mailboxes:
Reboot, Ctrl-C for the boot menu
Option 5 - maintenance mode boot
mailbox destroy local
mailbox destroy partner
mailbox destroy all
halt
Hit all the nodes, then power them back up.
Looks like your vmdk is still in the 2 GB sparse format. Support for that format was dropped many ESX versions ago, but NetApp still ships the sim in the old format. SSH to your ESX host and run this command, then try again:
vmkload_mod multiextent
See this KB: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2036572
According to Oracle:

Q: If I use 10 GbE instead of InfiniBand, will I get the same performance?
A: When using a single 10 GbE link, the system will run at link speed, which is 1GB/sec. When aggregating multiple 10 GbE links together, you can run at drive speed on the appliance, which will match InfiniBand throughput.

Source: http://www.oracle.com/us/products/servers-storage/storage/nas/zfs-backup-appliance/zfs-backup-appliance-faq-1579094.pdf
Depending on the host config, a jumbo MTU may not work, but you should see MTU-related alerts in that case. vSims default to a 1500 MTU for that reason. If you're in the brewery, the hosts are probably configured to handle jumbo frames.
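To check, and if needed change, the port MTU on a clustered sim (node and port names below are placeholders):

network port show -fields mtu
network port modify -node vsim-01 -port e0c -mtu 9000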
Troubleshooting cluster join is interesting. Cluster join can fail for a bunch of reasons, most of them problems on the cluster network. All cluster LIFs must be able to communicate with all other cluster LIFs, MTUs must match, and packets can't be fragmented by the switch. In the sim, use MTU 1500 and put all the cluster network ports on the same isolated network.

Check the vswitch/virtual network setup. Cluster ports should be on the cluster network (which you may need to create), or on the host-only net if you run them on Workstation. e0a/e0b are the default cluster network ports on the sims. During the failed attempt it should have auto-assigned IPs to the cluster LIFs. On the node that failed to join, can you ping the cluster LIFs on the working node? Were there any other errors or warnings during the join?
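A couple of quick checks from the clustershell on the node that did come up (the node name is a placeholder; ping-cluster is an advanced-privilege command):

network interface show -role cluster
set -privilege advanced
cluster ping-cluster -node <nodename>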