We're currently building a Solaris 10 VM to take over for an existing, failing Oracle server. That server holds a 300+ GB database on an iSCSI LUN, currently attached to a physical server. My biggest concern is getting enough throughput back to the NetApp from VMware, and my lack of knowledge in all three areas is biting me in the rear right now. (I'm mostly a switching/Windows/DOS guy. Yes, DOS.)

The current setup has an LACP trunk of four 1 Gb NICs from the server to an HP switch, a trunk of four 1 Gb NICs to each ESXi server, and an LACP trunk of four 1 Gb NICs to each NetApp controller.

My question is basically this: in the VM, I currently have two VMXNET 3 adapters installed. Will one VMXNET 3 adapter handle the amount of traffic that the four trunked NICs can? Or do I need to somehow create four NICs on the VM and trunk them together as well?

Thanks in advance for any help you can provide.
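For context, here's the back-of-envelope math I've been doing on the link speeds involved (a rough sketch assuming ideal line rate, so real iSCSI throughput will be somewhat lower after TCP/IP and iSCSI overhead):

```python
# Back-of-envelope bandwidth figures for the 4 x 1 Gb trunks.
# Assumes ideal line rate; actual iSCSI throughput will be lower
# due to TCP/IP and iSCSI protocol overhead.

GBIT = 10**9  # bits per second on one 1 Gb NIC

def mb_per_s(bits_per_s):
    """Convert a bit rate to decimal megabytes per second."""
    return bits_per_s / 8 / 10**6

single_link = mb_per_s(GBIT)         # one 1 Gb link
lacp_aggregate = mb_per_s(4 * GBIT)  # four links trunked together

print(single_link)      # 125.0 MB/s per link
print(lacp_aggregate)   # 500.0 MB/s aggregate across the trunk

# Rough time to move the 300 GB database at each rate, in hours.
# (Illustrative only; a live database won't stream at line rate.)
db_bytes = 300 * 10**9
hours_one_link = db_bytes / (single_link * 10**6) / 3600
hours_trunk = db_bytes / (lacp_aggregate * 10**6) / 3600
print(round(hours_one_link, 2))  # ~0.67 h on one link
print(round(hours_trunk, 2))     # ~0.17 h across four links
```

What I'm unsure about is whether a single flow (like one iSCSI session) can actually use the aggregate figure, since I gather LACP hashes each flow onto one physical link.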