I have inherited a situation where all storage traffic is on VLAN10. I am rolling out VMware SRM, and as part of this project I am rebuilding our hosts on ESXi 5.1 with host profiles and a vDS. Each host will have four vmnics dedicated to storage, and I have VMDKs on both NFS and iSCSI.
Where I am uncertain is the configuration. I know that best practice for iSCSI alone is to create vmkernel ports, each with its own IP and a single active adapter (the others set to unused). I understand that NFS will run across the vmk's using the IP range of the NFS storage I am connecting to. So should I create:
One vSwitch with four vmk's, one IP each, all bound to the software iSCSI adapter?
Two vSwitches: one with two vmk's bound to the software iSCSI adapter, and the other set up for NFS? If I were to do this, how would NFS traffic know to use the second vSwitch?
Or a single vSwitch with two adapters bound to the iSCSI adapter and two left unassigned?
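For what it's worth, the port-binding part of any of these options looks roughly like this from the CLI. This is only a sketch: the vmk names (vmk2/vmk3) and the software iSCSI adapter name (vmhba33) are assumptions, so check your own with the list commands first:

```shell
# Find your actual names before running anything:
#   esxcli network ip interface list
#   esxcli iscsi adapter list

# Bind two storage vmkernel ports to the software iSCSI adapter
# (each vmk must have exactly one active uplink for binding to work)
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3

# Verify the bindings
esxcli iscsi networkportal list --adapter vmhba33
```

NFS vmk's are not bound this way; they are just regular vmkernel ports, which is why the subnet/routing question below matters.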
I've tried googling this, but it's too specific a case to find an article on.
I know the best thing to do would be to create another VLAN so I can separate iSCSI from NFS and create separate vSwitches. However, we are getting new filers in January and I am trying to put off major network changes until then, so I am just looking for an interim solution.
Those are both valid options. Do you use the VSC for datastore provisioning? In VSC's config file you can set which subnets to use for iSCSI and NFS. You could keep a single VLAN but use different subnets for the two protocols, giving you two different VMkernel IPs and isolating the two protocols that way. Just a thought...
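To illustrate why the separate-subnets idea answers the "how would NFS know to use the other vSwitch" question: the VMkernel sends traffic out the vmk whose subnet contains the target IP, so if the iSCSI targets and the NFS filer sit on different subnets, each protocol naturally uses its own vmk even on one VLAN. A rough sketch of that selection logic (all addresses and vmk names are made up for illustration):

```python
import ipaddress

# Hypothetical vmkernel ports on one VLAN but two subnets
vmks = {
    "vmk2": ipaddress.ip_interface("192.168.10.21/24"),  # iSCSI subnet
    "vmk3": ipaddress.ip_interface("192.168.20.21/24"),  # NFS subnet
}

def pick_vmk(target_ip):
    """Return the vmk whose subnet contains target_ip,
    mimicking a directly-connected route lookup."""
    target = ipaddress.ip_address(target_ip)
    for name, iface in vmks.items():
        if target in iface.network:
            return name
    return None  # no connected route; would fall back to the default gateway

print(pick_vmk("192.168.10.50"))  # iSCSI target -> vmk2
print(pick_vmk("192.168.20.50"))  # NFS filer   -> vmk3
```

This is why, without distinct subnets, both protocols end up sharing whichever vmk matches first, and why the interim single-VLAN/two-subnet approach still gives you clean separation.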