I am in the initial stages of deploying NFS datastores in my VMware environment. In keeping with what I consider a good practice, I have set up a dedicated, unrouted IP storage VLAN explicitly for NFS and iSCSI and assigned VMkernel and NetApp ports to it. The NFS portion is working fine, but the Virtual Storage Console is unable to provision storage on our NetApps because it's apparently trying to set up the connection on the standard production network. Is there a way to work around this issue? Will the other features of the VSC work, or am I basically shooting myself in the foot here?
Do the ESX hosts' VMkernel interfaces and the NetApp controller have IP addresses on the same network (i.e., in the same subnet) on that VLAN? If not, you may want to set that up. If yes, then you can look to exclude the interfaces and IP addresses that you don't want used from the Resources link within the Provisioning and Cloning tab. If they can't be excluded, then refer to this thread: https://communities.netapp.com/thread/20885
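As a quick sanity check on the "same subnet" point, you can verify that a VMkernel port address and the filer's NFS interface both fall inside the storage VLAN's network. The addresses and the /24 mask below are illustrative placeholders, not values from this environment:

```python
# Check that a VMkernel port and a filer interface share a subnet.
# All addresses here are made-up examples; substitute your own.
import ipaddress

storage_vlan = ipaddress.ip_network("192.168.50.0/24")

vmkernel_ip = ipaddress.ip_address("192.168.50.11")  # ESX VMkernel port
filer_ip = ipaddress.ip_address("192.168.50.20")     # NetApp NFS interface

# Both must be inside the storage VLAN's subnet for NFS traffic
# (and VSC provisioning) to stay on the dedicated network.
print(vmkernel_ip in storage_vlan and filer_ip in storage_vlan)  # → True
```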
Yes, the VMKernel interfaces are on the same subnet. The problem appears to be that the VSC itself cannot communicate with the filers using that subnet because the vCenter server does not have access to it. I've opened a ticket with NetApp support to see if this issue can be addressed.
So the VSC needs some means to talk to the NetApp controller. Does the controller have access to a management network? Regardless, the VSC needs access to manage the controller. It will, however, try to create the NFS and iSCSI datastores on the same VLAN as the ESX hosts' VMkernel ports.
As a more advanced option, you can even isolate NFS from iSCSI by editing the VSC preferences XML file in the install directory and indicating which subnet you want NFS on and which you want iSCSI on (if you want them separated).
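For the sake of illustration, such an entry might look roughly like the fragment below. The file name, element names, and keys here are hypothetical placeholders, since they vary by VSC version; check the actual preferences file in your install directory and the NetApp documentation for the exact syntax before editing:

```xml
<!-- Hypothetical sketch only: real key names differ by VSC version. -->
<preferences>
  <!-- Keep NFS datastore traffic on the dedicated storage VLAN -->
  <entry key="nfs.subnet" value="192.168.50.0/24"/>
  <!-- Put iSCSI on its own subnet if you separate the two protocols -->
  <entry key="iscsi.subnet" value="192.168.60.0/24"/>
</preferences>
```

As with any hand-edit of a vendor preferences file, keep a backup copy and restart the VSC service afterwards so the change takes effect.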
I was able to get the VSC to talk to the filers by editing the XML file and forcing it to use the NFS network. Provisioning and Cloning seems to be working, but the Backup and Restore functionality is not.