VMware Solutions Discussions
Hi,
I'm looking for some advice on how I can add iSCSI to my current environment. I thought this was going to be easy; however, it would appear not.
Here's what I have so far.
--------------------------------------------------------------------------------------------
5x physical hosts, each with...
VMware Standard licensing (so no vDS, LACP, or load balancing based on physical NIC load)
2x NICs for guest connectivity (irrelevant here)
2x NICs for VMkernel traffic, load balanced based on IP hash (we have other datastores, which I won't go into, that effectively make use of the hash)
2x stacked Cisco 3750s, set to etherchannel the above pairs of connections.
NetApp 2220 with 4x NICs forming a single VIF with one IP address, etherchanneled via the 3750s; VLAN tagging is performed at the port level on the switch.
All the hosts mount the above NetApp as an NFS datastore.
--------------------------------------------------------------------------------------------
The Problem
I have licences for SnapDrive and the SnapManager products, so I need to move towards an iSCSI-based target to effectively utilise these products with Exchange and MSSQL.
The LUNs need to come from the same NetApp filer as the NFS mounts.
I don't want to connect the LUNs directly from the Windows machines, as that would effectively carry storage traffic over the guest connectivity adapters.
What options do I have?
I believe I can connect the LUNs to VMware and then pass them through to the Windows machines, and SnapDrive is OK with this?
Assuming the answer to the above is yes... how can I connect the LUNs to VMware? I believe I have the following roadblocks.
1. My current VMkernel arrangement has two NICs load balanced via etherchannel. Is it correct that I cannot bind iSCSI traffic over this interface?
2. Because the NFS and iSCSI datastores are on the same subnet, I believe I cannot create another vSwitch and add another VMkernel port, as it would be on the same subnet as my existing VMkernel ports. Even though I could bind my iSCSI traffic down that link, I would have no way of controlling which path the NFS traffic takes.
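For what it's worth, the usual answer to roadblock 1 is to give the software iSCSI initiator its own VMkernel ports with a strict 1:1 port-to-uplink mapping, since port binding is incompatible with etherchannel/IP-hash teaming. A rough esxcli sketch, assuming ESXi 5.x; the uplink names (vmnic4/vmnic5), vSwitch1, vmk2/vmk3, the software iSCSI adapter name vmhba33, and the 192.168.20.0/24 subnet are all made up for illustration:

```shell
# Hypothetical sketch: dedicated iSCSI VMkernel ports with 1:1 NIC binding.
# Create a separate vSwitch for iSCSI with two portgroups, one per uplink.
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic4
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic5
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-A
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-B

# Override failover so each portgroup has exactly one active uplink.
esxcli network vswitch standard portgroup policy failover set -p iSCSI-A -a vmnic4
esxcli network vswitch standard portgroup policy failover set -p iSCSI-B -a vmnic5

# Add a VMkernel port to each portgroup, then bind both to the software
# iSCSI adapter so it multipaths across them.
esxcli network ip interface add -i vmk2 -p iSCSI-A
esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.20.11 -N 255.255.255.0
esxcli network ip interface add -i vmk3 -p iSCSI-B
esxcli network ip interface ipv4 set -i vmk3 -t static -I 192.168.20.12 -N 255.255.255.0
esxcli iscsi networkportal add -A vmhba33 -n vmk2
esxcli iscsi networkportal add -A vmhba33 -n vmk3
```

The switch ports carrying those two uplinks would need to be plain access/trunk ports, not members of the etherchannel.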
I'm a one-man team and I need some outside perspective on this issue.
Thanks
Matt Gosnell
You could add additional NICs to your VMs and attach them to the vSwitch carrying the VMkernel traffic. Usually iSCSI from within the virtual machine is the least hassle; otherwise you would use iSCSI RDMs.
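If you go the in-guest route, the filer side is just a LUN mapped to the guest's initiator IQN (SnapDrive can also create and connect LUNs itself once the initiator is logged in). A hedged 7-Mode sketch - the volume path, igroup name, size, and IQN below are all made-up examples:

```shell
# Hypothetical 7-Mode commands on the 2220: carve a LUN and map it
# to the Windows guest's software initiator.
lun create -s 500g -t windows_2008 /vol/sql_vol/sql_lun
igroup create -i -t windows sql_igroup iqn.1991-05.com.microsoft:sqlserver01
lun map /vol/sql_vol/sql_lun sql_igroup
```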
Thanks Thomas, that's not a bad idea. At the moment my guest subnet is the same as my storage subnet, but I'm about to separate them, so this should be possible. Would I have to make sure that each subnet was non-routable to the other for this setup to work, or will Windows just use the lowest-cost path to the destination? I have other physical servers which need to reach both of these VLANs, so isolating them is not possible.
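On the routing question: Windows prefers a directly connected (on-link) route over the default gateway, so as long as the storage NIC has an address on the storage subnet and no default gateway of its own, storage traffic stays on that NIC even if the two VLANs remain routable to each other. A quick sanity check from inside a guest (the 10.0.20.0/24 storage subnet here is an assumption):

```shell
:: Hypothetical check: the storage subnet (assumed 10.0.20.0/24) should show
:: an on-link route via the storage NIC, not the LAN NIC's default gateway.
route print -4 | findstr 10.0.20
```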
Up until now I've been using an NFS datastore, and one of the great things about this is the ability to use Storage vMotion. When I move to LUNs, does it matter which way I implement them, or am I going to lose Storage vMotion in every scenario?
(apologies for the direct vmware question in a netapp forum)
Regardless of whether it's NFS, iSCSI, FC or FCoE, you can always Storage vMotion.
As far as I know, and as shown in TR-4003, it is possible to use SMSQL with VMDK disks on an NFS volume - just take a look at the matrix.
As for SME, it requires a LUN, but you can use VMDK. If you are planning to use Single Restore, read the best practices for that first.
You can use Storage vMotion and Storage DRS for both LUNs and NFS volumes.
Regards,
VMDK is supported with SMSQL only. SME is NOT supported and will give a VSS error if you try it.