We are currently in the process of migrating physical servers to a Hyper-V cluster backed by NetApp 7-Mode filers. We will be using 10G connections over iSCSI, but unfortunately we don't have a switch to connect the virtual hosts to the controllers.
We have two virtual host nodes in the cluster and a FAS2240 7-Mode dual-controller unit. Because of the lack of storage, we are running in active-passive HA mode: the LUNs and vFilers all run on the active controller. Each virtual host has two links, one to the active controller and the other to the passive controller, to ensure that all nodes keep access to the LUNs in case of a failover.
This works fine if I use the Windows built-in iSCSI initiator, since I don't have to specify the initiating IP address. When a controller failover occurs, running VMs are not affected, because Windows uses the second IP/link to connect to the (formerly) passive controller.
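To make the difference concrete, here is a minimal conceptual sketch (not the actual Windows initiator code) of pinned vs. "Default" initiator behaviour in a switchless, direct-attached setup like this one. The NIC and controller names are illustrative assumptions, not values from the posts above:

```python
# Conceptual model: in a switchless setup, each host NIC is cabled to
# exactly one controller, so a pinned initiator IP only ever has a
# physical path to that one controller.
REACHABLE = {
    "nic1": {"controllerA"},   # NIC1 is cabled only to controller A
    "nic2": {"controllerB"},   # NIC2 is cabled only to controller B
}

def connect(initiator, serving_controller):
    """Try to establish an iSCSI session.

    initiator: a specific NIC name, or None for 'Default' (let the OS
    try every NIC until one has a physical path to the serving node).
    """
    if initiator is not None:                     # pinned initiator IP
        return serving_controller in REACHABLE[initiator]
    # 'Default': fall back across all NICs
    return any(serving_controller in targets for targets in REACHABLE.values())

# Normal operation: controller A serves the LUNs.
assert connect("nic1", "controllerA")     # pinned to NIC1 works
assert connect(None, "controllerA")       # 'Default' works too

# Takeover: controller B now serves the LUNs, but NIC1 has no physical
# path to B, so a session pinned to NIC1 fails...
assert not connect("nic1", "controllerB")
# ...while 'Default' succeeds via NIC2.
assert connect(None, "controllerB")
```

This is exactly the failure mode described below with SnapDrive: once the initiator IP is pinned, the fallback in the last line is no longer available.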
The issue surfaced when I implemented SnapManager and SnapDrive. SnapDrive seems to require me to specify an initiating IP address, but that IP address won't have access to the passive controller when a failover occurs.
My question is: is there any way around this? And if I don't use SnapManager, what other backup/restore options would I have for the VMs?
In a failover, the surviving node takes over the IP addresses as well, including the IP address SnapDrive is using to connect to the filer, so from the host's perspective (including SnapDrive on that host) nothing changes.
Thanks for the reply. Because of the lack of a switch, our setup is: virtual host 1 is connected directly to controller A, from NIC1 (IP x.x.x.1) on the host to e1a (IP x.x.x.11) on controller A. Virtual host 1 is also connected to controller B, from NIC2 (IP x.x.x.2) on the host to e1a (IP x.x.x.22) on controller B. In SnapDrive an initiating IP address has to be selected, which is x.x.x.1 in my case. In the case of a failover, the initiating IP should switch to x.x.x.2 on NIC2, because that is the NIC physically connected to controller B. This happens seamlessly if I use the Windows iSCSI initiator and leave the initiator IP address as "Default". However, SnapDrive forces you to choose an initiator IP address.
I assume we are referring to the iSCSI link only, and not the link/IP that SnapDrive uses to send commands to the filer (RPC, HTTP/S).
- As you mentioned, SnapDrive lets you choose an initiator, and it also creates a new initiator group on the filer or lets you choose an existing one. During a takeover, all ports/links of the down filer are served by the surviving node. If you had an iSCSI connection to node A at IP 192.168.0.11, node B will serve data from its own ports but as IP 192.168.0.11, and the host won't even notice that it is coming from a different filer (obviously, you'd need a physical connection to both heads). In an HA pair there is the concept of 'single-image', in which the host sees the two filers in the HA pair as one unit/entity. If you check the WWNN on both filers you will see that they are the same; only the WWPNs are different.
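The takeover behaviour described above can be sketched as a tiny model: the surviving node starts answering on every portal address that belonged to the failed node, so the host keeps talking to the same IP. Addresses and node names below are illustrative assumptions; this models a *switched* HA pair where both heads are reachable from the host:

```python
# Which node currently serves each iSCSI portal address.
portals = {"192.168.0.11": "nodeA", "192.168.0.12": "nodeB"}

def takeover(failed, survivor, portals):
    """Survivor starts serving every portal address of the failed node."""
    return {ip: (survivor if node == failed else node)
            for ip, node in portals.items()}

after = takeover("nodeA", "nodeB", portals)
assert after["192.168.0.11"] == "nodeB"   # same IP, now served by node B
# The host still connects to 192.168.0.11, so nothing changes for it.
```

The catch in the switchless setup discussed here is the parenthetical above: the host needs a physical path to the surviving head for that takeover to be transparent.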
- In SAN (FCP, iSCSI) you should always use MPIO. Once it is configured on the host, SnapDrive will let you pick multiple initiators.
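For readers unfamiliar with it, the idea behind MPIO's failover policy can be reduced to a few lines: several sessions (paths) lead to the same LUN, and I/O moves to a surviving path when one goes down. This is a conceptual sketch only, not the Windows MPIO DSM; the path names are made up:

```python
def active_path(paths):
    """paths: list of (name, is_up) tuples in priority order.
    Return the first healthy path, or None if every path is down."""
    for name, up in paths:
        if up:
            return name
    return None

paths = [("nic1->e1a", True), ("nic2->e1a", True)]
assert active_path(paths) == "nic1->e1a"
paths[0] = ("nic1->e1a", False)           # the path to one head goes down
assert active_path(paths) == "nic2->e1a"  # I/O fails over to the other
```

With MPIO in place, SnapDrive can be given both initiators, so neither one is a single point of failure.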
Thanks again for your reply. Yes, we are talking about the iSCSI link here.
- I understand what you are saying, but the initiator IP address configured to connect to controller A has no physical path to controller B. So when a failover occurs, the host cannot get to controller B. Whereas with the default Windows iSCSI initiator implementation, you don't have to specify an IP address; I've tested this and it works in a failover. But now I'm worried that something else won't work properly, or is no longer supported by NetApp, because I'm bypassing SnapDrive to connect the LUNs.
- There are two NICs on the virtual host, and I only have one link from the host to each of the two controllers. Correct me if I'm wrong, but I don't think I can use MPIO in my implementation?