ONTAP Hardware
Hi All,
I just deployed an AFF C250 for SAN use at one of my customers, and I need some technical advice on my current setup. After I set up the box and presented the iSCSI LUN to the VM, I ran failover testing, and the LUN went missing on the server.
Here is my current NetApp C250 setup and physical connections.
Here is a summary of the C250 configuration and an overview of the current environment.
Customer network overview and connections
Failover testing observations
Before failover testing (Node A shut down)
After failover testing (Node A shut down)
Based on my current setup, connections, and testing, why is the LUN mapped to the server missing even though, when I check in NetApp System Manager, the volume and LUN are online and fail over without any issue? I would also appreciate views and advice on the physical connectivity between the NetApp C250 and the Cisco switch, as the redundancy setup there might be the real cause of the missing LUN. I appreciate any views on this matter. Thanks.
Regards,
Zul
Does the VM have multipath configured for both nodes' iSCSI IPs? After the failover, Node B is managing Node A's storage, so the LUN (owned by Node A) is accessed via Node B's IP.
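A quick sanity check from the ONTAP side (the SVM name "svm_iscsi" below is just a placeholder for your own SVM) is to confirm that the SVM has an iSCSI LIF on each node and that the host has a session logged in to both of them:

::> network interface show -vserver svm_iscsi -data-protocol iscsi
::> vserver iscsi session show -vserver svm_iscsi

If the session list only shows a login against Node A's LIF, the host has no path left once Node A is halted, even though the LUN itself is still online on the partner node.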
Hi @Sanaman ,
Thanks for your response. Here are my answers to your queries.
1) Does the VM have multipath configured for both nodes' iSCSI IPs?
- Can you guide me on how to check the multipath configuration on the VM? As informed by the customer, multipath is enabled and configured.
2) After the failover, Node B is managing Node A's storage, so the LUN (owned by Node A) is accessed via Node B's IP
- When Node B is managing Node A's storage, the LUN location details still show it residing in the same aggregate on Node A, right? (as I stated in the summary of the failover testing after shutting down Node A)
For the current connectivity diagram between the NetApp C250 and the Cisco C6300, the cabling from the iSCSI LIFs should be fine even though each node has only one iSCSI LIF using a single port, right? Please advise.
1. It depends on your VM. If the VM is Windows, you can check the iSCSI configuration (targets and properties). If it's Linux/Unix, you will need to run the multipath commands for that OS.
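For example (the exact commands depend on the guest OS; adjust for your environment):

Windows (elevated prompt):
mpclaim -s -d
Get-IscsiSession
Get-IscsiConnection

Linux with dm-multipath and open-iscsi:
multipath -ll
iscsiadm -m session -P 1

You want to see at least two paths per LUN, landing on target portal IPs on both Node A and Node B. If every path points at a single portal IP, the VM is effectively single-pathed and will lose the LUN whenever that node is down.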
2. Node A is down, but its storage has failed over to Node B. Hence the client will access the storage indirectly (the client request goes to the storage via a Node B LIF). When you run "iscsi connection show" you can see how the client is connecting to the storage. One iSCSI IP per node is OK, but it reduces redundancy. We have 2 IPs for clients and 2 IPs per node, so client multipath is 1:2 and our "iscsi connection show" output yields 4 paths per client.
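As an illustration (the SVM name is a placeholder), the full command path in the clustershell is:

::> vserver iscsi connection show -vserver svm_iscsi

Each initiator should show connections to target portal IPs on both nodes. Keep in mind that iSCSI data LIFs do not migrate to the partner during a takeover, so if the host has only ever logged in to Node A's LIF, those connections disappear when Node A is halted and host-side multipath has nothing to fail over to, which would match the missing-LUN symptom you are seeing.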