Thank you for taking the time to read my post. I've been involved in NetApp administration for 2 years and love it. My environment includes NFS, CIFS, and iSCSI, but only 1 implementation of FCP. I'm attempting to learn FCP and MPIO, but need some advice.
The following components make up my lab:
2 - Emulex 9802 cards
1 - Dell 1850 w/Windows Server 2003 sp2
1 - FAS270c
1 - Brocade Silkworm 200E fabric switch
I've read several of the block-management docs. I didn't find a single doc that spelled out MPIO with FCP end-to-end, but from what I could gather, this was my plan (in order):
1. Prepare filer (cluster, fcp, etc)
2. Install HBAs in the win2k3 server and load the Emulex Config Tool and HBAnywhere
3. Install FCP Host Utilities ver 5.3 and applicable hotfix(es)
4. Install DSM ver 3.4 and applicable hotfix(es)
5. Load Snapdrive ver 6.3 and applicable hotfix(es)
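For step 1, the filer-side prep on a 7-Mode system like the FAS270c looks roughly like the console sketch below (hedged: the prompt name and license placeholder are assumptions, and the equivalent commands would be run on both heads):

```
toaster1> license add XXXXXXX      # add the FCP license key, if not already licensed
toaster1> fcp start                # start the FCP target service
toaster1> fcp show adapters        # verify 0c is online in target mode
toaster1> fcp show cfmode          # note the cluster failover mode
```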
The physical configuration looks like this:
Dell1850 w/2-HBAs plugged into 2 ports on the Silkworm 200E
FAS270c plugged into 2 ports on the Silkworm 200E via 0c on both heads
I understand it's not true MPIO with a single FC switch, but it's all I have for a lab at the moment.
After loading all the software, I used SnapDrive to create LUNs on the 270. I chose both WWNNs for MPIO and had SnapDrive manage the igroups. In my mind, MPIO should be set up properly.
When checking under the DSM applet in Computer Management, I see 4 connections to the filer (2 connections through 0c and 2 connections via the vtic/interconnect). The problem is that the state for each disk reads "Active/Unoptimized". To test the disk, I ran an IOMeter test. I tried to hit the disk pretty hard, and during the test I received the following error on my filer's console:
"Sun Feb 27 14:24:50 CST [TOASTER1: scsitarget.partnerPath.misconfigured:error]: FCP Partner Path Misconfigured - Host I/O access through a non-primary and non-optimal path was detected."
When running lun stats -o -i 1, the PARTNER READ KB is fairly significant (about 40% of the primary path).
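A few 7-Mode console commands can help pin down which paths the host is actually logging in on (a hedged sketch; the prompt name is a placeholder):

```
toaster1> fcp show initiators      # which host WWPNs are logged in, and via which target adapter
toaster1> igroup show -v           # igroup type, ALUA setting, and member initiators
toaster1> lun stats -o -i 1        # per-LUN stats, including partner-path (vtic) traffic
```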
Can someone let me know either what I'm doing wrong, what I'm leaving out, or what I might check to verify if my configuration is proper? I didn't do any configuration via HBAnywhere or elxcfg. Any help would be greatly appreciated. Thanks again for your time.
DSM v3.4 ?
Is that the Microsoft DSM (not the NetApp DSM... ) ?
If using the native MS DSM, ensure that ALUA is enabled on the igroup.
When using the NetApp DSM, leave it off ...
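Checking and changing the ALUA setting on an igroup in 7-Mode looks roughly like this (hedged sketch; "my_igroup" is a placeholder for your actual igroup name):

```
toaster1> igroup show -v my_igroup       # look for "ALUA: Yes" in the output
toaster1> igroup set my_igroup alua yes  # enable ALUA (the host needs a reboot to pick it up)
toaster1> igroup set my_igroup alua no   # or disable it, depending on which DSM is in use
```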
I hope this response has been helpful to you.
At your service,
(P.S. I appreciate points for helpful or correct answers.)
Thank you, Eugene, for your response. DSM 3.4 is NetApp's MPIO DSM. As far as I understand, Windows Server 2003 requires a third-party DSM for MPIO with FCP. I can do MPIO with iSCSI without any issues.
How would I even go about checking whether the MS DSM is installed?
Stop me if I'm wrong, but from what I've read, FCP on Windows Server 2003 SP2 requires ALUA when using the Host Utilities and DSM 3.4 with SnapDrive 6.2 and later. I can turn it off and reboot, but the installer for NetApp's DSM says ALUA is required. Does that make sense?
Not sure if I'm mixing something up, but as far as I know, Win2k3 comes with support for multipathing plugins but has no plugin itself, so you need to install the ONTAP DSM. Win2k8 comes with both the support and a plugin, so you can choose to install either the NetApp plugin or the Microsoft one.
As for Win2k3, I always install multipathing first, THEN the Host Utilities, then SnapDrive. So maybe you should try a repair install of your Host Utilities, as they check for the MPIO install and set certain timeouts.
That is what I read as well. The only MPIO DSM that MS ships for 2k3 is for iSCSI; you need a third-party DSM for FCP.
I'll try the host util repair and check back. Thanks again.
What FCP cfmode are you using? You should be on single_image. The multipath driver is not differentiating between active and passive paths, since all of them show as unoptimized (your screenshot).
Host Utilities repair done?
single_image cfmode?
Least Queue Depth load-balancing policy in the ONTAP DSM?
If all of the above is yes, then I'd suggest getting in touch with NetApp support, as the ONTAP DSM isn't properly recognizing the active/passive paths.
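The cfmode check and change can be sketched like this for 7-Mode (hedged: changing cfmode disrupts FCP service, so only do this in a lab, and the exact requirements vary by Data ONTAP release):

```
toaster1> fcp show cfmode                 # should report single_image
toaster1> fcp stop                        # FCP must be stopped before changing the mode
toaster1> fcp set cfmode single_image     # change it if it reported something else
toaster1> fcp start
```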
I wish I could contact NetApp support, but this filer isn't under support. They send me to a customer service rep, who tells me the filer isn't under support and instructs me to contact my sales rep. I really appreciate your help.
The thing I find odd is that with Microsoft's iSCSI MPIO, you can configure multiple paths, and the DSM shows each path correctly while Disk Management shows only one instance of the LUN. With the MS MPIO config, I can actually configure each path, and the DSM makes the LUN appear as a single disk.
I can't find where to configure FCP MPIO. From what I gather, DSM only shows how paths are presented and lets you change how MPIO uses each path.
Where do you actually go to configure MPIO? Is it all automagic?
Did you reboot the Windows host? As you correctly read, the Data ONTAP DSM 3.4 requires ALUA. And Windows requires a reboot to pick up the ALUA state.
Thanks for the post. I rebooted the windows host several times. I recently uninstalled MPIO, Hosts Utilities, and Snapdrive. I restarted the server, then:
1. Installed MPIO, rebooted
2. Installed Host Utilities, rebooted
3. Installed SnapDrive, rebooted (for good measure)
The interesting thing is that I lost my FCP LUNs when I uninstalled the NetApp software. When I reinstalled everything, the LUNs were mapped again. Even more interesting, the MPIO software is showing two different things on two different LUNs:
1. One LUN had an igroup with only one initiator mapped to it. Ever since the reinstall of the software, it shows only one path. It no longer shows the vtic as a path, but the path still shows Active/Unoptimized.
2. One LUN had two igroups, with a single initiator per igroup, mapped to it. Ever since the reinstall of the software, it shows only one path per HBA and no longer shows the vtic as a path. Still Active/Unoptimized.
ALUA is still enabled for both igroups.
* Not sure if it matters a ton, but I used a domain service account instead of the system account for MPIO and SnapDrive. That account is a local administrator on the host server and on each head of the filer.
It makes sense that your LUNs would disappear if you remove the NetApp DSM software. The DSM software sets a registry key to claim LUNs from NetApp storage. Without a DSM claiming the NetApp LUNs, the Windows MPIO component does not know what to do with them.
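If you want to see what the MPIO layer has been told to claim, the supported-device list is readable from the registry (hedged: key path per my understanding of the Windows MPIO framework; verify before relying on it):

```
C:\> reg query HKLM\SYSTEM\CurrentControlSet\Control\MPDEV /v MPIOSupportedDeviceList
```

A NetApp vendor/product string should appear in that list once a DSM has registered to claim those LUNs.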
In the Windows system event log, do you see any events written by ontapdsm? Event 61212 would be written if the DSM does not detect ALUA for a LUN.
The DSM installer should have checked, but have you installed all of the Windows hotfixes specified for your configuration in the NetApp Interoperability Matrix Tool?
Thanks, Greg - that was very helpful to know. I checked my System event logs and found event ID 61212 from when I was experimenting with turning ALUA off. I cleared the event logs and rebooted: no more event ID 61212. The logs look clean, and there don't appear to be any issues there.
In my previous post, I uploaded screenshots showing that I had either 1 or 2 paths (via the HBAs only, because I turned off FCP and brought 0c down on the partner head). Even when I had a single LUN with a single path into the Brocade switch, the DSM showed one connection to the LUN, and it still registered Active/Unoptimized. How is that even possible?!?! Could there be a switch issue or something? As far as I understand, you can zone for security/control, but igroups already give you that. I do not have any zones enabled, so every device plugged into my Brocade can see every other device on any port. The only things plugged in are the 2 HBAs from the Dell server and 0c from each head of the filer.
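For what it's worth, if you did want to rule zoning in or out, a single-initiator zone on the Silkworm would look roughly like this in the FOS CLI (hedged sketch; the zone/config names and WWPNs are placeholders for your actual HBA and 0c ports):

```
switch:admin> zonecreate "dell_hba1_0c", "10:00:00:00:c9:aa:bb:01; 50:0a:09:81:00:00:00:01"
switch:admin> cfgcreate "lab_cfg", "dell_hba1_0c"
switch:admin> cfgsave
switch:admin> cfgenable "lab_cfg"
```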
I'm tempted to grab a 2020 before I stage it and see if I can move the LUNs over to that guy and take the 270 out of the equation. That, or plug the Dell directly into the 270, one HBA per port, and see what happens.
Guys, I realize this post is long, but I really appreciate all the time and effort you've put into helping me along.
PS - My iSCSI MPIO is working perfectly. I have 2 NICs in my Dell, a single VIF with 2 ports on my 270, and an IP alias assigned to the VIF. MPIO shows each path as active, and with IOMeter disk tests I've verified it's working as expected.
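The working iSCSI side described above would have been set up with something like the following 7-Mode sketch (hedged; the VIF name, port names, and addresses are all placeholders):

```
toaster1> vif create single lab_vif e0a e0b                    # single-mode VIF over two ports
toaster1> ifconfig lab_vif 192.168.1.10 netmask 255.255.255.0 up
toaster1> ifconfig lab_vif alias 192.168.1.11                  # second IP gives a second MPIO path
```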