How to configure iSCSI using cross over cables FAS2040 - Windows

VASIMOLA1

Hi,

We have a FAS2040 filer, and I'm having some problems configuring the iSCSI connections. We have NFS and iSCSI licenses, and they are enabled.

The host is an HP DL380 G7 running Windows Server 2003 SP2 Standard. The network cards are HP NC382i DP Multifunction Gigabit Server Adapters.

I have connected two crossover cables from the filer's e0c and e0d ports to the DL380's network ports #3 and #4.

From the filer side I have made the following configurations (a rough command-line equivalent is sketched after step 3):

1. I created a standard LUN for Windows

- /vol/VLUN/lun1

- Protocol: Windows

- Size 7.8TB

- LUN Share: none

- LUN Online

2. I created an iSCSI virtual interface from ports e0c and e0d.

- I gave the interface an IP address and netmask, set the MTU size to 1500, unchecked Trusted and WINS, and set Trunk Mode to Multiple

- I enabled iSCSI for this virtual interface.

3. I created an Initiator Group and mapped it to the LUN created in step 1.

- I checked the Initiator Node Name (IQN) on the Windows host machine and used it as the initiator when creating the igroup

- Type: iSCSI, OS: Windows

- I mapped the newly created igroup to the LUN created in step 1.
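
For reference, the approximate Data ONTAP 7-Mode command-line equivalent of steps 1-3 would look something like this (the vif name, igroup name, IP address and host IQN below are placeholders, and the exact syntax can differ slightly between ONTAP releases):

    # step 1: the LUN
    lun create -s 7800g -t windows /vol/VLUN/lun1
    # step 2: the multi-mode vif, its IP address, and iSCSI on that interface
    vif create multi iscsi_vif e0c e0d
    ifconfig iscsi_vif 192.168.100.10 netmask 255.255.255.0 mtusize 1500 up
    iscsi interface enable iscsi_vif
    # step 3: the igroup with the host IQN, mapped to the LUN
    igroup create -i -t windows win_host_ig iqn.1991-05.com.microsoft:hostname
    lun map /vol/VLUN/lun1 win_host_ig 0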

If I have understood correctly, everything should now be more or less ready for using the iSCSI LUN from the filer's point of view. If I check the iSCSI report, I can see that the iSCSI service is running, the iSCSI node name is shown, the portal listing shows the IP address, port, TPGroup and interface, and the iSCSI statistics show no problems.
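
The same checks can be run from the filer console; these are standard 7-Mode commands, though the output details vary by release:

    iscsi status
    iscsi nodename
    iscsi portal show
    iscsi stats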

In the Windows host machine:

- I have installed the Microsoft iSCSI Software Initiator.

- I used HP Network Configuration to create a network team from ports #3 and #4 and enabled these network cards as iSCSI devices.

- I gave this interface an IP address from the same subnet as the virtual interface I created on the NetApp device in step 2, with the same netmask.

So the crossover cables connect the NetApp device's iSCSI interface (e0c + e0d) to the host machine's iSCSI-enabled network team (ports #3 + #4).
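
For reference, the addressing on the Windows side can also be set and checked from the command line (the connection name "iSCSI Team" and the addresses below are placeholders; the netsh form shown is the Windows Server 2003 syntax):

    netsh interface ip set address "iSCSI Team" static 192.168.100.20 255.255.255.0
    ipconfig /all
    ping 192.168.100.10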

The actual problem:

When I try to establish the connection using the Microsoft iSCSI Initiator (Discovery tab -> Target Portals: Add -> IP address of the NetApp iSCSI interface, port 3260), it says:

"Connection Failed"

If I try to ping the IP address of the iSCSI interface from Windows, I get no reply.
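
A few basic checks from both ends can help narrow down where the connection breaks; this is only a generic sketch with placeholder addresses (iscsicli is the command-line counterpart of the Discovery tab in the Microsoft initiator):

    rem on the Windows host
    ping 192.168.100.10
    arp -a
    iscsicli QAddTargetPortal 192.168.100.10

    # on the filer console
    ifconfig -a
    ping 192.168.100.20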

I also tried this using only one crossover cable, without creating a network team of the HP network cards, and had no success.

So I don't know whether the problem is in the configuration of my FAS2040 filer, on the Windows 2003 side, or in the physical hardware (the HP DL380 G7 with HP NC382i DP Multifunction Gigabit Server Adapters).

Any advice would be highly appreciated.

Thanks in advance,

Valtteri

6 REPLIES

neto

Hi All,

This is neto from Brazil

How are you?

Network teaming is not supported by the Microsoft iSCSI Software Initiator. There are other methods (MCS, multiple connections per session, and MPIO) that will meet the performance and high-availability requirements.

There is a very good document on the Microsoft site about iSCSI how-tos, and it contains examples for all of this.

Please let me know if you need any help

All the best

neto

NetApp - I love this company!

chriskranz

Network teaming (supported or otherwise) requires switch assistance, so it specifically will not work if you use crossover cables. Break the teaming and connect both interfaces directly. Give each interface a different IP (preferably on a different subnet) and then use host-based MPIO for load balancing and redundancy.

Technically, interface teaming gives you little benefit with single-host connectivity anyway. Use failover teaming if you really must, but as stated, it is not officially supported.
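
A minimal sketch of that layout with placeholder names and addresses, one subnet per path (7-Mode syntax; the vif has to be taken down before it can be destroyed):

    ifconfig iscsi_vif down
    vif destroy iscsi_vif
    ifconfig e0c 192.168.101.10 netmask 255.255.255.0 mtusize 1500 up
    ifconfig e0d 192.168.102.10 netmask 255.255.255.0 mtusize 1500 up
    iscsi interface enable e0c
    iscsi interface enable e0d

The two Windows ports then get matching addresses (for example 192.168.101.20 and 192.168.102.20), and host-based MPIO handles the two resulting paths to the same LUN.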

VASIMOLA1

Thanks for the help, guys.

I broke the teaming and the virtual interface and got the system up and running. Multipathing with round robin is now used.

Additional question:

- I'm hoping to see only one partition in Windows (2003 SP2 Standard x64).

- If I create only one 7TB LUN, Windows is only able to initialize 2TB even though it sees the whole LUN.

- So I have created 4 LUNs: 2TB + 2TB + 2TB + 1TB (7TB in total)

Some documentation states that Dynamic Disks aren't supported when the Microsoft iSCSI Software Initiator is used.

Still, the system lets you create Dynamic Disks; I extended the disks together to get a single 7TB volume, and when I did read/write testing, I saw no problems.

Do you know or believe that it is safe to use Dynamic Disks?

Or is there any other way to get Windows to see only one disk with a reliable configuration?

Thanks in advance!

peter_lehmann

Sounds like you have initialized the disks as "MBR"; if you initialize them as "GPT" instead, you can use the full 7TB disk size. MBR uses 32-bit sector addresses, so with 512-byte sectors it tops out at roughly 2TB, while GPT removes that limit.
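
A quick way to do that from the command line on Windows Server 2003 x64 is diskpart (the disk number and drive letter below are placeholders, and "clean" wipes the selected disk, so only use it on an empty LUN):

    diskpart
    DISKPART> list disk
    DISKPART> select disk 2
    DISKPART> clean
    DISKPART> convert gpt
    DISKPART> create partition primary
    DISKPART> assign letter=E
    DISKPART> exit
    format E: /FS:NTFS /Q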

I would not use Dynamic Disks at all, because I do not trust Windows to handle this kind of disk and keep the data safe.

Peter

VASIMOLA1

Thanks for the fast reply.

That works, and you are right, but if I have understood correctly, huge LUNs aren't something you should create.

So the options are

1. smaller LUNs + dynamic disks

2. one huge LUN using GPT

Which one of the options is more reliable and/or has better performance?

Thanks,

Valtteri

peter_lehmann

Because I had very bad experiences with Dynamic Disks in the past, I'll always go for option 2. Just make sure to also select a matching backup method for the "large LUN" so that you can fulfil the RPO/RTO needs (SnapVault / SnapMirror).
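
As a rough illustration only, a 7-Mode SnapMirror relationship for the volume behind the large LUN could look like this (the filer and volume names are placeholders, and the destination volume has to exist already as a restricted volume):

    # run on the destination filer
    snapmirror initialize -S srcfiler:VLUN dstfiler:VLUN_mirror
    snapmirror status

    # /etc/snapmirror.conf entry for a daily update at 02:00
    srcfiler:VLUN dstfiler:VLUN_mirror - 0 2 * *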

Peter
