ONTAP Hardware

FAS2240 direct attached 10Gb to ESX

HEADCOACH2001

Hi there,

I am a newbie at configuring a NetApp, but I have a problem and I hope someone can help me.

We have the following Hardware:

1. One NetApp FAS2240-2 with 24 SAS disks and an additional shelf with 12 SATA disks.

2. Two HP servers running VMware ESXi 5.5.

The NetApp and the HP servers both have 10Gb SFP interface cards and are directly attached without a switch.

We did the following config for hosts and rc (see attachment).

On the ESX side, on both servers, we put the two 10Gb plus the two 1Gb NICs on one vSwitch. But it is not working; I cannot see the volumes.

We have created two aggregates: one on the SAS shelf with two volumes, and one aggregate on the SATA shelf with one volume.

So at the moment we are not able to mount the volumes via NFS. Can someone please help me or explain where I have made a mistake?


Thanks in advance

Peter

4 REPLIES

CHRISMAKI

Peter,

Direct attached is not ideal, but it should work. Do the links come up? If not, flip your TX/RX on one end. The next question is how everything is cabled.
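
As a quick check (assuming 7-Mode and the e1a/e1b interface names used in Section 1 below), you can verify link state on both ends with something like:

On the controller: ifconfig e1a (the output shows whether the interface is up and the negotiated media type)

On the ESXi host: esxcli network nic list (or esxcfg-nics -l; the Link column should show Up)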

*****Section 1*********

For both hosts to see both volumes, you'll need each HP server cabled to both controllers. I'm assuming you have the SAS shelf owned by one controller and the SATA shelf by the other, correct? Let's assume that SAS is owned by STOR1A and SATA is owned by STOR1B, and that everything is cabled as follows:

stor1a:e1a -> server1:eth0

stor1a:e1b -> server2:eth0

stor1b:e1a -> server1:eth1

stor1b:e1b -> server2:eth1

ESX on server 1 would then have access to the two datastores on stor1a via 192.168.2.12 and one datastore on stor1b via 192.168.2.22.

ESX on server 2 would have access to the same data stores, but the two SAS ones would be via 192.168.2.13 and the SATA one would be via 192.168.2.23.
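
Once the links and IPs are in place, mounting those NFS datastores on server 1 from the ESXi shell would look roughly like the following. The volume and datastore names here are just placeholders, so substitute whatever you actually created:

esxcli storage nfs add -H 192.168.2.12 -s /vol/vol_sas1 -v ds_sas1

esxcli storage nfs add -H 192.168.2.12 -s /vol/vol_sas2 -v ds_sas2

esxcli storage nfs add -H 192.168.2.22 -s /vol/vol_sata1 -v ds_sata1

Server 2 would mount the same shares, but via 192.168.2.13 and 192.168.2.23.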

****Section 2******

Not having a switch isn't ideal, but I'm assuming the problem is budgetary and you're working with what you have. Since you don't have switches, I'd be tempted to drop the single-mode ifgrp configuration in favour of setting "cf.takeover.on_network_interface_failure" to "on" and then setting the "nfo" option in the ifconfig command for the e1[a,b] interfaces. That way you're always running on 10Gb; if you lose a 10Gb interface, the HA pair will just fail over to the side where both interfaces are still good. The problem with this configuration, however, is that if you reboot an ESX host, you could induce an unexpected failover. You could prevent this by disabling cf before rebooting ESX hosts.
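
Just as a sketch of what that could look like on stor1a, using the interface names and addresses from Section 1 (the option is set once from the CLI, and the ifconfig lines would replace the ifgrp lines in /etc/rc; double-check the partner interface names against your own setup):

options cf.takeover.on_network_interface_failure on

ifconfig e1a 192.168.2.12 netmask 255.255.255.0 partner e1a nfo

ifconfig e1b 192.168.2.13 netmask 255.255.255.0 partner e1b nfo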

********

Anyway, let me know if section 1 above is of any help. If you still can't see your volumes, paste the output of the command "exportfs" from both nodes in your reply.
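
For comparison, on a 7-Mode system a working export for the example volumes would show up in the exportfs output (and in /etc/exports) roughly like this, where the volume names and the 192.168.2.0/24 subnet are just assumptions based on the example above:

/vol/vol_sas1 -sec=sys,rw=192.168.2.0/24,root=192.168.2.0/24

/vol/vol_sas2 -sec=sys,rw=192.168.2.0/24,root=192.168.2.0/24

/vol/vol_sata1 -sec=sys,rw=192.168.2.0/24,root=192.168.2.0/24

After editing /etc/exports, run exportfs -a to re-export.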

HONIG2012

I assume you're talking about 7-Mode, but what about a clustered one? Although it's not a recommended configuration, would it be supported on a cDOT system, especially with NFS datastores?

Theoretically I believe it would work, provided the failover groups are correctly configured.
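
For what it's worth, on cDOT that would mean making sure each NFS LIF has a failover group containing the right 10Gb ports, something along these lines (syntax varies between ONTAP releases, and the node, vserver and LIF names here are made up):

network interface failover-groups create -failover-group fg_10g -node node01 -port e1a

network interface failover-groups create -failover-group fg_10g -node node02 -port e1a

network interface modify -vserver vs_nfs -lif nfs_lif1 -failover-group fg_10g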

 

Any ideas on that?

Thanks, AJ

A_Tomczak

Hi,

From my point of view it is a valid configuration and it should work. Is the connection between the servers and the storage established? Are you able to send a vmkping? If yes, then the most common problems are: wrong volume security style (must be unix), wrong or missing export policy (cDOT), or wrong or missing export in /etc/exports (7-Mode).
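
Rough examples of those checks, with the volume name below being a placeholder:

From the ESXi shell: vmkping 192.168.2.12

7-Mode: qtree status vol_sas1 (shows the security style) and rdfile /etc/exports (shows the exports)

cDOT: volume show -volume vol_sas1 -fields security-style and vserver export-policy rule show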

HONIG2012

Just wanted to give an update on this, which was sent to me by the partner helpdesk:

 

“We don’t specifically test direct connect, and I know we do not support it in cDOT at all (at least not via Fibre Channel) but we have been supporting it in 7-mode.”

 

So even if it would work, there doesn't seem to be any support for this kind of solution.

 

Using switches now. ;)
