VMware Solutions Discussions

Target or initiator?

l_augeard
16,569 Views

Hello

I am configuring my first NetApp FAS2020 active/active pair. What is a target vs. an initiator? By default 0a and 0b are initiators, but then they are not recognized by the server. If I change 0a to target it works, but is that correct?

1 ACCEPTED SOLUTION

adamfox
16,562 Views

Initiators are used for disk shelves connected to the controller or tape drives/libraries connected to the controller for backup.

Targets are for SAN connectivity to hosts.

An FC port can either be an initiator or a target, not both.
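
If it helps, on Data ONTAP 7.x you can check and change a port's personality from the console. A rough sketch, assuming the onboard ports 0a/0b from your setup (the prompt name is made up, the # notes are just annotations, and the type change only takes effect after a reboot):

    fas2020> fcadmin config                 # show the current type of each FC port
    fas2020> fcadmin config -d 0a           # take the port offline first
    fas2020> fcadmin config -t target 0a    # re-flag 0a as a target
    fas2020> reboot                         # the new type takes effect on boot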

Hope this helps.

-- Adam Fox


21 REPLIES

lwei
15,309 Views

If you use 0a and 0b to connect to host HBAs, you should set 0a and 0b as targets. The host HBA ports are the initiators.

-Wei
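
Once the ports are set as targets, something like this on the filer should show whether the host HBA ports have logged in (an illustrative sketch; the exact output varies by release, and the WWPN and igroup name are placeholders):

    fas2020> fcp show initiators
    Initiators connected on adapter 0a:
            Portname                    Group
            10:00:00:00:c9:xx:xx:xx     esx1_group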

jayadratha
15,779 Views

Hello. With the 2020 system you don't have a choice.

There are only 2 FC ports, and you cannot install an additional FC HBA adapter.

So one port is configured for disks/library (initiator), and the other port for hosts (target).

P.S. Additional shelves on the controller and a tape library need an initiator port; host access needs a target port.
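
So the end state on a 2020 typically looks something like this (an illustrative fcadmin config listing; the column layout differs between releases, and the arrows are only my notes):

    fas2020> fcadmin config
            Local
    Adapter Type       State        Status
    ----------------------------------------
    0a      initiator  CONFIGURED   online    <- disk shelves / tape
    0b      target     CONFIGURED   online    <- hosts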

ogra
15,779 Views

Yes, you only have 1 cable going to the disk shelf and only 1 cable from the server. That is why I always recommend customers buy the FAS2020 as an active/active cluster and configure it that way.

This at least avoids a single point of failure (SPOF).

-Bakshana

l_augeard
15,779 Views

Are you saying that on my FAS2020 it is only possible to use one port? Can't I use both FC ports for 2 servers?

adamfox
15,779 Views

You can use both ports for front-end FCP servers, but you will not be able to add any expansion shelves to your 2020, so you will be forever stuck with the 12 internal disks.

There are only 2 FC ports on the 2020 (as it's an entry-level controller). Of course, if you want to expand later, you could get an FC switch, then go back to using one port for FCP and one port for disk expansion.

Hope this helps.

l_augeard
15,779 Views

I have 2 ESX servers; I plugged the first into node 1 and the second into node 2.

The LUNs are configured on node 1, so why does the server connected to node 2 see the LUNs?

adamfox
15,779 Views

If the LUNs are configured on node 1 and the ESX server using those LUNs is connected to node 2, it will still work, because the partner can access the LUNs on node 1 through the cluster interconnect.

But performance will be lower, since you are introducing an extra hop. Whether you will notice the difference or not, I can't tell. Still, it is one way around the limit. You may be better off matching LUNs to the controller the host actually connects to.
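
One way to check the matching is to list the mappings on each controller and compare them against which node each host is cabled to. A sketch, with made-up paths, igroup name and WWPN:

    fas2020-1> lun show -m
    LUN path                  Mapped to   LUN ID   Protocol
    /vol/vmware1/esx1.lun     esx_group        0   FCP
    fas2020-1> igroup show
        esx_group (FCP):
            10:00:00:00:c9:xx:xx:xx (logged in on: 0b)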

l_augeard
15,779 Views

Yes, but it's just for fault tolerance, because I don't have an FC switch...

adamfox
11,315 Views

For true fault tolerance without a switch you would want a connection from each server to each controller.  But that will use all of your FC ports. 

As I understand your configuration there are still some single points of failure, but you've lowered the number of them.
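
The pair itself still covers a controller failure, so it is worth confirming failover is enabled; on 7.x that is simply (output approximate):

    fas2020-1> cf status
    Cluster enabled, fas2020-2 is up.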

l_augeard
11,315 Views

Yes, each of my ESX servers has 1 FC port, so I have connected each one to a different node.

When I have a switch, I will connect them properly.

l_augeard
11,315 Views

I received this AutoSupport message:

SW VERSION:
7.3.3

Case number 2001387700 has been assigned to this AutoSupport.  You can review or update this case anytime at:
This AutoSupport has been generated because a lun mapping or igroup type misconfiguration has been detected. To troubleshoot this issue, issue a 'lun config_check' command. This command will list out the problems that generated this AutoSupport. The primary cause of this AutoSupport will be changes to the lun mapping configuration while the cluster interconnect is down and the safety mechanism have been overridden by the user using the various '-f' tags on commands like 'lun online','lun map' or 'igroup add'. The errors will typically include cases of having a lun on each filer mapped to the same logical unit number for the same initiator. In 'single_image' mode, you can have only one LUN N for each initiator across the cluster. This is enforced when the cluster interconnect is up. If the IC is down, lun map changes are prevented, unless you use the -f option. However, the -f option should be used with caution.
Is this OK? Is it because of the connection to node 2? What should I do?
Thanks

adamfox
11,315 Views

That looks like the warning ONTAP gives when it detects that you are going down the non-optimized path through the interconnect. So you said the host connected to node 2 was accessing LUNs on node 1.

That will cause the ASUP to be generated to warn you about it.  In your case, it's expected so I would have the GSC archive (i.e. close) the case.
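
You can also run the check the ASUP itself points to; with -v it lists every mapping problem it finds, so a clean run would confirm there is nothing to fix:

    fas2020-2> lun config_check -v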

l_augeard
11,315 Views

I have not understood this passage: "That will cause the ASUP to be generated to warn you about it. In your case, it's expected so I would have the GSC archive (i.e. close) the case."

Are you advising me to put the second link on node 1?

adamfox
11,316 Views

Only if you are unhappy with the performance you are getting on the server attached to node 2. 

Of course the better answer down the line will be to get a switch, but until then, if you are ok with the performance on ESX2 going over the interconnect, then don't do anything. 

If you are experiencing performance issues, then you have a few options.

1.  Connect ESX2 to FAS1

2.  Get a switch so that ESX2 can connect to either controller

3.  Move the LUNs that ESX2 uses over to FAS2.

But, again, if you are happy with the performance on ESX2, you don't have to do anything.

l_augeard
9,760 Views

- Yes, but if I stay connected to node 2, will I receive this AutoSupport message every day?

- And what if I connect ESX1 and ESX2 both to node 1?

ogra
9,760 Views

Hi,

I think we are getting closer...

If I understand correctly... you have 2 ESX servers (single FC port each) and a FAS2020A (with 12 internal disks).

If yes, then the simple answer is to connect both ESX servers to the single storage controller (say the 1st) that owns the most capacity (a total of 8 disks).

This will ensure that the maximum number of your LUNs go through the optimized path.

As for the other controller (say the 2nd), you can leave it alone for the time being if you are OK with the storage you provisioned from the 1st controller.

You can then plan for the FC switch later on and re-organize things.
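
To see which controller actually owns the most disks and capacity before deciding, you can run something like this on each node (standard 7.x commands; I've left the output out):

    fas2020-1> disk show -v       # ownership of the internal drives
    fas2020-1> aggr status -s     # spare disks on this controller
    fas2020-1> df -A              # aggregate capacity and usage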

-Bakshana

l_augeard
9,019 Views

Yes, that works for me:

node 1:

2 FC ports

ESX 1 and ESX 2

node 2:

2 FC ports, not used, just for failover?

In this solution, if node 1 crashes, do I move the FC cables manually to node 2?

ogra
9,759 Views

Hi,

Well, Adam just wanted to let you know that the data path goes through the non-optimized path.

Either way, if you are happy with the current performance, you are good to go.

The simple answer to this is to get a switch.

Thanks & Regards,

Bakshana Ogra


l_augeard
9,759 Views

"

Well  Adam wants to let you know the data path is through non-optimal path.

In either  case if you like to have the current performance you are good to go.

"

how it ?
