VMware Solutions Discussions
Hello ....
On a FAS2020 model with 2 controllers, we have 4 interfaces, 2 for each controller.
The question is whether the disks on one controller can be accessed just as effectively through the interfaces of the other controller ... e.g. if you configure a vif between the e0a cards of each controller? ...
thanks
Hi,
You are talking about IP interfaces, right? The two controllers are completely separate & you cannot configure a vif spanning two controllers.
You can e.g. have one vif on each controller, spanning both 'local' ports - the vifs will act as fail-over targets for each other, should either controller fail.
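A rough sketch of what that could look like in the Data ONTAP 7-mode CLI (the vif names and addresses below are just placeholders, and to survive a reboot the ifconfig lines would also need to go into /etc/rc):

    controllerA> vif create multi vifA e0a e0b
    controllerA> ifconfig vifA 192.168.1.10 netmask 255.255.255.0 partner vifB

    controllerB> vif create multi vifB e0a e0b
    controllerB> ifconfig vifB 192.168.1.11 netmask 255.255.255.0 partner vifA

The "partner" option names the partner controller's interface whose address this one will assume on takeover. (vif create lacp works the same way, if your switches support LACP.)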
Regards,
Radek
Hello .... thanks for the reply ....
So if I have two clustered controllers configured active/active, how should the 4 interfaces be configured to get the most performance and fault tolerance for iSCSI in VMware? ....
If I create 2 vifs, each with the 2 cards of one controller, and controller "number one" (or any of its interfaces) fails, are its disks/aggregates/LUNs then served by the other controller? ... If the answer is yes, then under normal conditions, is access to the disks/aggregates/LUNs balanced 50% to each controller? ...
Sorry if this is a dumb question :((
Thanks again .....
So if I have two clustered controllers configured active/active, how should the 4 interfaces be configured to get the most performance and fault tolerance for iSCSI in VMware? ....
It's a bit of a complex question - have a look at these threads:
https://communities.netapp.com/message/49195#49195
https://communities.netapp.com/message/42628#42628
under normal conditions, is access to the disks/aggregates/LUNs balanced 50% to each controller? ...
That's not true - disks/aggregates/LUNs are assigned exclusively to one controller, so 100% of the traffic goes via the 'owning' controller & its ports (there is one corner case when FC is misconfigured & traffic traverses the cluster interconnect, so the partner controller serves data, but it is not a desired situation by any means & it is not applicable to iSCSI).
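You can easily see the ownership for yourself from the CLI - each controller only lists the aggregates and LUNs it owns (7-mode commands, just as an illustration):

    controllerA> aggr status
    controllerA> lun show

Anything that does not show up there is owned - and served - by the partner.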
Regards,
Radek
Hello ....
I have read the referenced documents (except https://communities.netapp.com/message/42628#42628, which I have no permission to view) ...
I am starting to understand some concepts, but I have the following questions:
1.- For iSCSI traffic in VMware, there are two possibilities: combining the interfaces into an LACP vif, or using VMware MPIO with the interfaces in single mode (no vif). In both cases, whether using a vif or plain interfaces, you can specify the "partner" interfaces. Can these "backup" interfaces be on the other controller? ... That is, can I indicate that interface e0a of controller "A" is the backup of interface e0a of controller "B"? ...
If yes, then access to the disks of controller "A" would go through the interfaces of controller "B" ... no? If not, what is the point of the active/active cluster?
2.- Given that 3 ESX servers access the iSCSI array simultaneously, and we have only 2 interfaces per controller, does it make sense to dedicate 2 network cards in each ESX server? .... Ultimately, there are only 2 physical "receiving" cards on the FAS2020, while there are 6 physical cards (2 x 3 ESX) that "request data" ....
Thanks
That is, can I indicate that interface e0a of controller "A" is the backup of interface e0a of controller "B"? ...
Yes, that's correct. Both controllers have physical access to all disks, but only one of them (the 'owner') serves the data under normal circumstances. If the 'owner' fails, its virtual instance is run in the memory of its partner, hence there is no interruption to the service.
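To give you an idea, the partner mapping is a one-liner per interface in 7-mode (the addresses are placeholders; in practice you would persist this in /etc/rc):

    controllerA> ifconfig e0a 192.168.1.10 netmask 255.255.255.0 partner e0a
    controllerB> ifconfig e0a 192.168.1.11 netmask 255.255.255.0 partner e0a

On takeover, the surviving controller additionally brings up the failed partner's address on its own e0a.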
Ultimately, there are only 2 physical "receiving" cards on the FAS2020, while there are 6 physical cards (2 x 3 ESX) that "request data" ....
Well, normally you allocate only a subset of the physical ports on an ESX host to iSCSI traffic (e.g. two) - the rest are used for different purposes, like VM traffic, VMotion, etc. (separation can be achieved via multiple virtual switches).
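Roughly like this on the ESX side (ESX 4.x-style commands; the vmnic names and the address are placeholders for your environment):

    # dedicated vSwitch for iSCSI, with two uplinks
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    # VMkernel port group + address for iSCSI
    esxcfg-vswitch -A iSCSI1 vSwitch1
    esxcfg-vmknic -a -i 192.168.1.50 -n 255.255.255.0 iSCSI1

VM traffic, VMotion, etc. then stay on the other vSwitch(es).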
Regards,
Radek
Hello ....
First, many thanks for answering ....
So what is the fundamental difference between having the cluster active/active versus active/passive? .... If it is always "just" one controller that serves its own disks? ....
Returning to the iSCSI connection with VMware: suppose I define interface e0a on controller "A" as "Shared" ("On takeover, this will assume the interface address of its partner interface, in addition to its current address."), indicating that its partner is interface e0a of controller "B". On the VMware side I connect to the target through both interfaces, with the Round Robin path policy active. What happens when VMware makes the connection to the interface that is acting as the partner? ....
Would it not be better to set, as the partner of interface e0a, the e0b of the same controller? ...... In that case, the VMware iSCSI connections would be made alternately to both "active" interfaces on the same controller .... no? ....
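(For reference, by Round Robin I mean setting the path policy on the device like this - ESX 4.x syntax, and the naa identifier is just a placeholder:

    esxcli nmp device setpolicy --device naa.60a98000xxxxxxxx --psp VMW_PSP_RR

)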
Thanks again ...