ONTAP Discussions
Hi,
I am pretty new to the NetApp environment and pretty lost.
I now have to configure an iSCSI NetApp cluster connected via DAS to two ESXi servers. You can find a little schema in the attachment.
My problem is that I don't understand how to connect the NetApp cluster to my vCenter.
I can't create an aggregate in this situation, right? So how do I connect all the paths to my vCenter?
Any help will be appreciated!
Here's the SAN config doc for some reading for you: https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-sanconf/SAN%20configuration.pdf
Also, here's an example from the last time I did a config like this:
Controller 1:
SVM Logical Interface Role Status Network Address Current Port
iSCSI N1_vmhost01a_lif1 data up/up 10.10.10.20 e0c
iSCSI N1_vmhost02a_lif2 data up/up 10.10.11.21 e0d
Controller 2:
iSCSI N2_vmhost01b_lif1 data up/up 10.10.10.22 e0c
iSCSI N2_vmhost02b_lif2 data up/up 10.10.11.23 e0d
Hosts each have two IP addresses too.
Host 1:
10.10.10.30 and 10.10.11.30
Host 2:
10.10.10.40 and 10.10.11.40
Each host needs access to each controller.
(Which looks like your drawing (I think))
example:
Host 1 port A-> NetApp-01 port e0c
Host 1 port B -> NetApp-02 port e0c
Host 2 port A -> NetApp-01 port e0d
Host 2 port B -> NetApp-02 port e0d
For aggrs / vols / LUNs in this config... I'm going to assume you have 24 drives in there; you'd want to create 2 aggrs, one on each controller. From there, the basic way is just to do a volume / LUN off each aggr and map them to the hosts. Install VSC too, it'll make mapping datastores easy.
And always test failover on each controller before it goes into production.
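In case it helps, here is a rough sketch of that from the ONTAP CLI. All the names (nodes, SVM, aggrs, volume, LUN, igroup), sizes and disk counts are placeholders for your environment, and the initiator IQNs come from your ESXi hosts:

One aggr per controller:
storage aggregate create -aggregate aggr1_node1 -node cluster1-01 -diskcount 11
storage aggregate create -aggregate aggr1_node2 -node cluster1-02 -diskcount 11

A volume and a LUN on the first aggr (repeat on the second):
volume create -vserver svm_iscsi -volume vol_esx01 -aggregate aggr1_node1 -size 500GB
lun create -vserver svm_iscsi -path /vol/vol_esx01/lun_esx01 -size 400GB -ostype vmware

An igroup with both hosts' IQNs, then map the LUN to it:
lun igroup create -vserver svm_iscsi -igroup esx_hosts -protocol iscsi -ostype vmware
lun igroup add -vserver svm_iscsi -igroup esx_hosts -initiator iqn.1998-01.com.vmware:esx01
lun igroup add -vserver svm_iscsi -igroup esx_hosts -initiator iqn.1998-01.com.vmware:esx02
lun map -vserver svm_iscsi -path /vol/vol_esx01/lun_esx01 -igroup esx_hosts

That's roughly what VSC / the GUI workflow does for you anyway.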
Yes, I connected my nodes like this.
Not sure about the aggr.
If node1's e0c and e0d don't go to the same server, does that mean I can't create an aggr?
I would go a step further. e0c/e0d use the same ASIC; if the ASIC fails, both ports fail. I would use e0c to go to one host on both nodes and e0e to go to the other host on both nodes.
The best solution (as always in a case like this) is to get a small SFP+ based switch and just hook up with Twinax cables.
Be sure to:
If you are using current ONTAP software (like 9.7), it should allow you to create the aggregates automatically with the GUI (called Provision Storage, if I recall). As @SpindleNinja said, you should end up with two even-sized aggregates, one on each node.
Disks and ports are independent of each other.
In the NetApp ONTAP world, an aggr (aggregate) is a collection of disks that data is written to, and an aggr is owned by a storage node.
Like TMAC said, each port will have its own IP address, on both the storage side and the host side. There's no port trunking/LACP/binding etc. with this config.
You want each host to be able to access each storage controller's aggr.
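If you want to sanity-check that from the cluster shell, something like this shows which node owns each aggr and where each LIF sits (the SVM name is just an example):

storage aggregate show -fields node,size,availsize
network interface show -vserver svm_iscsi -fields address,curr-port,status-oper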
Thanks to both of you.
iSCSI needs IP addresses to work, so I have to create subnets for each physical connection.
Do I do that with LIFs? (Like I said, I'm pretty new and I have a lot to learn to understand everything in ONTAP well.)
Let me correct myself:
When using multiple subnets, it is not a best practice to use iSCSI port binding. Port binding should only be used when all connections can see and talk to each other. Sorry!
If the iSCSI is direct-connected, you do not need to create subnets.
On the NetApp, create the four LIFs (two per node) dedicated to iSCSI, with no gateway.
On the ESXi side, create a vSwitch for each subnet and place a VMkernel port there.
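As a rough sketch of both sides (the SVM name, node names, vmnic numbers, port group names and netmasks below are assumptions based on the example addresses earlier in the thread, so adjust them to your setup):

NetApp side, one iSCSI LIF per port, no gateway:
network interface create -vserver svm_iscsi -lif N1_vmhost01a_lif1 -role data -data-protocol iscsi -home-node cluster1-01 -home-port e0c -address 10.10.10.20 -netmask 255.255.255.0
network interface create -vserver svm_iscsi -lif N1_vmhost02a_lif2 -role data -data-protocol iscsi -home-node cluster1-01 -home-port e0d -address 10.10.11.21 -netmask 255.255.255.0
network interface create -vserver svm_iscsi -lif N2_vmhost01b_lif1 -role data -data-protocol iscsi -home-node cluster1-02 -home-port e0c -address 10.10.10.22 -netmask 255.255.255.0
network interface create -vserver svm_iscsi -lif N2_vmhost02b_lif2 -role data -data-protocol iscsi -home-node cluster1-02 -home-port e0d -address 10.10.11.23 -netmask 255.255.255.0

ESXi side, one vSwitch + VMkernel per subnet (shown for host 1's first path; repeat for the second path and on host 2):
esxcli network vswitch standard add --vswitch-name=vSwitch_iSCSI_A
esxcli network vswitch standard uplink add --vswitch-name=vSwitch_iSCSI_A --uplink-name=vmnic2
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch_iSCSI_A --portgroup-name=iSCSI_A
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI_A
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.30 --netmask=255.255.255.0 --type=static

Newer ONTAP releases use service policies on the LIFs instead of -role/-data-protocol, but the idea is the same.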
Perfect, thanks.
I am going to take some time to read and try to apply this advice.
I will report back if I can make it work!
Alright guys.
I think I have everything set up, but now when I add dynamic discovery I can't find my target.
On the NetApp I have my 4 LIFs, one per iSCSI port, attached to my SVM.
On each of my two ESXi hosts I have two virtual switches with two VMkernels.
I am going to make a little diagram; it will be simpler to explain myself...
On my NetApp, I have created a LUN and a vol on my aggregate just to try to connect it to the ESXi.
I will send you the diagram tomorrow.
See you!
Everything is working now!
I forgot to start the iSCSI service in the SVM parameters.
Thanks for your help!
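For anyone else hitting this, the same check/fix from the cluster shell looks something like this (the SVM name is a placeholder):

vserver iscsi show -vserver svm_iscsi
vserver iscsi start -vserver svm_iscsi

If the iSCSI service was never created for the SVM at all, it's vserver iscsi create -vserver svm_iscsi instead.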
No problem! Glad it worked out.
How many iSCSI adapters do I need on a host? (VMware)
Min 2 - 1 per VLAN/switch.
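If it helps, the software iSCSI setup on each host looks roughly like this from the ESXi shell (the vmhba name varies per host, and the target addresses are the example LIF IPs from earlier in the thread):

esxcli iscsi software set --enabled=true
esxcli iscsi adapter list
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.10.10.20
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.10.11.21
esxcli storage core adapter rescan --adapter=vmhba64

There's still only one software iSCSI adapter (vmhba) per host; the "min 2" above refers to the VMkernel adapters, one per vSwitch/subnet, feeding that single adapter.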
This is the updated link to "Considerations for iSCSI configurations" (ONTAP 9, NetApp, July 01, 2022):
https://docs.netapp.com/us-en/ontap/pdfs/sidebar/Considerations_for_iSCSI_configurations.pdf
And the updated link to the SAN configuration guide, which includes the DAS configurations:
https://docs.netapp.com/us-en/ontap/pdfs/sidebar/SAN_configuration_reference.pdf
I don't think the parts of this configuration can talk to each other over a direct connection:
iSCSI N1_vmhost01a_lif1 data up/up 10.10.10.20 e0c
iSCSI N2_vmhost01b_lif1 data up/up 10.10.10.22 e0c
Host 1:
10.10.10.30 and 10.10.11.30
@Julien_Mos I want to know your IP configuration.
Hi all,
I'm also struggling to get directly connected iSCSI to work on an AFF-A150 unit.
Wired the nodes and 2 ESX hosts as follows:
node 1 e0c -> esx01 p1
node 1 e0d -> esx02 p1
node 2 e0c -> esx01 p2
node 2 e0d -> esx02 p2
It seems that I need 8 IPs. I configure iSCSI as follows:
- Go to SVMs, open the first one
- Go to iSCSI, click Configure
- Then I need to fill out 4 IPs (2 per node)
For the second SVM I also need to fill out 4 IPs.
In addition: how do I best set up the broadcast domain(s) and port members?
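I can't answer for your exact layout, but for the broadcast domains the general idea is to group only ports that actually share a layer-2 segment/subnet; with direct cabling each link may well end up as its own small subnet and domain. The syntax is something like this (the names, IPspace and port grouping below are only placeholders to illustrate the command):

network port broadcast-domain create -ipspace Default -broadcast-domain iSCSI-A -mtu 1500 -ports a150-01:e0c,a150-02:e0c
network port broadcast-domain create -ipspace Default -broadcast-domain iSCSI-B -mtu 1500 -ports a150-01:e0d,a150-02:e0d
network port broadcast-domain show

Bump the MTU to 9000 only if you also enable jumbo frames end-to-end on the ESXi side.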