ONTAP Discussions

LIF and Aggregate best practice

2255_SAATVIK

 

Hello Guys,

 

Can anyone help me with this please?

 

 

LIF 

 

I am configuring a 2-node FAS2650 cluster. Each node has 4x 10GbE ports: e0c, e0d, e0e, e0f.

SMB: e0c and e0e teamed as a0a on both nodes

iSCSI: e0d and e0f teamed as a0b on both nodes

Management: 1GbE e0M for node management on both nodes

Can I use e0M for both node management and cluster management?

 

I am looking to configure 2 SVMs (1 for iSCSI and 1 for SMB). I have VLANs 14 and 15 for SMB and VLAN 18 for iSCSI. What is the best practice for creating the broadcast domains, failover groups, and LIFs, and how do I assign them to the SVMs following best practice?
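For reference, this is roughly what I had in mind on the CLI for one of the SMB VLANs (the SVM, broadcast domain, and LIF names and the addresses are just placeholders), so please correct me if this is not the right approach:

::> network port vlan create -node node-01 -vlan-name a0a-14
::> network port vlan create -node node-02 -vlan-name a0a-14
::> network port broadcast-domain create -broadcast-domain bd_smb_14 -mtu 1500 -ports node-01:a0a-14,node-02:a0a-14
::> network interface create -vserver svm_smb -lif smb_lif_14 -role data -data-protocol cifs -home-node node-01 -home-port a0a-14 -address 10.0.14.10 -netmask 255.255.255.0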

 

Aggregate

 

4x SSD and 44x SAS

FAS2650 (internal shelf) with 4x SSD and 20x SAS, plus one additional disk shelf with 24x SAS

 

Planning to create a Flash Pool with the 4x SSD in RAID 4 (using this to create a hybrid aggregate).

 

What would be the best practice for assigning the disks to each controller and for creating the aggregates?

 

Regards

 

 

 

 

 

3 REPLIES

GidonMarcus

Hi

 

For iSCSI I suggest you don't LACP the two NICs. From https://www.netapp.com/us/media/tr-4182.pdf:

 

[Attachment: MPIO.png]

 

In an ideal world you would have two separate switch sets for iSCSI, but in your case, as I assume the servers have a similar topology, I would just create 4 LIFs and assign them natively, without LACP in the path. If you do go with the LACP approach you also lose performance, as you have only a single active iSCSI queue and you are bound to a single physical interface. (Oh, and to prevent that on the host side if they are already configured for LACP: set the LACP path choice to be based on XOR and to by specific src or dst mac.)
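Just as a rough sketch (I'm making up the SVM/LIF names and addresses here, and on newer ONTAP releases -role is replaced by service policies, so adjust for your version), four native iSCSI LIFs on the VLAN 18 ports would look something like the below, with the same again for node-02. The hosts then handle the paths with MPIO instead of LACP:

::> network port vlan create -node node-01 -vlan-name e0d-18
::> network port vlan create -node node-01 -vlan-name e0f-18
::> network interface create -vserver svm_iscsi -lif iscsi_n1_e0d -role data -data-protocol iscsi -home-node node-01 -home-port e0d-18 -address 10.0.18.11 -netmask 255.255.255.0
::> network interface create -vserver svm_iscsi -lif iscsi_n1_e0f -role data -data-protocol iscsi -home-node node-01 -home-port e0f-18 -address 10.0.18.12 -netmask 255.255.255.0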

 

As for the cluster mgmt LIF: yes, it can sit with the node mgmt LIFs on e0M.
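If cluster setup already created the cluster_mgmt LIF on another port, you can just re-home it; something like the below, assuming the default LIF name and with <clustername> standing in for your admin SVM:

::> network interface modify -vserver <clustername> -lif cluster_mgmt -home-node node-01 -home-port e0M
::> network interface revert -vserver <clustername> -lif cluster_mgmt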

 

For the AGGR, I suggest you use the SSDs only for the iSCSI workload and not waste them on the SMB (unless you know of a specific use case that will benefit from it). Also try to use ADP for the root volumes to save 6 disks' worth of space. You didn't mention your capacity requirements for each workload, so I can't help much with the disk count and RAID size calculation.

 

G

 

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK

2255_SAATVIK

Hello 

 

Thank you very much for the reply.

 

Yes, they have already created the LACP, so I will go with your suggestion: "(Oh, and to prevent that on the host side if they are already configured for LACP: set the LACP path choice to be based on XOR and to by specific src or dst mac.)"

 

Total number of disks and sizes:

 

44x 1.2 TB SAS
4x 960 GB SSD

 

FAS2650 2-node HA + 1x DS224C

 

Regards 

GidonMarcus



Hi


I see I had a typo in the line you quoted:

"set the LACP path choice to be based on XOR and to by specific src or dst mac"

It was meant to be:

"set the LACP path choice to be based on XOR and not by specific src or dst mac"



About the sizes: what I want to understand here is how much capacity you need for CIFS and how much for iSCSI, so we can try and split it across the two nodes.

If you are unsure, I think we can maybe do an "active-passive" configuration and get a pretty reasonable result (I haven't checked all this, as I don't have a similar system, but from what I see online it should work):

 

[Attachment: SYS.png]



Some notes I took during the prep:

 

1. You re-initialize the system with only 22 SAS disks in the embedded shelf. That will create the following ADP configuration:


[Attachment: ADP.png]
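(To double-check the partition layout after the re-init you can run something like the below; the aggr0 name is just my guess at what setup will call it:)

::> storage aggregate show-status -aggregate aggr0_node_01
::> storage disk show -fields owner,container-type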

 

2. After it initializes, you add the other shelf, also with 22 SAS disks.

3. You assign all the data partitions to one node, so it looks like this (but with more disks):

[Attachment: active_passive.png]
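(Roughly like the below, with made-up disk names - check yours with "storage disk show" first, and you may need -force true if a partition already has an owner. I'd also turn auto-assign off so it doesn't fight you:)

::> storage disk option modify -node * -autoassign off
::> storage disk assign -disk 1.0.2 -owner node-01 -data true

and repeat the assign for each data partition you want on node-01.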

4. You create an AGGR with a RAID size of 21, using all the available SAS disks. This is not a great size for a RAID group, but it's supported:

 

https://library.netapp.com/ecm/ecm_download_file/ECMLP2496263

[Attachment: raid alowed.png]

 

 

[Attachment: raid szie.png]
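(Something like the below; the aggregate name is just an example, and if your version supports it you can add -simulate true first to preview the layout before committing:)

::> storage aggregate create -aggregate aggr_data_n1 -node node-01 -raidtype raid_dp -maxraidsize 21 -diskcount 21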

 

5. You attach the other shelf, also with 22 disks, allow mixed RAID sizes, assign them to the same node, and add 21 of them to the AGGR. This will create a mixed-size aggregate (as the partitioned disks are smaller), but as long as they are in two separate RAID groups you don't lose capacity:


https://community.netapp.com/t5/FAS-and-V-Series-Storage-Systems-Discussions/Mixed-disk-size-aggregates/td-p/121388
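(Again just a sketch - the exact options for forcing a new RAID group and allowing the mixed sizes vary a bit between ONTAP releases, so check the add-disks man page on yours; ONTAP will also warn about the mixed disk sizes before it proceeds:)

::> storage disk assign -all true -node node-01
::> storage aggregate add-disks -aggregate aggr_data_n1 -diskcount 21 -raidgroup new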

 

6. Now you add 2 SSDs to each shelf and create the Flash Pool (the reason I'm splitting them is that I want an identical count of SAS disks in each RAID group, and each RAID group has "different" disk sizes because of ADP).
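(The Flash Pool part would then be something along these lines; double-check the RAID type option for the cache tier on your release, as I haven't verified this on a 2650:)

::> storage aggregate modify -aggregate aggr_data_n1 -hybrid-enabled true
::> storage aggregate add-disks -aggregate aggr_data_n1 -disktype SSD -diskcount 4 -raidtype raid4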

 

 

 

 

 

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK