ONTAP Discussions

FAS2552 Hybrid with additional disk shelf optimal aggregate structure

vlsunetapp

Hello, ONTAP gurus!

Please help me find the best layout for this equipment and workload:

- the original FAS2552 dual-controller (DS2246 chassis) with 4x 400GB SSDs and 20x 1TB SAS drives - shelf 1.0

- a later-added additional disk shelf DS2246 with 24x 1TB SAS drives - shelf 1.1.

 

The main goal is to get maximum storage flexibility and feature usage (Flash Pool) for a small data center with the following workloads:

- user data access via CIFS/NFS

- a small virtual cluster of 2-4 Hyper-V nodes with SMB 3 VM data storage.

 

This is the current ONTAP 9.8 configuration recommendation and system state when using System Manager/ONTAP auto-configuration:

SSDs 1.0.0-1.0.3 - untouched by the recommendation and not used at all, though we want to use them for Flash Pool caching of user data access;

HDDs 1.0.4-1.0.23 - partitioned to hold the node 1/node 2 root aggregates and to form raid group 0 of each node's data aggregate;

HDDs 1.1.0-1.1.19 - assigned equally to each node and form raid group 1 of each node's data aggregate.

Two aggregates were auto-created, one per node, each with two raid groups - see picture.

[image: vlsunetapp_0-1636240231714.png]

We have 4 unused SSDs and 4 HDDs reserved as spares (hm, why not 2?).

 

Each created data aggregate on node 1 and node 2 has two RAID groups: rg0 (10 partitions on shelf 1.0) and rg1 (10 drives on shelf 1.1).
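(If it helps, this is roughly how the layout can be checked from the clustershell - the aggregate name below is just a placeholder for whatever System Manager generated:

::> storage aggregate show-status -aggregate <auto_created_aggr>
::> storage aggregate show-spare-disks

The first command lists rg0/rg1 with their partitions/drives, the second shows what is left as spare per node.)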

 

The auto-partition function says it cannot recommend what to do with the SSDs and has created a total of two new data aggregates (one per node).

 

My questions are:
1. Can we manually recreate the aggregate configuration as follows without conflicting with optimal ONTAP recommendations (rough CLI sketch after the question list):
- create an SSD storage pool using the 4 SSDs;
- create two separate aggregates per node, one for user data access and one for other data center loads:
aggr_NodeA_01 - with SSD Flash Pool caching for user data access via CIFS/NFS - 10 partitions on shelf 1.0;
aggr_NodeA_02 - Hyper-V VM storage with SMB 3 access - 10 drives on shelf 1.1;
aggr_NodeB_01 - with SSD Flash Pool caching for user data access via CIFS/NFS - 10 partitions on shelf 1.0;
aggr_NodeB_02 - Hyper-V/SQL etc. storage for data center loads (NAS or SAN) - 10 drives on shelf 1.1.
2. Does the configuration in (1) conflict with optimal NetApp raid group sizes or create any performance issues?
3. With 4 aggregates we gain some flexibility to turn Flash Pool on/off for the aggregates on shelf 1.0 (an aggregate has to be recreated to turn Flash Pool off, so the flexibility is really the ability to move volumes) - is that right?
4. Coming from traditional RAID controllers, I want to know how a data volume is stored on the default-created aggregate consisting of two RAID-DP groups, rg0 (10 partitions) and rg1 (10 drives) - does a volume on such an aggregate use only one raid group or both?
5. Is there any chance to use two of the spares for data instead?
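As a rough sketch of question 1 from the clustershell (the aggregate names are the ones above, "sp1" is just a made-up storage pool name, and the disk counts/selection would of course have to be checked against the actual spares and partitions):

::> storage pool create -storage-pool sp1 -disk-count 4
::> storage aggregate create -aggregate aggr_NodeA_01 -node <NodeA> -diskcount 10
::> storage aggregate create -aggregate aggr_NodeA_02 -node <NodeA> -diskcount 10
(the same two aggregate creates repeated for NodeB)
::> storage aggregate modify -aggregate aggr_NodeA_01 -hybrid-enabled true
::> storage aggregate add-disks -aggregate aggr_NodeA_01 -storage-pool sp1 -allocation-units 1

Is something like this the supported way to do it?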

 

Thanks for any help!

 

 

3 REPLIES

Fabian1993

Hi,

my suggestion would be to use your 20 internal disks (which are using ADPv1) in one aggregate; the max RG size is 24, so you can set up the aggregate with 18/19 disks (if you leave 2 spare disks, you can use the Disk Maintenance Center). From the shelf I would also create only one aggregate, with 22/23 disks (again, if you leave 2 spare disks, you can use the Disk Maintenance Center). That gives you Data_Aggr1_Owner-Node01 and Data_Aggr2_Owner-Node02, and with that you have the best capacity/performance mix. With the 4x SSDs you can set up a storage pool with allocation units; then you stay flexible.
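A rough CLI sketch of that idea (aggregate and storage pool names are only examples, and the disk counts depend on how many spares you want to keep):

::> storage aggregate create -aggregate Data_Aggr1_Node01 -node <node01> -diskcount 18
::> storage aggregate create -aggregate Data_Aggr2_Node02 -node <node02> -diskcount 22
::> storage pool create -storage-pool sp1 -disk-count 4
::> storage aggregate modify -aggregate Data_Aggr1_Node01 -hybrid-enabled true
::> storage aggregate add-disks -aggregate Data_Aggr1_Node01 -storage-pool sp1 -allocation-units 1

The SSD storage pool is carved into allocation units (half of them owned by each node of the HA pair by default), and you add them only to the aggregates that should get the Flash Pool cache - that is where the flexibility comes from.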

 

vlsunetapp

Thank you, Fabian1993. That is a good alternative to the layout auto-suggested by ONTAP.

Why do you expect the best performance? Is that because we have one raid group per aggregate? Since we lose 4 drives to spares in both layouts, I guess the main factor in choosing between the original and your configuration is performance. Another factor is how to handle the potential addition of one more disk shelf in the future.

Any idea how ONTAP manages volumes on aggregates created from two raid groups: are volumes placed on one raid group within the aggregate, or does ONTAP use both raid groups per volume?

More answers, more questions:

Why is the recommended max RG size 24 when we only use 18/19 drives? Does it determine how data is initially placed on RAID-DP, or does it just limit the maximum raid group size?

How are spares reserved for each cluster node - at the aggregate level or at the node level, i.e. shared by all of a node's aggregates?

 

Thanks again for your reply.

 

paul_stejskal

1) For best performance, you want as many data drives/partitions as possible.

2) A new shelf will likely form a new raid group since it won't have ADP.

3) RGs are only for grouping disks together. They are seen as one logical bucket of storage to volumes. Data is spread evenly across ALL data drives in the aggregate regardless of RG (except for the SSDs in a Flash Pool).

4) Because you have spares.

5) Depending on system size, it is usually 1-2 spares per disk size per node, or more.
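If you want to see exactly what is being held as spare per node, these standard commands show it:

::> storage aggregate show-spare-disks
::> storage disk show -container-type spare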
