ONTAP Hardware

FAS3220 HA + 3x DS2246: best practice for aggr and RG

Alfs29

Hello,

 

Given:

1x FAS3220 HA, 7-mode, 8.2.5

3x DS2246

48x 600GB SAS

4x 400GB SSD

 

(the 3rd shelf with 24x SAS is arriving as I write this)

 

Current HDD config: 

Contr-High:

shelf-1: 2x RAID-4 aggr0 + 2x spare

shelf-2: 3x RAID-4 aggr1-SSD + 1x spare

 

Contr-Low:

shelf-2: 2x RAID-4 aggr0; 16x RAID-DP RG=16 DATA-AGGR + 2x spare

 

So in text form ... one shelf fully populated with aggr0 + the small 4-SSD aggr + the data aggr, AND a second shelf with 4 drives for the second controller's aggr0.

 

 

What I'm trying to do:

I'm planning to add another shelf with 24x 600GB SAS drives.

 

Questions:

1) Config Advisor whines about SAS and SSD being in the same shelf ... OK, once I have 3 shelves I can move the SSDs to an SSD-only shelf. Is that really needed, or would it be even better to have a separate stack of shelves just for the SSDs? The SSD aggregate belongs to the controller which does not own the data aggregate.

2) How best to organise 2 shelves with 24x 600GB SAS drives each? Each of those shelves must contain an aggr0 and some spares.

Is it a good idea to have 2 shelves with 2 drives (each) for aggr0 (RAID-4) + 20 (each) data drives (RAID-DP) and 2 (each) spares?

20 RG1 + 20 RG2 data drives joined in DATA-AGGR. The DATA-AGGR belongs to the controller which doesn't own the SSD aggregate.

Yes, I know that the recommended/default RG size is 16 ....
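For reference, this is roughly how that 20-disk RG size would be declared at creation time in 7-mode; the aggregate name and disk count are just placeholders for what I described:

aggr create data_aggr -t raid_dp -r 20 40
aggr status -r data_aggr

The -r option pins the RAID group size, so the 40 disks would land as two 20-disk RAID-DP groups, and aggr status -r shows the resulting layout.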

 

7 REPLIES

AlexDawson

Best practice is to have SSDs on a separate stack, i.e. a shelf with its own connections back to the controller. If you don't have sufficient connections, or shelves, it's not a big deal, especially as it is only 3 active SSDs, and the warning can be ignored.

 

As you're running 7-mode, there is no absolute requirement for a dedicated root aggregate, so I would just have one aggr on each controller, one all SSD400 and one all SAS600, and I'd go for RAID-DP with no spares instead of RAID-4 with one spare. That gives the SSD node ~800GB of capacity on SSD, and the SAS node an aggr of 2x 23-disk RAID-DP RGs, i.e. 42x 600GB (~23TB) of capacity with still two spares. Once your third shelf arrives, I'd consider putting it on the SSD node with a 22-disk RAID-DP aggr + 2 spares and moving the root vol onto it.
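In command form, a rough sketch of the SAS side (the aggregate name is illustrative; verify disk ownership and availability with sysconfig -r first):

aggr create aggr_sas -t raid_dp -r 23 46
aggr status -r aggr_sas

That creates one aggregate of two 23-disk RAID-DP groups from 46 disks, leaving the last two disks as spares; aggr status -r confirms the resulting RG layout.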

 

Hope this helps!

Alfs29

@AlexDawson wrote:

 

As you're running 7-mode, there is no absolute requirement for a dedicated root aggregate, so I would just have one aggr on each controller, one all SSD400 and one all SAS600, and I'd go for RAID-DP with no spares instead of RAID-4 with one spare. That gives the SSD node ~800GB of capacity on SSD, and the SAS node an aggr of 2x 23-disk RAID-DP RGs, i.e. 42x 600GB (~23TB) of capacity with still two spares. Once your third shelf arrives, I'd consider putting it on the SSD node with a 22-disk RAID-DP aggr + 2 spares and moving the root vol onto it.

 


This will not work!

Each controller needs a root aggr! At least an aggr with the root vol on it. You propose 1 SSD aggr and 1 SAS600 aggr (2x 23 HDD); where will the root vol for the other controller be?

Wasting the SSD aggr on a root aggr is not a good idea, I guess 🙂

 

We agree on a separate shelf for the 4-SSD aggr, 3+1.

 

How should I split the 2x 24 600 SAS drives?

Maybe on each shelf: a root aggr of 2x SAS600 in RAID-4, so each shelf contains the root aggr for one controller + 1 spare (3 HDDs in total).

And then I make a data aggr from the remaining SAS600 of both shelves, RAID-4 or RAID-DP, consisting of sh1-rg1 (20x SAS600) + sh2-rg2 (20x SAS600), + 1 more spare on each shelf.
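In command form, the per-shelf root aggr part of that would be something like this (the name is hypothetical):

aggr create root1_aggr -t raid4 2

i.e. a 2-disk RAID-4 aggr (1 data + 1 parity) on one shelf, and the same again as root2_aggr for the other controller on the other shelf.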

 

Of course I could maximize the data aggr by adding the root vol of one controller to it, but that wouldn't look nice or equal on both controllers.

Nowadays used 600 SAS shelves come cheap from the ghetto shop 😉

 

I guess in an ideal world I would need 8 shelves, each containing 2 drives from every RAID-DP aggregate (assuming an RG of 16 including parity), with the root aggrs sitting in different shelves, of course. Then anything can fail and the system will still run 🙂

It is like splitting aggregates vertically, not horizontally ... if you understand what I mean.

aborzenkov

@Alfs29 wrote:

This will not work!


It will.

 


@Alfs29 wrote:

Each controller needs a root aggr!


You do know what 7-Mode is, don't you?

AlexDawson

@Alfs29 wrote:

@AlexDawson wrote:

 

As you're running 7-mode, there is no absolute requirement for a dedicated root aggregate, so I would just have one aggr on each controller, one all SSD400 and one all SAS600, and I'd go for RAID-DP with no spares instead of RAID-4 with one spare. That gives the SSD node ~800GB of capacity on SSD, and the SAS node an aggr of 2x 23-disk RAID-DP RGs, i.e. 42x 600GB (~23TB) of capacity with still two spares. Once your third shelf arrives, I'd consider putting it on the SSD node with a 22-disk RAID-DP aggr + 2 spares and moving the root vol onto it.

 


This will not work!

Each controller needs a root aggr! At least an aggr with the root vol on it. You propose 1 SSD aggr and 1 SAS600 aggr (2x 23 HDD); where will the root vol for the other controller be?

 

I guess in an ideal world I would need 8 shelves, each containing 2 drives from every RAID-DP aggregate (assuming an RG of 16 including parity), with the root aggrs sitting in different shelves, of course. Then anything can fail and the system will still run 🙂

It is like splitting aggregates vertically, not horizontally ... if you understand what I mean.


If you closely re-read my original statement, I specified that a dedicated root aggregate is not needed for 7-mode, and what I proposed will indeed work. As you rightly picked up, you need an aggregate flagged root and holding vol0 for the controller, yes, but on a small or non-prod 7-mode system it does not have to be dedicated. If you have many 600GB DS2246 shelves, then you can follow best practice and have dedicated root aggregates, it is true. But I wouldn't do that at the expense of having to go RAID-4; we do support it, though, so if your risk analysis says that is the best choice for your workloads, please feel free to do so.
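For what it's worth, relocating the root vol once the third shelf arrives is only a handful of commands in 7-mode. A sketch, with the aggregate/volume names and the size as placeholders (check the documented minimum root volume size for your platform):

aggr create aggr_new -t raid_dp -r 22 22
vol create vol0new aggr_new 250g
ndmpd on
ndmpcopy /vol/vol0 /vol/vol0new
vol options vol0new root
reboot

After the reboot the node runs from vol0new, and the old vol0 can be offlined and destroyed.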

 

Best practices always need to be weighed against the downsides of following them. For example, out of the hundreds of HA pairs I've dealt with, I've only ever seen one with vertically striped RAID groups, across ~20 DS14 shelves. And we would also say it is best practice not to buy our equipment on the grey market 🙂

Alfs29

Yes, I more or less 😄 know what 7-mode is, and yes, I know and agree that having a separate root aggr is NOT REQUIRED, but it is very nice to have, to be much more flexible afterwards.

 

When I said "it will not work", I meant: where do you put the root vol/aggr for the OTHER controller if all drives belong to contr-1???

 

 

 

Grey market ... I know, and I would love to have support etc. ... I simply cannot afford "fresh, straight from the farmer's field" stuff ...

AND not everything that glitters is gold!

This Monday I inquired with our NetApp partners Proact and iPro about NVRAM battery price & availability ... so far neither of them has even answered.

 

AlexDawson

I originally said "one aggr on each controller, one all SSD400 and one all SAS600", so you have two controllers; let's call them A and B.

 

On controller A, all SSD; on controller B, all SAS.

 

For the NVRAM battery for a FAS3220, you are after our part 111-00750. I can't comment on why you might not be getting a call back.

Alfs29

Yes, and I said that I'm not wasting SSD space on the root vol; the SSDs cost more than another DS2246 full of 600 SAS.
On the other hand, maybe it is a good idea to get 2 shelves, or 1 shelf, to put both root aggrs on a separate shelf, and then have no problem adding full 24-drive shelves (RG 23) afterwards ... followed by reallocate -f <vol>, of course.
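A sketch of that reallocate pass after growing the aggregate (the volume name is just an example):

reallocate start -f /vol/data_vol
reallocate status /vol/data_vol

The -f flag forces a full one-time rebalance of the volume across the old and new disks; reallocate status shows progress.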

 

So in total it will look like: Shelf1 (root1_aggr (3+1 SAS600), root2_aggr (3+1 SAS600), ssd_aggr (3+1 SSD400)), Shelf2 (data_aggr RG1 (23x SAS600) + spare), Shelf3 (data_aggr RG2 (23x SAS600) + spare), and so on ... shelf4, shelfX.
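Growing that would then be one step per new shelf, each shelf becoming its own RG (aggregate name illustrative):

aggr options data_aggr raidsize 23
aggr add data_aggr -g new 23

The -g new flag puts the 23 added disks into a fresh RAID group instead of topping up an existing one.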



What are the aggr, vol, and LUN size limits with a 3220 and 8.2.5 7-mode?

Is there any reasonable cause (apart from those limits) to have more than 1 aggregate for data?

 


How often in your practice have you seen failed shelves?
I mean really failed ... with multipath cabling from 2 chips on each controller being in place, etc. ...

Thanks for the PN. I found 271-00027 ... probably an older version? Anyway, it looks like there are 3x 18650 Li-ion cells inside. $10 Sanyo cells + MacGyver-certified tape afterwards should do the trick. As you can see, my black magic on NetApp stuff has high standards ... no $1 China cells 😉 Buying another 6-year-old original battery pack from the ghetto shop is a worse solution than replacing the cells in it.
BTW, the battery charge bug is still present with SP firmware 1.4.1 ... it was promised to be fixed in 1.3.1.
