setup and reallocation

RAFWOZ

I'm relatively new to NetApp; most of my storage experience comes from working for a decade with the HP MSA series, so obviously there are a lot of differences in concepts as well as in how things work. We bought a few NetApp FAS2240s. I tried to learn by installing the first one myself and engaged a consultant from a local NetApp partner company to validate our installation and tell us what we did wrong and why. It turned out we did everything OK for what we wanted to use it for, and we ended up changing nothing in our configuration.

A year later we found we were running into performance issues in our branch locations, where we only have a FAS2240 with a single shelf of 24 3TB disks. The performance issues come down to the speed of the disks and the IOPS they can provide. One of the mistakes we made was distributing the 24 disks evenly between both filers: we ended up with 12 disks owned by each filer, 3 of the 12 used for the system aggregate, 8 of the 12 for the data aggregate, and the remaining 1 disk as a spare. In this config we limited ourselves to 8 data disks and whatever IOPS they can provide. So now we need to delete all volumes and the aggregate owned by Filer2 (no problem with this, since in the relatively short time since we bought this FAS2240 we have no data there yet, or only data that can be moved), reassign the disks to Filer1, add them to the data aggregate, and reallocate.

 

Here is what I have done so far:

- deleted the LUNs, volumes, and data aggregate owned by Filer2

- reassigned 9 disks (the 8 previously in the data aggregate plus the spare) to Filer1

- the above makes the data aggregate owned by Filer1 bigger: it is now 17 disks, which should help with IOPS

- ran reallocate start -p -f /vol/datavol ... BTW, this process takes forever, and I don't see many options to monitor it other than aggr status -r (it takes days); see the command sketch after this list
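
For completeness, here is roughly what I ran, as a 7-Mode CLI sketch; treat it as an outline only, since the LUN path, volume/aggregate names, and disk IDs below are placeholders rather than our real ones:

filer2> lun destroy /vol/datavol2/lun0      # repeat for each LUN
filer2> vol offline datavol2
filer2> vol destroy datavol2
filer2> aggr offline aggr1_f2
filer2> aggr destroy aggr1_f2
filer2> disk assign 0a.16 -s unowned -f     # release ownership; repeat for all 9 disks
filer1> disk assign 0a.16                   # claim each disk on Filer1
filer1> aggr add aggr1 9                    # grow the data aggregate by the 9 disks
filer1> reallocate start -p -f /vol/datavol
filer1> reallocate status -v                # reports the state of running reallocation scans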

Here is what I cannot do, and this is my first question:

It looks like I cannot take the system vol owned by Filer2 offline, and because of that I cannot add those 3 remaining disks to the data aggregate owned by Filer1 to make it a 20-disk aggregate. Does Filer2, as a member of an HA pair, need a system vol? I have another NetApp with two shelves which is configured differently: one shelf is owned by Filer1 and the other by Filer2, so each shelf was configured from day one as 3 (sys) + 20 (data) + 1 (spare). But in the case where I have only one shelf, do I need to give up 3 disks for Filer2's system aggregate/volume, or can I have 3 (sys) + 20 (data) + 1 (spare) all owned by Filer1, with Filer2 still part of the HA pair and able to take over when Filer1 dies?

I contacted NetApp support, but they keep telling me they cannot advise and that we should involve our SE and account rep to evaluate what we need in order to avoid performance issues like this. I have no problem doing that, but I'm really tired of not being able to get answers to what I think are basic questions about NetApp architecture: how it works and what it can and cannot do. I need to understand what is doable, so that if I engage a consulting company to do something we will pay for, I can tell whether they did it right and whether there is a better way to do it. The problem with all consulting services is that someone comes in and does something to the best of their knowledge, which does not necessarily mean the best it can be done for future growth; since I don't know any better, how am I supposed to know it is the "best way"? Here are some questions I asked NetApp support without getting straight answers; can someone help me understand how this works?

 

  1. How many shelves or drives can we have in a single aggregate?
  2. It is not recommended to mix drive types. Is there a different limit on the number of drives in one aggregate based on the type of drive, i.e. SATA vs. SAS vs. SSD?
  3. As far as I know I can have a max of 28 SAS drives in a RAID-DP group and 20 SATA drives in a RAID-DP group. Let's assume I purchase 5 shelves of 24 SATA drives each. Why would I want to configure the 5 shelves as a single aggregate with 5 RAID-DP groups (this goes back to question 1; I'm not sure how many shelves I can have in an aggregate) versus 5 aggregates with one RAID-DP group each? Would a single aggregate help with performance at all? Correct me if I'm wrong, but I think it would not, since the total IOPS for SATA would be 75 x the number of drives in the RAID group.
  4. The above question triggers the following: if we have a RAID-DP group built of 20 SATA drives and the load on the volumes exceeds the total IOPS this config provides, what would be the recommended solution? Get another shelf and distribute the load? Replace this shelf with faster drives? Can a volume span across RAID groups?


1 ACCEPTED SOLUTION

rwelshman

Every filer has to have a system vol (vol0), and the aggregate that holds it must have at least 3 disks. So Filer2 would have to keep 3 disks / one volume, even if it is only going to be used as a "passive" failover node.
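
If you want to confirm what that root footprint looks like on Filer2, something like this works in the 7-Mode CLI (aggr0 is only the usual default root aggregate name, so substitute your own):

filer2> vol status vol0        # shows the root volume and its containing aggregate
filer2> aggr status -r aggr0   # shows the RAID layout; a minimal RAID-DP aggregate is 2 parity disks + 1 data disk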

 

The maximum size of an aggregate depends on the filer model and on whether you are using 32-bit or 64-bit aggregates; you can find the maximums in the hardware specifications and the manuals for your Data ONTAP version.
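
As a rough example (assuming the FAS2240 is on Data ONTAP 8.x 7-Mode, where 64-bit aggregates are supported; the aggregate name, disk count, and raid group size here are illustrative only):

filer1> aggr create aggr_data -B 64 -t raid_dp -r 20 20   # 64-bit, RAID-DP, one 20-disk raid group picked from spares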

An aggregate should contain only SATA or only SAS disks, not mixed. You can create an aggr of SSDs, or use hybrid aggregates to mix SSD/SATA or SSD/SAS, but the SSDs would be used as cache only in that situation.
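
From memory, on 8.1+ 7-Mode a hybrid aggregate is enabled per aggregate and the SSDs then go in as their own raid group, roughly like this (option and flag names quoted from memory, so verify them against the man pages for your release):

filer1> aggr options aggr_data hybrid_enabled on   # mark the aggregate as hybrid (Flash Pool) capable
filer1> aggr add aggr_data -T SSD 4                # add 4 SSDs; they act as cache, not capacity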

The IOPS available in an aggregate come from all disks in all raid groups in that aggregate, since data for every volume in the aggregate is written across any and all of its raid groups. So the more disks you can put in a single aggregate, the better performance you can get for the volumes in that aggregate.
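
To put rough numbers on it, using the ~75 IOPS per SATA spindle figure from the question: a 20-disk RAID-DP group has 18 data spindles, so about 18 x 75 = 1,350 back-end IOPS. Five such raid groups in one aggregate let every volume draw on roughly 5 x 1,350 = 6,750 IOPS, while five separate aggregates cap each volume at about 1,350. (Crude spindle math only; it ignores controller cache, read/write mix, and parity overhead.)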

 

