2011-12-28 02:23 AM
I think it's a beginner question, but I haven't found an answer yet.
We've got a new FAS2040 (active/active), and it's our first NetApp system, so we don't have a lot of experience with these systems.
With our old (non-NetApp) systems it was easy to calculate the number of disks and enclosures once I knew the required capacity for the shares and LUNs; e.g. with RAID 5, one disk for parity and so on.
Here in the new world it's not so easy, I think. When I look at our shelf I can see:
- 22 disks ("Disk Count")
- each disk with "Disk Size" = 546.88 GB
- 2 disks for RAID-DP (or so I thought?)
- 20 disks for data (20 * 546.88 GB ≈ 10.68 TB usable)?
But when I look at OnCommand => Storage => Aggregates, I only see "Total Space" = 8.22 TB! Where is the rest?
"Disk Layout" shows 2 disks with "RAID type" = "dparity" plus 2 disks with "RAID type" = "parity". Why? Does that mean 4 disks for RAID-DP and only 18 disks for data (18 * 546.88 GB ≈ 9.6 TB)?
How is it possible to calculate the usable capacity correctly? Is there a formula for this?
Solved!
2011-12-28 08:00 AM
Based on what you have said above and making some assumptions, this is what may have happened.
The default RAID group size for SAS drives (I'm assuming SAS) is 16 disks (14 data + 2 parity in ONTAP 8.x). This means two RAID groups would have been created when you used all 22 drives, hence the 4 parity disks. 18 data disks * 546.88 GB = 9,843.84 GB; minus approx. 10% WAFL overhead = 8,859.45 GB; minus 5% for the aggregate snap reserve = 8,416.48 GB; divided by 1024 to convert to TB = 8.22 TB.
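As a rough sketch of that arithmetic (the 16-disk default RAID group size, ~10% WAFL overhead, and 5% snap reserve are assumptions based on ONTAP 8.x defaults, not values read from your system):

```python
import math

def usable_tb(total_disks, disk_size_gb,
              raid_group_size=16, parity_per_group=2,
              wafl_overhead=0.10, snap_reserve=0.05):
    """Estimate usable aggregate capacity in TB (binary, i.e. divided by 1024)."""
    raid_groups = math.ceil(total_disks / raid_group_size)
    data_disks = total_disks - raid_groups * parity_per_group
    raw_gb = data_disks * disk_size_gb            # right-sized data capacity
    after_wafl = raw_gb * (1 - wafl_overhead)     # minus ~10% WAFL overhead
    after_reserve = after_wafl * (1 - snap_reserve)  # minus 5% aggr snap reserve
    return after_reserve / 1024

print(round(usable_tb(22, 546.88), 2))  # → 8.22, matching OnCommand's "Total Space"
```

Note this is only an estimate: the real right-sized disk capacity and the exact WAFL overhead depend on the drive type and ONTAP version.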
If you have used all your disks then you have no hot spare disks.
You say that this is an active/active configuration, but all 22 disks appear to have been used by one controller; is the second controller not in use, or does it have access to another disk shelf?
2011-12-29 11:00 AM
NetApp SEs have access to a handy-dandy spreadsheet to figure this out; I would ask yours for it. By default the root aggregate ships configured with RAID-DP, often in a 4-disk RAID group for each head: that is, two data disks and two parity drives. On a small system like a 2040 we generally just add disks to the root aggregate at first and may create other aggregates with new disk shelves; in our case, as mirror destinations, we have needed to create other aggregates that are 64-bit.
On a 24-disk 600 GB DS4243 we get 10 TB usable.
I am not sure what System Manager is reporting in your case. I generally use the 60% rule for usable space on RAID-DP aggregates, but it can be better than that if you have larger RAID groups than the default: ~60% x marketing size = usable space. There are some tricks to get a little more out. By default 5% of the aggregate is set aside as the aggregate snapshot reserve. Unless you are using synchronous mirroring or plexing your aggregate, this 5% is wasted space IMO, and you can delete the snap reserve for the aggregate and unschedule the aggregate snapshots: "snap reserve -A aggr0 0" and "snap sched -A aggr0 0 0 0". "df -Ag" will tell you exactly how many GB you really have.
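To illustrate how much that 5% reserve costs on an 18-data-disk aggregate like the one discussed in this thread (the ~10% WAFL overhead here is an assumed figure, not a measured one):

```python
# Space consumed by the 5% aggregate snap reserve on an aggregate with
# 18 data disks of 546.88 GB each. The 10% WAFL overhead is an assumption
# carried over from the capacity calculation earlier in this thread.
raw_gb = 18 * 546.88                    # 9,843.84 GB of data disks
after_wafl = raw_gb * 0.90              # minus ~10% WAFL overhead
with_reserve = after_wafl * 0.95        # 5% snap reserve still in place
reclaimed = after_wafl - with_reserve   # freed by "snap reserve -A aggr0 0"
print(round(reclaimed, 2))              # → 442.97 (GB reclaimed)
```

Roughly 443 GB back on a system this size, which is why the reserve is worth reviewing if you are not mirroring the aggregate.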
2011-12-30 04:04 AM
I had the same problem, but I use a mental picture that I find helpful.
4 parity disks in one aggregate? In my opinion that's possible when you have more than one RAID group in the aggregate, or when the aggregate is mirrored.
You can check this on the CLI: # aggr status -r aggr0
Regards and a happy new year
2012-01-10 07:01 AM
Thank you very much for your answer. That's it.
With the tip from aborzenkow and your explanation of the RAID group size, I think I understand it now.
The system has two shelves, with 2 hot spares in each shelf, and each controller uses 11 disks from each shelf for one aggregate. That's why the aggregate needs 4 disks for parity and "only" 18 disks for data. The rest can be calculated with the formula in your answer or in KB ID 3011274.
Thanks a lot.