ONTAP Discussions
Hi,
we have a FAS3210 with 2x DS4243 shelves. One is a SAS shelf with 15K drives, which is not relevant here.
The other is a SATA shelf with 24x 7.2K 1TB drives; 22 drives are in aggr0 (raidsize 14) with 2 spares. We are getting a new DS4243 shelf with 24x 7.2K 2TB SATA drives.
We would like to put the 1TB and 2TB drives together in aggr0: expand the existing two raid groups with the two 1TB spares and add two new raid groups of 2TB drives to the aggregate.
Is this a good idea, or is it a non-recommended configuration?
Thanks for any answers.
Hi and welcome to the Community!
You really shouldn't mix different capacities in the same aggregate - in this case all 2TB drives would get downsized to 1TB, so a big waste of capacity!
Regards,
Radek
It isn't best practice to mix... but it is supported, whether in the same raid group or a different raid group. The 2TB disks will get full 2TB utilization... they are downsized to 1TB if they are used as a spare replacement, though. If 2TB is added to the same raid group as 1TB, then it will use a 2TB drive as parity (so that drive's extra capacity is lost), but subsequent 2TB data drives will get 2TB utilization... uneven striping, though, and not a best practice. We always recommend a new aggregate for different drive sizes, but in some cases that isn't an option.
The problem with using your 1TB spares is that your 1TB raid groups will then have to fall back on 2TB drives as spares, and those drives get downsized when used... the 2TB raid groups in the aggregate would get their full size, but it would be an uneven aggregate for WAFL.
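If you do go ahead with a mixed aggr0 on the FAS3210, the cleaner way is to keep the 2TB drives in their own raid groups rather than growing the 1TB groups with them. Roughly like this in 7-Mode syntax - the disk names below are just placeholders, not your real disk IDs, so check your own spares first and use -n to preview the selection before committing:

aggr status -s                                                     # confirm the new 2TB drives show up as spares
aggr add aggr0 -n -g new -d <2TB.disk1> <2TB.disk2> <2TB.disk3>    # preview only: shows what would be added
aggr add aggr0 -g new -d <2TB.disk1> <2TB.disk2> <2TB.disk3>       # add them into a new raid group

That keeps the 1TB and 2TB drives in separate raid groups inside the same aggregate, which is the "supported but not best practice" layout described above.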
If 2TB is added to the same raid group as 1TB, then it will use a 2TB drive as parity
Do you mean that if I add a 2TB drive to an existing 1TB raid group, it will automatically be made a parity drive? I vaguely recall something like this being discussed already, and I think it works (assuming it really works) only for RAID4, not for RAID_DP.
Do you have any pointers (I remember there was a BURT dealing with this, maybe a KB article or anything)?
Yes, for RAID_DP too. But not both parity drives, from what I remember. Just one parity drive is automatically swapped from 1TB to 2TB. Then future 2TB additions are full 2TB data disks.
How is it supposed to work with one parity drive being 2TB and the second parity being 1TB? RAID_1.5? ☺
It was odd, but there was something about the diagonal parity working without the larger drive. It has been a while, but I expected the first two larger drives to be swapped in for parity, then the third and later ones to become data drives. But I remember only one drive swapping... Need to test in a lab, or I could confirm on a VSIM. When mixing sizes, creating a new raid group is best, but it's still not a best practice.
Just did my (late) homework - this is what the Storage Subsystem Technical FAQ says about it:
"CAN DIFFERENT CAPACITY DISK DRIVES OF THE SAME DISK TYPE BE ADDED TO THE SAME AGGREGATE?
Yes. NetApp recommends adding disk drives into RAID groups of like-capacity disk drives in order to avoid disk downsizing.
CAN DIFFERENT CAPACITY DISK DRIVES OF THE SAME DISK TYPE BE COMBINED IN THE SAME RAID GROUP?
Yes. Disk drives of larger capacity than the smallest capacity disk drive in the existing RAID group will be downsized when added to an existing RAID group. If you are forming a new RAID group out of mixed capacity disk drives, then only disk drives selected to be the parity drives will not be downsized.
It is possible to add smaller capacity disk drives into an existing RAID group of larger capacity drives. The larger capacity drives in this case will not be downsized as they are already being used to store user data. This is a highly suboptimal configuration, and NetApp does not recommend adding smaller capacity drives into RAID groups with larger capacity drives."
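An easy way to check whether a given drive actually got downsized is to compare its Used and Phys columns per disk:

aggr status -r aggr0      # per-disk Used (right-sized) vs Phys (raw) capacity for one aggregate
sysconfig -r              # the same view for every disk, spares included

A data or parity drive whose Used value sits far below its Phys value has been downsized to match the smallest drive in its raid group.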
Maybe Jay can clarify this, or someone can test in a lab with mixed drives. It should swap the parity and then utilize the larger drive size on the data disks... definitely suboptimal.
I did a test in my VSIM with aggr1 built from 24x 1GB drives and added 2x 9GB drives... it gave the downsize warning (I expected it to convert at least one parity drive... or swap the existing parity for a larger data drive) but didn't actually do it... and used the full size of the 9GB drives as data drives. I don't see how this would work while leaving the 1GB parity drive, though... could be a bug or just a VSIM thing. I need to get on a live system and try the same thing.
vsim-7m-3*> aggr status -r aggr1
Aggregate aggr1 (online, raid_dp) (block checksums)
Plex /aggr1/plex0 (online, normal, active, pool0)
RAID group /aggr1/plex0/rg0 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity v4.16 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
parity v5.19 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.17 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.20 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.18 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.21 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.19 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.22 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.20 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.24 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.21 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.25 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.22 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.26 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.24 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.27 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.25 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.28 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.26 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.29 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.27 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.32 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.28 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.29 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
vsim-7m-3*> df -Ah aggr1
Aggregate total used avail capacity
aggr1 19GB 1388KB 19GB 0%
aggr1/.snapshot 0TB 0TB 0TB ---%
vsim-7m-3*> aggr add aggr1 -d v6.32 v7.32
WARNING! One or more added disks will be downsized.
Are you sure you want to continue with aggr add? yes
Mon Sep 24 17:19:20 GMT [vsim-7m-3:raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v7.32 Shelf ? Bay ? [NETAPP VD-9000MB-FZ-520 0042] S/N [33092113] to aggregate aggr1 has completed successfully
Mon Sep 24 17:19:20 GMT [vsim-7m-3:raid.vol.disk.add.done:notice]: Addition of Disk /aggr1/plex0/rg0/v6.32 Shelf ? Bay ? [NETAPP VD-9000MB-FZ-520 0042] S/N [33991113] to aggregate aggr1 has completed successfully
Addition of 2 disks to the aggregate has completed.
vsim-7m-3*> aggr status -r aggr1
Aggregate aggr1 (online, raid_dp) (block checksums)
Plex /aggr1/plex0 (online, normal, active, pool0)
RAID group /aggr1/plex0/rg0 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity v4.16 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
parity v5.19 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.17 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.20 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.18 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.21 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.19 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.22 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.20 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.24 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.21 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.25 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.22 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.26 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.24 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.27 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.25 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.28 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.26 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.29 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.27 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v5.32 v5 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.28 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v4.29 v4 ? ? FC:B 0 FCAL 15000 1020/2089984 1027/2104448
data v6.32 v6 ? ? FC:B 0 FCAL 15000 1020/2089984 9027/18488448
data v7.32 v7 ? ? FC:B 0 FCAL 15000 1020/2089984 9027/18488448
vsim-7m-3*> df -Ah aggr1
Aggregate total used avail capacity
aggr1 21GB 1400KB 21GB 0%
aggr1/.snapshot 0TB 0TB 0TB ---%
Disks are correctly downsized. Look at the Used column.
Good second pair of eyes. I looked at the 9GB physical size first. Yes, 19GB went to 21GB, so they were downsized to 1GB. This changed at some point, but I don't know when. Per the FAQ, a new raid group would mix and use the full size, but it's a bad idea all around to mix. Jay's storage FAQs are spot on.
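For completeness, the FAQ's recommended variant of the same VSIM test would be to drop the larger disks into their own raid group instead of rg0 - sketch only, with placeholder disk names, and assuming enough large spares are around to meet the raid_dp minimum group size:

aggr add aggr1 -g new -d <9GB.disk1> <9GB.disk2> <9GB.disk3>    # new raid group of like-sized disks only
aggr status -r aggr1                                            # Used should now follow the 9GB physical size in the new group

No downsizing in that case, since every disk in the new group is the same capacity.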
Nice one. A seemingly simple question led to a thorough investigation - that's why I love forums!
And BTW: now that John's profile is gone, we have a new Number One on the charts - congrats Scott!