2011-01-24 09:10 AM
I have seen this before in older ONTAP and thought it had been fixed... but maybe it was never considered a BURT since there was an easy workaround. We just noticed this behavior again in ONTAP 8.0.1 GA.
1. Customer has existing 1TB SATA drives in aggregates with 2 spares.
2. They add 42x 2TB drives. They now have 2x 1TB spares and 42x 2TB spares.
3. They create an aggregate of 20x 2TB drives for archive volumes: "aggr create aggrname -r 20 20"
4. ONTAP chooses the 2x 1TB spares plus 18x 2TB disks for the aggr, leaving only 2TB spares. The new aggregate has a mix of 1TB and 2TB disks, and there are no 1TB spares left for the existing aggregates. So now the 1TB aggregates have to spare in 2TB drives right-sized down to 1TB... and the new aggregate doesn't have an optimal layout either.
An easy fix... We destroyed the aggregate and used the syntax "aggr create aggrname -r 20 20@1655" to force a single drive size in the aggregate, leaving spares of each drive size available. Luckily this was a new aggregate, not an add to an existing aggregate. It's not a typical PSE issue since most use "-d x.xx.xx" or "@size" in the syntax, and in System Manager you can choose the drive size. But for average users on the CLI, it would be nice if ONTAP chose a single drive size when it knows enough spares of that size are available... especially when it has 42x 2TB to choose from... instead of assuming mixed drive sizes. Not a big deal, but one that could cause a lot of hassle, since users/customers may get themselves into mixed-drive-size aggregates with this behavior, and the only way to fix it is to migrate the data and destroy the mixed-drive aggregate.
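To make the complaint concrete, here is a minimal sketch of the spare-selection policy we'd prefer: only mix drive sizes when no single size can satisfy the request. This is hypothetical illustration code, not ONTAP's actual algorithm; the function name and the fallback ordering (largest drives first) are my assumptions.

```python
# Hypothetical spare-selection policy: prefer a single drive size,
# mix sizes only when unavoidable. NOT ONTAP's actual algorithm.
from collections import Counter

def pick_spares(spares, count):
    """spares: list of spare drive sizes in GB; count: drives requested.
    Returns the list of drive sizes that would be consumed."""
    by_size = Counter(spares)
    # If any single size has enough spares, use the smallest such size
    # so larger drives stay available for future growth.
    for size in sorted(by_size):
        if by_size[size] >= count:
            return [size] * count
    # Otherwise mix, taking the largest drives first.
    picked = []
    for size in sorted(by_size, reverse=True):
        take = min(by_size[size], count - len(picked))
        picked += [size] * take
        if len(picked) == count:
            break
    return picked

# The scenario above: 2x 1TB and 42x 2TB spares, 20 drives requested.
spares = [1000] * 2 + [2000] * 42
print(pick_spares(spares, 20))  # all twenty drives are 2TB
```

With this policy the 20-drive request is satisfied entirely from the 42 matching 2TB spares, and the two 1TB spares stay available for the existing aggregates.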
Anyone else notice this behavior or have issues with it? Again, not a show stopper but have to be careful if customers use the CLI to create aggregates and they have mixed drive sizes.
2011-01-24 12:47 PM
Yep had the same problem before Christmas on a FAS3140 pair running 8.0P1. Waiting for our next DS4243 purchase in order to fix it. System was full of live data before anyone noticed what had happened. :-(
2011-01-24 10:21 PM
Folks, you two are probably in the best position to open a bug ☺ This has minor impact for a new configuration (except an additional couple of hours for zeroing disks), but for an existing system it could cost quite a bit of time and effort.
2011-01-24 11:37 PM
Good point... that's a way to help fix it. But in 8.0.1, volume data motion is for SAN volumes only; it isn't supported for NAS yet, and it assumes there are enough free drives for a new aggr to migrate to.
2011-01-24 11:49 PM
Yeah, that's the bad point here, it's not supported yet. For NAS I usually go with a quick "cifs shares -delete/-add", and if I'm fast enough most users won't realize something went offline/online. Can be a pain with locked files or VMs and all that stuff though.
2011-01-24 11:54 PM
CIFS won't survive a move of a network interface either, even with a cluster takeover or giveback... it is supposed to be fixed in SMB 2.x, where the durable file handle isn't tied to the TCP session... hopefully soon.
Also, while I am a huge MultiStore proponent, there are adoption issues with no GUI management, even in the new System Manager... hopefully that is resolved before Cluster-Mode converges with 7-Mode.
To the point of this issue: even with all the workarounds, customers with mixed drive sizes should be aware of it and use "-d" or "@size" when creating or growing aggrs so they don't get stuck. We had this attached to a case back in an earlier 7.x release and don't remember if a BURT was created, if it was fixed, or if the issue is new again in 8.0.1... time for a new BURT or KB if one doesn't exist already.
2011-02-21 11:33 AM
Good afternoon all -
On a 6080 using 2TB SAS drives, it will only let me create an aggregate with 88.8TB of usable space. That would be 112TB raw, and I'm looking to get 100TB of usable space per aggregate. I already set nosnapdir on and snap reserve to 0% on the aggregate.
Any other tricks I can try, or is this the max? This is using RAID-DP with raid group size 16. Tried it with raid group size 20 and still got the same result: 71 drives equals 88.8TB of usable space across 5 raid groups. If you have any tricks, please email me: Scott.Borden@netapp.com.
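A back-of-the-envelope check suggests 88.8TB may simply be the expected yield for 71 drives after parity and overhead, not an artificial cap. Assumptions here (not confirmed in this thread): 2TB drives right-size to 1655 GiB (the same @1655 figure used earlier in the thread), RAID-DP costs 2 parity drives per raid group, and ONTAP holds back 10% of the aggregate as WAFL reserve.

```python
# Sanity check of the 88.8TB usable figure for 71x 2TB drives.
# Assumptions: right-sized 2TB drive = 1655 GiB, RAID-DP = 2 parity
# drives per raid group, 10% WAFL reserve on the aggregate.
RIGHT_SIZED_GIB = 1655
WAFL_RESERVE = 0.10

drives = 71
raid_groups = 5                  # e.g. 4 groups of 16 + 1 group of 7
parity = raid_groups * 2         # RAID-DP: 2 parity drives per group
data_drives = drives - parity    # 61 data drives

usable_gib = data_drives * RIGHT_SIZED_GIB * (1 - WAFL_RESERVE)
usable_tib = usable_gib / 1024
print(f"{data_drives} data drives -> {usable_tib:.1f} TiB usable")
```

That works out to roughly 88.7 TiB, right in line with the 88.8TB you're seeing, so the shortfall against 100TB looks like right-sizing plus parity plus WAFL reserve rather than a tunable setting.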