ONTAP Discussions

8.0.1 GA - mixed 1TB/2TB drives in aggr create

scottgelb
5,320 Views

I have seen this before in older ONTAP releases and thought it had been fixed... but maybe it was never filed as a BURT since there is an easy workaround. We just noticed this behavior again in ONTAP 8.0.1 GA.

1.    Customer has existing 1TB SATA drives in aggregates with 2 spares.
2.    They add 42x 2TB drives.  They now have 2x 1TB spares and 42x 2TB spares.
3.    They create a 20-disk aggregate from the 2TB drives for archive volumes:    "aggr create aggrname -r 20 20"
4.    ONTAP chooses the 2x 1TB and 18x 2TB disks for the aggr… leaving only 2TB disks as spares.  The new aggregate has a mix of 1TB and 2TB disks, and there are no 1TB spares left for the existing aggregates.  So now the 1TB aggregates have to spare with 2TB drives right-sized down to 1TB... and the layout of the new aggregate is not optimal.

An easy fix... We destroyed the aggregate and used the syntax "aggr create aggrname -r 20 20@1655" to force a single drive size in the new aggregate, leaving spares of each drive size available.  Luckily this was a new aggregate, not an add to an existing aggregate.  It's not a typical PSE issue, since most of us use "-d x.xx.xx" or "@size" in the syntax, and System Manager users can pick the drive size... but for average users on the CLI, it would be nice if ONTAP chose a single drive size when enough disks of that size are available... especially when it has 42x 2TB to choose from... instead of assuming it should mix drive sizes.  Not a big deal, but one that could cause a lot of hassle, since users/customers may get themselves into mixed-drive-size aggregates with this behavior, and the only way to fix that is to migrate the data and destroy the mixed-drive aggregate.
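
For anyone hitting the same thing, the safer CLI flow looks roughly like this (the aggregate name and disk names below are made up for illustration):

   aggr status -s                                        # check spare counts and sizes first
   aggr create newaggr -r 20 20@1655                     # force 20 of the right-sized 1655GB (2TB) disks
   aggr create newaggr -r 20 -d 2a.16 2a.17 2a.18 ...    # or name the exact disks yourself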

Has anyone else noticed this behavior or had issues with it?  Again, not a show stopper, but you have to be careful if customers use the CLI to create aggregates and they have mixed drive sizes.

14 REPLIES

BrendonHiggins
5,258 Views

Yep, had the same problem before Christmas on a FAS3140 pair running 8.0P1.  Waiting for our next DS4243 purchase in order to fix it.  The system was full of live data before anyone noticed what had happened.  😞

Bren

aborzenkov
5,258 Views

Folks, you two are probably in the best position to open a bug ☺  This has minor impact for a new configuration (apart from an additional couple of hours for zeroing disks), but for an existing system it could cost quite a bit of time and effort.

borden
5,258 Views

Good afternoon all -

On a 6080 using 2TB SAS drives it will only let me create an aggregate with 88.8TB of usable space.  That would be 112TB raw, and I'm looking to have 100TB of usable space per aggregate.  I have already set aggr nosnapdir on and the aggregate snap reserve to 0%.

Are there any other tricks I can try, or is this the max?  This is using RAID-DP with a raid group size of 16.  Tried it with a raid group size of 20 and still got the same result: 71 drives equal 88.8TB of usable space across 5 raid groups.  Again, if there are any tricks I can try, please email me.  Scott.Borden@netapp.com.

Scott

scottgelb
4,828 Views

With the 100TB raw aggregate limit (based on right-sized disk capacity), regardless of the raid layout, the maximum number of 2TB data disks is 61.  Each data drive is about 1482GB after right-sizing, the 10% WAFL reserve, and roughly a 0.5% aggregate-level reserve, so the most usable space you can get is about 88TB with zero aggregate and zero volume snap reserve.  We don't like running without an aggregate snap reserve (other posts on communities detail the reasons to keep it).  Until the 100TB raw limit (61 data disks) is increased, there is no way to get more usable space.
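
For anyone curious, my back-of-the-envelope math (GiB-based, so the exact numbers may be slightly off):

   100TB aggregate limit          = 102,400 GiB
   61 x 1655 GiB (right-sized)    = 100,955 GiB   (fits; 62 x 1655 GiB = 102,610 GiB does not)
   usable per data disk           ~ 1655 x 0.90 (WAFL reserve) x 0.995 ~ 1482 GiB
   61 x 1482 GiB                  ~ 90,400 GiB ~ 88 TiB usable before any snap reserve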

borden
4,828 Views

Thanks for your quick response.  Since they will run their own backups, this customer is looking to get the most out of what they bought.  Another challenge is this... they have several volumes that will grow very large in the next month or so.

These are 84TB volumes (leaving 5% free on the aggr for upgrade eligibility).  The manager here is wondering whether there is a way to grow a volume up to 300TB in size.  Would C-Mode provide this?  I've never seen a volume stretched across multiple aggregates, but if it can be done please let me know.

Thanks,

Scott

scottgelb
4,828 Views

C-Mode striped volumes (Coral and Acro) are no longer supported (except for grandfathered Coral customers), so a single aggregate is still 100TB.  However, you can stitch junctions together in the namespace... so as long as no single directory needs more than ~88TB, you can junction volumes in the namespace to look like one large 300TB mount (each container won't be able to hold 300TB, but the entire namespace will with all containers combined)... you can put the volumes in any directory order (nested, same level, etc.) in the / namespace from the vserver root.
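
As a rough sketch from the clustershell (the vserver, volume, aggregate, size, and path names here are just placeholders):

   volume create -vserver vs1 -volume arch1 -aggregate aggr_arch1 -size 80TB -junction-path /archive/arch1
   volume create -vserver vs1 -volume arch2 -aggregate aggr_arch2 -size 80TB -junction-path /archive/arch2
   volume create -vserver vs1 -volume arch3 -aggregate aggr_arch3 -size 80TB -junction-path /archive/arch3

A client mounting vs1:/archive then sees one tree spanning all three volumes (and aggregates), even though no single volume or aggregate exceeds the 100TB limit.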

borden
4,828 Views

Very, very good information.  Would you happen to have documentation on this?  I'm just trying to figure out how that works while keeping the same volume name.

Example:

/vol/heart/mo_p1     Say this has 8-10 NFS mounts on a FAS6080 running with 2TB SAS drives.

What would be the best way to keep the same name so their mounts don't have to change on the back end?  Where would I go to construct/test this?

And the final question: is this a supported NetApp function?  Again, their goal is to keep the same volume/qtree mount points on the back end while being able to grow to 300TB for a single mount.

Scott

scottgelb
4,828 Views

The C-Mode docs do a good job covering the namespace and junctions.  When you junction volumes, you will need a new directory for each volume at its junction point.  Other workarounds you could use are symlinks and the Transparent File Migration (TFM) feature released in 7.3.3 (D patch); it is NFSv3 only.  I haven't seen it or a demo yet, but it looks interesting for moving data around and might be a good fit for your description.

thomas_glodde
5,258 Views

With ONTAP 8.0.1 you at least have transparent volume migration, so you can create a second aggregate and move everything over without additional downtime.

scottgelb
5,258 Views

Good point... that's one way to help fix it.  But with 8.0.1, volume DataMotion is for SAN volumes only; it isn't supported for NAS yet, and it assumes there are enough free drives for a new aggr to migrate to.
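
For reference, in 7-Mode the move itself is just something like this (volume and aggregate names are placeholders, and again it's SAN volumes only in 8.0.1):

   vol move start srcvol dest_aggr
   vol move status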

Typos Sent on Blackberry Wireless

thomas_glodde
5,258 Views

NAS should be placed in a vFiler anyway (the MultiStore license now comes in the Essentials bundle with new machines), and a vFiler can be migrated online now as well 😉
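
For anyone following along, the online migration is kicked off from the destination controller with something like the following (names are placeholders; check the vfiler man page for the exact options in your release):

   vfiler migrate start vfiler_nas@source-filer
   vfiler migrate complete vfiler_nas@source-filer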

scottgelb
5,258 Views

vFilers cannot be migrated intra-node... and DataMotion of a vFiler isn't supported between cluster pairs.

Typos Sent on Blackberry Wireless

thomas_glodde
5,258 Views

Yeah, that's the bad point here: it's not supported yet.  For NAS I usually go with a quick cifs shares -delete/-add, and if I'm fast enough most users won't realize something went offline/online.  It can be a pain with locked files or VMs and all that stuff, though.
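
In case it helps anyone, the quick swap I mean is just this (share and path names are made up):

   cifs shares -delete projects
   cifs shares -add projects /vol/newvol/projects

The share name stays the same and only the underlying path changes, with a brief outage in between while clients reconnect.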

scottgelb
5,258 Views

CIFS won't handle a move of a network interface, even with a cluster takeover or giveback... it is supposed to be addressed in SMB 2.x, where durable file handles don't depend on keeping the same TCP session... hopefully soon.

Also, while I am a huge MultiStore proponent, there are adoption issues with no GUI management, even in the new System Manager... hopefully that is resolved before Cluster-Mode converges with 7-Mode.

To the point of this thread: even with all the workarounds, customers with mixed drive sizes should be aware of this issue and use "-d" or "@size" when creating or growing aggrs so they don't get stuck (examples below).  We had this attached to a case back in an earlier 7.x release, and I don't remember whether a BURT was created, whether it was fixed, or whether the issue is new again in 8.0.1... time for a new BURT or KB if one doesn't exist already.
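
For growing an existing aggregate the same precaution applies; something along these lines (the aggregate name, disk names, and the 847GB right-sized 1TB capacity shown here are for illustration):

   aggr add aggr_sata1 10@847                 # add ten right-sized 1TB disks only
   aggr add aggr_sata1 -d 3b.20 3b.21 3b.22   # or name the exact disks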
