
Re: 8.0.1 GA - mixed 1TB/2TB drives in aggr create

With the 100TB raw aggregate limit (counted against right-sized disks), the maximum number of 2TB data disks is 61, regardless of the RAID layout.  Each data drive yields about 1482GB after right-sizing, the 10% WAFL reserve, and the 0.5% aggregate reserve, so the most usable space you can get is about 88TB with zero aggregate snap reserve and zero volume snap reserve.  We don't like running without an aggregate snap reserve (other posts on Communities detail the reasons to keep one).  Until the 100TB raw limit (61 data disks) is increased, there is no way to get more usable space.
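
For anyone checking the math, the rough arithmetic (using the per-drive figure above, which will vary slightly with your exact right-sizing) is:

    61 data disks x 1482GB per disk = 90,402GB
    90,402GB / 1024 = ~88.3TB usable (before any snap reserves)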

Re: 8.0.1 GA - mixed 1TB/2TB drives in aggr create

Thanks for your quick response.  Since they will run their own backups, this customer is looking to get the most out of what they bought.  Another challenge is this: they have several volumes that will grow very large in the next month or so.

These are 84TB volumes (leaving 5% free on the aggregate for upgrade eligibility).  The manager here is wondering: is there a way to grow a volume up to 300TB in size?  Would Cluster-Mode provide this?  I've never seen a volume stretched across multiple aggregates, but if it can be done, please let me know.

Thanks,

Scott

Re: 8.0.1 GA - mixed 1TB/2TB drives in aggr create

C-Mode striped volumes (Coral and Acro) are no longer supported (except for grandfathered Coral customers), so a single aggregate is still limited to 100TB.  However, you can stitch junctions together in the namespace: as long as no single directory needs more than 88TB, you can junction volumes so they look like one large 300TB mount (no single container will hold 300TB, but the entire namespace will, across all containers).  You can place the volumes in any directory order (nested, same level, etc.) in the / namespace from the vserver root.
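
As a rough sketch of what that can look like (the vserver, volume, and aggregate names here are made up, and exact syntax may vary by release):

    volume create -vserver vs1 -volume proj_a -aggregate aggr1 -junction-path /project/a
    volume create -vserver vs1 -volume proj_b -aggregate aggr2 -junction-path /project/b
    volume create -vserver vs1 -volume proj_c -aggregate aggr3 -junction-path /project/c

A client that mounts vs1:/project then sees subdirectories a, b, and c backed by three different volumes on three different aggregates; the tree as a whole can hold the sum of the three volumes, but any single directory is still bounded by the volume it lives in.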

Re: 8.0.1 GA - mixed 1TB/2TB drives in aggr create

Very, very good information.  Would you happen to have documentation on this?  I'm just trying to figure out how that works while keeping the same volume name.

Example:

/vol/heart/mo_p1     Say this has 8-10 NFS mounts on a FAS6080 running with 2TB SAS drives.

What would be the best way to keep the same name so their mounts don't have to change on the back end?  Where would I go to construct/test this?

And the final question: is this a supported NetApp function?  Again, their goal is to keep the same volume/qtree mount points on the back end while being able to grow to 300TB for a single mount.

Scott

Re: 8.0.1 GA - mixed 1TB/2TB drives in aggr create

The C-Mode docs do a good job covering the namespace and junctions.  When you junction volumes, each volume shows up as a new directory at its junction point, so you will have a new directory for each volume in that junction.  Other workarounds you could use are symlinks, and also the Transparent File Migration (TFM) feature released in 7.3.3 (D patch); it is NFSv3 only.  I haven't seen it or a demo yet, but it looks interesting for moving data around, which might be a good fit for your description.
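
As a minimal sketch of the symlink workaround on the client side (all paths and names here are made up), the idea is to keep the original mount point and graft the new volume in underneath it:

    # Clients keep mounting the original path they already use:
    mount filer:/vol/heart/mo_p1 /mnt/mo_p1
    # Export and mount the new volume alongside it:
    mount filer:/vol/heart2 /mnt/mo_p1_overflow
    # Inside the original tree, point a subdirectory at the new space:
    ln -s /mnt/mo_p1_overflow /mnt/mo_p1/overflow

One caveat: symlinks resolve on the client, so every client needs the second mount at the same path for the link to work.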