Network and Storage Protocols

Aggregate size for small-scale implementations

brasbehlph

I currently have a FAS2040 + DS14 that I will be using for NFS/VMware, and I would like to get some recommendations on aggregate sizing and planning for a future purchase of an extra controller + shelf.


Below is the default configuration the NetApp tech set up a few months ago.

12 x 1TB 7200 RPM disks in the unit (SATA)

14 x 1TB 7200 RPM disks on the shelf (FC-ATA)

1 RAID group

aggr0 with 3 disks in the unit for the system root

aggr1 with 7 disks in the unit (2 spares)

aggr2 with 11 disks on the shelf (3 spares)
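For reference, this layout can be confirmed from the controller CLI; in 7-Mode, something like the following (a sketch, output trimmed):

    aggr status -r     # RAID groups and data/parity disks per aggregate
    aggr status -s     # hot spares currently available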

Since these are only 7200 RPM SATA disks, I would like to configure for the best possible performance.

Should I change the configuration and combine aggr1 + aggr2 into one aggregate for better performance?

Is it recommended to mix the disks in the unit and on the external shelf in one aggregate, even though they are both SATA?

Keep in mind that I would like to expand this setup without having to move data around when I purchase another controller + shelf in the future.

Thanks,

B


7 REPLIES

ekashpureff

B -

Yes, you should combine aggr0 and aggr1.

It makes no sense to me at all that some techs deploy a dedicated aggr0 just for vol0.

It's a waste of disk space.
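A minimal sketch of that consolidation in 7-Mode, assuming aggr1 is still empty so nothing has to be evacuated first (disk counts from the layout above; add -d with explicit disk names if you want to pin particular drives):

    aggr offline aggr1     # take the empty data aggregate offline
    aggr destroy aggr1     # return its 7 disks to the spare pool
    aggr add aggr0 7       # grow aggr0 with those spares, leaving 2 spares

ONTAP can't merge or shrink aggregates in place, so if aggr1 already held volumes you'd have to migrate them off before destroying it.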


I hope this response has been helpful to you.

At your service,


Eugene E. Kashpureff
ekashp@kashpureff.org
NetApp Instructor and Independent Consultant
http://www.linkedin.com/in/eugenekashpureff

(P.S. I appreciate points for helpful or correct answers.)

brasbehlph

Thanks for the quick response. So, based on your reply, I would have the following?

aggr0 = 11 x 1 TB disks + 1 spare, located on the unit.

aggr1 = 13 x 1 TB disks + 1 spare, located on the shelf.

This would keep the disks separate and maximize the number of spindles and space.
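If it helps anyone following along, a rough 7-Mode sketch of getting to that layout from the current one (assumes the existing aggr1/aggr2 hold no data yet; in practice use -d with explicit disk names so internal and shelf drives don't get mixed):

    aggr offline aggr1
    aggr destroy aggr1                # free the 7 internal disks
    aggr offline aggr2
    aggr destroy aggr2                # free the 11 shelf disks
    aggr add aggr0 8                  # internal: grow aggr0 from 3 to 11 disks
    aggr create aggr1 -t raid_dp 13   # shelf: new 13-disk RAID-DP aggregate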

ekashpureff

Sounds like a very reasonable plan to me...


I hope this response has been helpful to you.

At your service,


Eugene E. Kashpureff
ekashp@kashpureff.org
NetApp Instructor and Independent Consultant
http://www.linkedin.com/in/eugenekashpureff

(P.S. I appreciate points for helpful or correct answers.)

rodrigon

I agree with Eugene. It's a waste of disk space to create a distinct aggregate just for vol0.

IMHO, thinking about future migrations, you shouldn't mix internal disks with shelf disks.

See you!

Nasc

NetApp - Enjoy it!

ken_foster

Wasn't the main reason to separate out vol0 from the data volumes due to WAFL_iron or WAFL_check runs? My understanding was that during the run the aggregate is unavailable, and since the root volume is now contained in that aggregate, with the data, the entire system is unusable for the run time.

But if you had a separate root volume and a data aggregate needed a run, the other aggregates would still be accessible. I do agree that it's a waste of space, though, and I also agree the risk is small. But I prefer completely informed decisions.

Am I off base here, or is this old thinking?
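(For context, the check being described runs per aggregate; in 7-Mode, wafliron is started from advanced privilege, roughly as below — verify the syntax against your ONTAP release:)

    priv set advanced
    aggr wafliron start aggr1    # check/repair the aggregate's WAFL file system
    aggr wafliron status aggr1   # monitor progress
    priv set admin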

ekashpureff

Ken -

You're correct in this thinking...

The consideration is storage utilization versus the very small risk of corrupting the aggregate WAFL file system.

From the Data ONTAP 'Storage Management Guide' (p 164):

The following are additional facts and considerations if the root volume is on a disk shelf:

• Data ONTAP supports two levels of RAID protection, RAID4 and RAID-DP. RAID4 requires a minimum of two disks and can protect against single-disk failures. RAID-DP requires a minimum of three disks and can protect against double-disk failures. The root volume can exist as the traditional stand-alone two-disk volume (RAID4) or three-disk volume (RAID-DP). Alternatively, the root volume can exist as a FlexVol volume that is part of a larger hosting aggregate.

• Smaller stand-alone root volumes offer fault isolation from general application storage. On the other hand, FlexVol volumes have less impact on overall storage utilization, because they do not require two or three disks to be dedicated to the root volume and its small storage requirements.

• If a FlexVol volume is used for the root volume, file system consistency checks and recovery operations could take longer to finish than with the two- or three-disk traditional root volume. FlexVol recovery commands work at the aggregate level, so all of the aggregate's disks are targeted by the operation. One way to mitigate this effect is to use a smaller aggregate with only a few disks to house the FlexVol volume containing the root volume.

• In practice, having the root volume on a FlexVol volume makes a bigger difference with smaller capacity storage systems than with very large ones, in which dedicating two disks for the root volume has little impact.

• For higher resiliency, use a separate two-disk root volume.

Note: You should convert a two-disk root volume to a RAID-DP volume when performing a disk firmware update, because RAID-DP is required for disk firmware updates to be nondisruptive. When all disk firmware and Data ONTAP updates have been completed, you can convert the root volume back to RAID4.

For Data ONTAP 7.3 and later, the default RAID type for a traditional root volume is RAID-DP. If you want to use RAID4 as the RAID type for your traditional root volume to minimize the number of disks required, you can change the RAID type from RAID-DP to RAID4 by using "vol options vol0 raidtype raid4".
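In command form, that conversion is just the raidtype option flipped in both directions (a sketch for a traditional root volume named vol0):

    vol options vol0 raidtype raid_dp   # before the disk firmware update
    vol options vol0 raidtype raid4     # afterwards, to reclaim the parity disk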

Given the larger drive sizes we have these days, and max-sized aggregates as the norm, I'd be more worried about my data being unavailable than the root volume. An alternative I've discussed in classes is to create an alternate root on a second aggregate that could be booted off of...
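As a sketch, with a hypothetical volume name and a size you'd adjust to your platform's root volume minimum:

    vol create altroot aggr1 20g   # small FlexVol on the second aggregate
    vol options altroot root       # designates it as root at the next boot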

For protection against a wafl_iron scenario being painful, you may also wish to review an earlier discussion regarding aggregate snapshots and snap reserve:

http://communities.netapp.com/message/41806
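The gist, as a sketch (the reserve percentage and schedule are just example values):

    snap reserve -A aggr1 5     # set a 5% aggregate-level snap reserve
    snap sched -A aggr1 0 1 0   # keep one nightly aggregate snapshot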

I hope this response has been helpful to you.

At your service,


Eugene E. Kashpureff
ekashp@kashpureff.org
NetApp Instructor and Independent Consultant
http://www.linkedin.com/in/eugenekashpureff

(P.S. I appreciate points for helpful or correct answers.)

brasbehlph

Good point, Ken.

Is there really a need to have two spares per disk type for this implementation based on best practice, or is one OK?
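A quick way to check what you have, plus the option that governs the low-spares warning (a sketch; the option name is from memory, so double-check it on your release):

    aggr status -s                    # list available spares by disk type
    options raid.min_spare_count 1    # warn when spares drop below this count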

Also, since this filer is not yet in production, I will have some time to test the different configurations. Are there any benchmarks to refer to for comparing disk write speeds on a mounted NFS export?

I do know that many variables (IOPS, CPU, network, RAID, ...) can produce different results, but I would like to get some ballpark numbers.

For instance, against GigE's theoretical 125 MB/s of network throughput, how does ~80 MB/s using dd rank?
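For anyone repeating the test, a typical client-side run looks like this (Linux client; the export path, mount point, and sizes are examples):

    mount -t nfs -o vers=3 filer:/vol/vmware /mnt/test
    time dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=4096 conv=fsync
    # ~4 GiB written; MB/s = 4096 / elapsed seconds (conv=fsync flushes before dd exits)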

Thanks Again,

B
