ONTAP Hardware

How to change the default number of spare drives per filer

danaskazlauskas

Hi folks,

NetApp FAS systems must have two spare drives per filer by default - as I understand it, this requirement is set in Data ONTAP.

How can I force the system to settle for just one spare drive per filer? It is very annoying to lose the usable space of two disks, especially since in small FAS installations there is no point in having two spare disks per filer.

Thanks.

BR,

Danas

aborzenkov

Actually, the default is 1. Two spares are recommended for the Maintenance Center to be functional, but it is not a hard requirement. Of course, pretty GUIs could force you into something else…

On the filer itself it is controlled by the option raid.min_spare_count.
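You can check and change it from the console with something like this (syntax from memory, so verify on your ONTAP release):

options raid.min_spare_count
options raid.min_spare_count 1

The first form just displays the current value; the second sets it to 1.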

danaskazlauskas

Yes, you are right, the default is 1 (the raid.min_spare_count option shows that), but FilerView nevertheless does not allow the last two disks on each filer to be used for data - how can I overcome this issue?

Thanks.

BR,

Danas

vmsjaak13

Perhaps your raid group is full?
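Something like this should show the raid group layout and how many data disks each group already holds (aggr0 is just an example aggregate name):

aggr status -r aggr0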

BrendonHiggins

It may be the disk is in use by the disk maintenance center.  http://now.netapp.com/NOW/knowledge/docs/ontap/rel7261/html/ontap/smg/provisioning/concept/c_oc_prov_disk-health.html#c_oc_prov_disk-health

More likely, however, the raid group is full or the spare disk is 'different' from the disks in the aggregate you wish to grow.
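To compare them, something like this should list the spares with their type, speed and size so you can see whether they match the aggregate (from memory, so double-check):

aggr status -s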

Note, however, that a common way to kill filer performance is to fill an aggregate and then add a single HDD to it. All write I/O must go to that single disk until it reaches the same fill level as the other disks in the aggregate. The problem then persists, because all the new data blocks have been written to a single disk, so they must also be read from a single disk.

If your aggregate has snap reserve set at 5% (the default) and holds multiple volumes, remove the reserve, as you are very unlikely to need it and it is wasted space.
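Something like this should display and then clear the aggregate snapshot reserve (aggr_name is a placeholder; check the syntax on your release):

snap reserve -A aggr_name
snap reserve -A aggr_name 0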

If you still want to add the disk, use the CLI, as FilerView can fail for unknown reasons while the CLI will work.

Add the disks by entering the following command:

aggr add aggr_name [-f] [-n] {ndisks[@disk-size] | -d disk1 [disk2 ...] [-d disk1 [disk2 ...]]}

aggr_name is the name of the aggregate to which you are adding the disks.

-f overrides the default behavior that does not permit disks in a plex to span disk pools (only applicable if SyncMirror is licensed). This option also allows you to mix disks with different speeds.

-n displays the results of the command but does not execute it. This is useful for displaying the disks that would be automatically selected prior to executing the command.

ndisks is the number of disks to use.

disk-size is the disk size, in gigabytes, to use. You must have at least ndisks available disks of the size you specify.

-d specifies that the disk-name will follow. If the aggregate is mirrored, then the -d argument must be used twice (if you are specifying disk-names).

disk-name is the disk number of a spare disk; use a space to separate disk numbers. The disk number is under the Device column in the aggr status -s display.
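For example, something along these lines should grow the aggregate by a single named spare, or by one disk of a given size picked automatically (aggr1 and 0a.21 are placeholders - take the real names from aggr status -s):

aggr add aggr1 -d 0a.21
aggr add aggr1 1@300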

Hope it helps.

Good luck

Bren

mcope

Agree with everything above, except please DO NOT remove the aggregate snapshot reserve.  Set it to 2 or 3 percent but never to zero.  Data ONTAP 7.3 and  8.0 have moved a lot of stuff out of the volumes and into the aggregate free space (especially dedupe).  They also require 3% aggregate free space during ONTAP upgrades.  Many customers are having issues because their aggregates are nearly full and they have no buffer space because they set snapshot reserve to 0.
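For example, something like this should set it to 3 percent (aggr1 is just a placeholder for your aggregate name):

snap reserve -A aggr1 3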

danaskazlauskas

Hi Brendon,

thanks for the answer, but my situation was slightly different - I had a new system with 36 x 300 GB SAS disks. I distributed them evenly between the two filers (18 disks for each filer, 3 of them for Data ONTAP) and decided to create one data aggregate on each filer. As I wrote before, I was only able to create an aggregate of 13 disks, because 2 disks were reserved as spares and FilerView did not allow me to create a 14-disk aggregate leaving only one spare disk.

As I understand it, is it possible to force the system to create a larger aggregate, leaving only one spare disk per filer, by using the CLI?

Thanks.

BR,

Danas

BrendonHiggins (Accepted Solution)

Not sure why you are having issues.  The default raid group size should be 16 - http://now.netapp.com/NOW/knowledge/docs/ontap/rel7261/html/ontap/smg/provisioning/reference/r_oc_prov_raid-groups-sizes.html#r_oc_prov_raid-groups-si...

If your raid group size has been changed to 13, the filer will not let you create a RAID-DP aggregate with 14 or 15 disks, as you would have only parity disks in the 2nd raid group. If you increase the raid group size and then add the disk, it should work.
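From the console, something like this should do it (aggr1 and the disk name are placeholders, and the syntax is from memory):

aggr options aggr1 raidsize 16
aggr add aggr1 -d 0a.21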

I find with strange errors like this it is always worth having a console session open on the filer, as it will display messages which may not be reported by FilerView or recorded in the logs.

Bren

danaskazlauskas

Thanks Brendon.

If I have similar problems in the future, I will open a new thread with screenshots and logs.

BR,

Danas
