ONTAP Hardware

Disks being downsized permanently

JimKusznir

Hello all:

I was recently given several old filers, and now I'm trying to set them up in the most usable state I can. It appears I've gotten everything more or less set, with the exception of one thing.

On our F840 filer, I have 2 FCAL loops with 4 shelves each. One set of 4 consists of 36GB disks, the other set 18GB. I repacked 2 of the 36GB shelves with 36GB disks we acquired some time ago. 6 of those disks came up as "zoned/block" in the disk list; the rest of the disks in the filer were only zoned. Early on, this resulted in making my root aggregate blocked, and at one point, it changed the size of the 36GB disks to 18GB. This only affected the 36GB zoned/block disks.

I have since rebuilt the filer (boot menu, zero all disks and make a flexible volume) with those disks not present, so it built it on zoned disks. I've tried everything I can, but I can't seem to get those 6 disks to revert to their 36GB state.
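
(For reference, on the console the same information should show up in something like the following, though I'm going from memory on the exact syntax, so treat it as a sketch:

sysconfig -r
aggr status -s

Both list each spare with a "used" size and a "physical" size, which is where the mismatch below shows up.)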

The best view I've found of the problem is the web management disk page, but this also shows it:

[excerpt from sysconfig -av]

0: IBM DRHL36L 2748 34.5GB ( 72170880 512B/sect)
1: SEAGATE ST336704FC 0005 17.0GB ( 71687367 512B/sect)
2: SEAGATE ST336704FC 0005 17.0GB ( 71687367 512B/sect)
3: SEAGATE ST336704FC 0005 17.0GB ( 71687367 512B/sect)
4: SEAGATE ST336704FC 0005 17.0GB ( 71687367 512B/sect)
5: SEAGATE ST336704FC 0005 17.0GB ( 71687367 512B/sect)
6: SEAGATE ST336704FC 0005 17.0GB ( 71687367 512B/sect)

For those who don't know Seagate model nomenclature, the Seagate disks above are actually 36.7GB.

In my web manager view, it shows:

8.1 spare zoned/block 0 1 FC:A 17 GB 30 GB Pool0
8.2 spare zoned/block 0 2 FC:A 17 GB 30 GB Pool0
8.3 spare zoned/block 0 3 FC:A 17 GB 30 GB Pool0
8.4 spare zoned/block 0 4 FC:A 17 GB 30 GB Pool0
8.5 spare zoned/block 0 5 FC:A 17 GB 30 GB Pool0
8.6 spare zoned/block 0 6 FC:A 17 GB 30 GB Pool0
8.8 dparity zoned 1 0 FC:A 34 GB 34 GB Pool0 aggr0

This leaves me doubly confused: why is it 30GB physical instead of 34GB, and why is the NetApp insisting these are only good for 17GB?

Although I don't have it documented, I could have sworn the first time I brought it up with these disks installed, it did show them as 34GB, just zoned/block instead of straight zoned.

Is there anything I can do to revert these disks back? If I pull them and put them in a unix system and dd /dev/random over them, would that do the trick? Any other magic?
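
(For clarity, what I had in mind on the unix side was something along these lines, with /dev/sdX just standing in for whatever the FC disk shows up as -- a sketch of the idea, not a tested procedure:

dd if=/dev/zero of=/dev/sdX bs=1M count=1024

i.e. overwrite the start of the disk, where I'm assuming the old labels live, rather than literally running /dev/random over the whole thing.)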

Thanks!

--Jim


BrendonHiggins

Found this from the foundation course:

Right Sizing

Some disks might have a little more capacity depending on the manufacturer or model.  Data ONTAP will "right size" the disk and make all the usable disk space the same. Disk drives from different manufacturers may differ slightly in size even though they belong to the same size category. Right sizing ensures that disks are compatible regardless of manufacturer.

Data ONTAP right sizes disks to compensate for different manufacturers producing different raw-sized disks. When you add a new disk, Data ONTAP reduces the amount of space on that disk available for user data by rounding down. This maintains compatibility across disks from various manufacturers. The available disk space listed by informational commands such as sysconfig is, therefore, less for each disk than its rated capacity. The available disk space on a disk is rounded down as shown in the table in the Storage Management Guide, reprinted below.

*Note:* Although automatic disk right sizing is not applied to existing disks in an upgraded system, it is applied to disks being added to the storage system. Use sysconfig -r to compare the physical space and the usable space, and to determine whether disks are right-sized.

Disk Size    Right-Sized Capacity    Available Blocks
36GB         34.5GB                  70,656,000
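
As a sanity check on that row (my arithmetic, using NetApp's convention, as I understand it, of 1 GB = 1000 x 1024 x 1024 bytes):

70,656,000 blocks x 512 bytes/block = 36,175,872,000 bytes
36,175,872,000 bytes / 1,048,576,000 bytes per GB = 34.5 GB

so the right-sized capacity in the table is consistent with the available block count.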

The block count is correct for a 36GB HDD, so I think it is something to do with the 'format'. Start thinking WAFL and aggregate. What happens if you remove the disk and then re-install it to a new aggregate?

JimKusznir

I had actually found the info cited already; the problem is that it was downsizing the disks needlessly. It doesn't look like it's WAFL eating some disk space (as it was eating a *lot* of disk space).

It turned out to be a combination of the filer refusing to update the firmware on the disks (even though it was supposed to) and old/incorrect "disk labels" on the disks.
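
(For anyone hitting the same thing: the firmware revision on each drive shows up in sysconfig -a, and a manual firmware update can be kicked off from the console with something like the following -- I'm going from memory, so check the man page on your version first:

priv set advanced
disk_fw_update
priv set admin

On my filer it was the automatic update that silently wasn't happening.)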

parisi

Jim,

You could try to fail them, then unfail them:

disk fail diskname

priv set advanced

disk unfail diskname

What happens is, when a disk gets inserted into a RAID group, if it is larger or a higher speed, it will be sized down to match the other disks in the RAID group. If you have 36GB disks in a RAID group with 18GB disks, then they will all be treated as 18GB disks.

If the disks are in the RAID group, you will need to run disk replace to get them out, or destroy the aggr/tradvol.
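
If they are in a live RAID group, the replace looks something like this (a sketch; substitute your real disk names):

disk replace start <filer_disk_name> <spare_disk_name>

which copies the contents onto the spare and then turns the original disk back into a spare.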

JimKusznir

Hi:

Thanks, your commands were the magic. I saw a few references to disk unfail <disk_ID>, but I could never actually get that command to "exist"...I didn't know about the priv command.

What I actually needed to do was put a new disk label on them, and the fail/unfail sequence does just that.
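
(For anyone else cleaning up after this, the obvious follow-up would be something along these lines -- again a sketch, so double-check the exact commands on your version:

sysconfig -r
disk zero spares

i.e. confirm the spares now report the full right-sized capacity, then zero them before adding them to an aggregate.)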

--Jim
