ONTAP Discussions

ONTAP silently reverting disk partition operation after a few minutes. Huh?

HUX20002000

System info: FAS2720 running ONTAP 9.7P17, with 2 x DS212C expansion shelves of 3.5" 10 TB SAS disks. 8 of the disks in the base unit are root/data-partitioned and host the RAID-TEC root aggregates. The expansion shelves have single-partitioned disks.

 

What I want to do: make a data aggregate out of all the disks in order to maximize spindles. All the expansion shelf disks are showing as spare and owned by node 1, with their root and data fields showing as unowned. By default, I can't create that aggregate because the 8 base unit disks hosting the root aggregates have a different partition scheme than the rest. Therefore, I'm trying to make the other disks have the same root/data partitioning as those 8 disks. But it's not working.
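
For reference, the root/data ownership I'm talking about is what shows up in the partition ownership view; I'm checking it with something like this (exact column names may vary by release):

::> storage disk show -partition-ownership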

 

After disabling disk autoassign, here's what I'm doing for the first disk:

::> storage disk create-partition -source-disk 1.0.1 -target-disk 2.0.1

 

This initially works: "storage disk show ..." shows 2.0.1 with root/data partitions owned by node 1. If I switch to the aggregate creation GUI, I now see one more disk in the relevant row than I did prior. All good. However, after a few minutes, if I do "storage disk show ..." again, I see that 2.0.1 has lost its root/data ownership. ONTAP appears to have silently reverted my "storage disk create-partition ..." command.
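
For completeness, autoassign had been turned off beforehand with the usual disk option command, something along these lines:

::> storage disk option modify -node * -autoassign off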

 

To test this, I did "storage disk create-partition ..." on all the relevant disks as quickly as possible, then created an aggregate, which worked. However, after that, "storage disk show ..." showed nothing in the aggregate field for those disks. Also, while I was able to create volumes on it via the CLI (after assigning it to the correct SVM), I was unable to do so in the GUI because the aggregate didn't appear in the list.
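
For what it's worth, the CLI steps there were just the standard ones, roughly like this (the SVM, aggregate, and volume names are placeholders):

::> vserver add-aggregates -vserver <svm> -aggregates <new_aggr>
::> volume create -vserver <svm> -volume <vol1> -aggregate <new_aggr> -size 100G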

 

I'm stumped at this point! Anyone?

1 REPLY

TMACMD

Yeah, this is something I have seen and don't like either. I think this behavior was added in 9.6 or 9.7 and still exists up to at least 9.9.1, and likely beyond. Here is something to ponder...

 

Personally, I would do this:

1. Upgrade to ONTAP 9.9.1P10 or 9.10.1P4

2. Use FlexGroups!

 

A FlexGroup is a way to distribute NAS data across multiple controllers and multiple aggregates.

 

In your case, it sounds like you have 36 drives (12 internal + 24 external). You are limited to a MAX of 29 drives in a RAID-TEC RAID group. Remember, the larger the RAID group, the longer (much longer) it takes to rebuild a drive. I tend to stay closer to 20 using RAID-TEC. I would end up with something like this as a base:

 

• 12 partitioned drives using Root-Data partitioning
• 24 whole disks (no partitions yet)
• 6 root partitions on Node 1 and 6 root partitions on Node 2

 

I am not a fan of the GUI when making aggregates. Although it has become better more recently, I still use the CLI for nearly everything (more control).

 

  1. Note which disks are partitioned
    • storage aggregate show-spare-disks
  2. Create an aggregate on each node starting with the six partitioned disks plus one WHOLE disk. ONTAP will auto-partition a drive added to the aggregate as needed. A new RAID group will use whole drives; anything added to the partitioned RAID group will be partitioned!
    • aggr create xxx01 -node node-0x -disklist 1.0.0,1.0.2,1.0.4,1.0.6,1.0.8,1.0.10,1.1.0 -raidtype raid_tec -maxraidsize 16
    • aggr create xxx02 -node node-0x -disklist 1.0.1,1.0.3,1.0.5,1.0.7,1.0.9,1.0.11,1.1.1 -raidtype raid_tec -maxraidsize 16
  3. Add disks to each aggregate. Like before, since the RAID group is using partitioned drives, any drives added to it will be partitioned as well.
    1. aggr add-disks -aggregate xxx01 -diskcount 9
    2. aggr add-disks -aggregate xxx02 -diskcount 9
  4. Now you have two aggregates, same size, one on each node. If using 36 drives, you should have 2 spares left on each node. If you have fewer, then ONTAP will complain regularly.
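
Once both aggregates are built, a quick sanity check of the RAID group layout and the remaining spares is something like this (output omitted):

storage aggregate show-status
storage aggregate show-spare-disks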

When using spinning drives, ONTAP wants you to keep 2 spares per controller. When using SSD/NVMe drives, ONTAP will let you get by with one.

 

Now... what about a failure? ONTAP will auto-partition any drive it needs to replace a failed drive!

 

Now create a FlexGroup, let your data distribute across both nodes and aggregates, and improve your NAS performance! Note: your FlexGroup should be at least 800GB in size, since FlexGroup members are a minimum of 100GB each and a normal FlexGroup will have 4 members per aggregate (so 8 members across your two aggregates).

 

Use the GUI or something like this:

vol create -volume myflexgroup -aggr-list xxx01,xxx02 -aggr-list-multiplier 4 -size 1T -junction-path /myflexgroup -space-guarantee none

 

This will make an 8-member FlexGroup called myflexgroup, with each member thin provisioned at about 128G, mounted in the ONTAP namespace at /myflexgroup.
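
If you want to confirm the eight constituent members afterwards, something like this should list them (the SVM name is a placeholder):

vol show -vserver <svm> -volume myflexgroup* -is-constituent true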
