ONTAP Hardware

Assign disk to node

SVHO


Let's say I decided to remove my aggregate from the node called mynode2. How do I assign those freed FSAS drives to mynode1? Does the syntax below look right?

storage disk assign -type FSAS -node mynode1

 

Currently I have 2 aggregates, but I'm looking to remove the 2nd aggregate and combine them into one. The aggregate I am removing isn't the root.

 

Thanks,

SVHO

1 ACCEPTED SOLUTION

AlexDawson

Almost. You can only assign unassigned disks, so you need to unassign them first. And if you have autoassign on, you need to turn that off first, or the original owner will just take the disks back. So:

  1. Run "storage disk option show" to see the current autoassign and autoassign-shelf values. Since you're dealing with individual disks, autoassign-shelf can be on or off, but autoassign should be off for this operation. In general it should stay off whenever disk ownership doesn't follow the autoassign patterns, which it sounds like yours won't. Just remember that if you later replace a failed disk, you will need to manually assign the replacement to the system that had the failed one.
  2. Next, run "storage disk removeowner" on the disks to be moved.
  3. Finally, yes, run "storage disk assign" with the arguments you want to assign the disks to their new homes.
  4. Once the disks are owned by the correct system, you can add them to an aggregate. If they've been used before, they will need to be zeroed, which may take some time; alternatively, you can run "storage disk zerospares" before adding them to the aggregate.
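The numbered steps above can be sketched as one command sequence (node names are taken from your example; the disk name is a placeholder you'd fill in, and exact options can vary by ONTAP version):

storage disk option modify -node mynode1 -autoassign off
storage disk option modify -node mynode2 -autoassign off
storage disk removeowner -disk <disk_name>
storage disk assign -disk <disk_name> -owner mynode1
storage disk zerospares -owner mynode1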

Hope this helps!


3 REPLIES


SVHO

Thanks for replying. I went into the command line and performed the following in the training lab.

I noticed the disk NET-1.5 shows as "unknown" under container type. Is that normal? Also, after issuing the zerospares command, the zeroing (%) status on the individual disk shows N/A.

When I go into the aggregate in the GUI and try to add the disk, I don't see an additional available disk. However, it does show that the disk belongs to the correct home owner and current owner. I attached a screenshot. Maybe it's because the nodes are not configured as HA in the lab?

storage disk option modify -node svl-nau-02 -autoassign off

storage disk option modify -node svl-nau-01 -autoassign off

storage disk removeowner -disk NET-1.5 (after running this, it shows "unassigned" under container type)

storage disk assign -disk NET-1.5 -owner svl-nau-02 (after running this, it shows "unknown" under container type)

storage disk zerospares -owner svl-nau-02
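
To confirm where NET-1.5 ended up after those commands, a check along these lines should work (assuming a clustered ONTAP CLI; exact field names can vary by version):

storage disk show -disk NET-1.5 -fields owner,container-type

A healthy unowned-then-assigned disk should eventually show a container type of "spare" once zeroing completes.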

AlexDawson

I don't quite know how you have your simulator configured, which makes this a bit difficult to answer. If the simulator isn't HA, then that would be why.

 

This is an HA setup:

cluster1::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- -------------------------------------
cluster1-01    cluster1-02    true     Connected to cluster1-02
cluster1-02    cluster1-01    true     Connected to cluster1-01

If it doesn't list partners, then it isn't HA.
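
For reference, if the two nodes are meant to be HA partners, on a two-node cluster the pairing is normally enabled with something along these lines (exact steps can vary by ONTAP/simulator version, so treat this as a sketch):

storage failover modify -node * -enabled true
cluster ha modify -configured true

The second command applies to two-node clusters only; after that, "storage failover show" should list the partners.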
