ONTAP Hardware

Can ADP container ownership be different from root partition ownership?

NetappNewbie-Roy
5,367 Views

I have a 2-node switchless cluster that I would like to shut down. I disabled cluster HA, and the disk ownership is as shown below. Once I halted both nodes and booted them again, I got the following result: the controller that does not own the disk container where its root partition resides has no root volume to boot from. By the way, I wonder why the disks are reserved and a node is "waiting for giveback" while cluster HA is disabled.

 

::> disk show -partition-ownership
Disk     Partition Home              Owner             Home ID     Owner ID
-------- --------- ----------------- ----------------- ----------- -----------

Info: This cluster has partitioned disks. To get a complete list of spare disk
      capacity use "storage aggregate show-spare-disks".

1.0.0
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-02    NetappClu01-02    538013735   538013735
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
1.0.1
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-01    NetappClu01-01    538012598   538012598
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
1.0.2
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-02    NetappClu01-02    538013735   538013735
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
1.0.3
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-01    NetappClu01-01    538012598   538012598
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
1.0.4
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-02    NetappClu01-02    538013735   538013735
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
1.0.5
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-01    NetappClu01-01    538012598   538012598
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
1.0.6
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-02    NetappClu01-02    538013735   538013735
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
1.0.7
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-01    NetappClu01-01    538012598   538012598
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
1.0.8
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-02    NetappClu01-02    538013735   538013735
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
1.0.9
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-01    NetappClu01-01    538012598   538012598
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
1.0.10
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-02    NetappClu01-02    538013735   538013735
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
1.0.11
         Container NetappClu01-01    NetappClu01-01    538012598   538012598
         Root      NetappClu01-01    NetappClu01-01    538012598   538012598
         Data      NetappClu01-01    NetappClu01-01    538012598   538012598
12 entries were displayed.

 

Controller A console output:

Feb 21 11:13:45 [NetappClu01-01:cf.fmns.skipped.disk:notice]: While releasing the reservations in "Waiting For Giveback" state Failover Monitor Node State(fmns) module skipped the disk 0b.00.3 that is owned by 538012598 and reserved by 538013735.
Waiting for reservations to clear
Waiting for reservations to clear
Feb 21 11:14:59 [NetappClu01-01:sas.link.error:error]: Could not recover link on SAS adapter 0a after 45 seconds. Offlining the adapter.
Feb 21 11:16:16 [NetappClu01-01:config.invalid.PortToPort:error]: SAS adapter "0a" is attached to another SAS adapter.
Feb 21 11:17:11 [NetappClu01-01:sas.link.error:error]: Could not recover link on SAS adapter 0a after 45 seconds. Offlining the adapter.
Waiting for reservations to clear
Waiting for reservations to clear


Controller B console output:

Feb 21 03:14:13 [localhost:raid.assim.tree.noRootVol:error]: No usable root volume was found!
Uptime: 2m8s
System rebooting...
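
For reference, these are the state checks I know of for once I can get a node up again (commands as I understand them; corrections welcome):

::> cluster ha show                             # is two-node cluster HA configured?
::> storage failover show                       # takeover / waiting-for-giveback state
::> cluster show -fields health,eligibility     # per-node health and cluster eligibility
::> storage disk show -partition-ownership      # container/root/data owners again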


5 REPLIES

SpindleNinja
5,354 Views

Can you clarify a few things?  

 

 "I would like to shut it down."  - You would like to shut it down, or you did shut it down?   

- If you did shut it down, what steps did you take? 

 

"I disable the cluster." - What do you mean?  You disabled Cluster HA? 

 

What ONTAP version are you running? 

NetappNewbie-Roy
5,316 Views

I want to conduct power maintenance, so I have to shut down the 2-node cluster. I disabled cluster HA with the command "cluster ha modify -configured false" and then halted both controllers to the LOADER prompt. After that, neither node could boot up; you can see the result in the first post.
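
In case the exact sequence matters, it was roughly this (typed from memory, so it may not be word for word):

::> cluster ha modify -configured false
::> system node halt -node NetappClu01-02     # waited for it to reach the LOADER prompt
::> system node halt -node NetappClu01-01     # then halted the second node

As far as I remember, I did not add -inhibit-takeover true to either halt command.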

SpindleNinja
5,312 Views

Sounds like you followed the correct steps; this isn't normal behavior either.

 

I would open a P1 with support and work through it with them rather than the forums (I wouldn't want you to lose any data).

 

aborzenkov
5,295 Views

Those are not the normal steps at all. You never need to disable HA to power off systems (in general, you should never disable HA on a two-node cluster unless you are following a well-defined procedure). I suspect the OP confused cluster HA and storage failover.

 

Because storage failover was not inhibited, one node went into takeover. Now, if that is also the node that holds epsilon (with HA disabled), this is a real problem. I agree, better to open a support case; it goes beyond the normal forum level.
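
For reference, the two are separate things you can check (syntax from memory, verify on your version):

::> cluster ha show                   # cluster HA for 2-node clusters (RDB quorum / epsilon handling)
::> storage failover show             # storage failover (SFO), which actually takes over the partner's disks
::> set -privilege advanced
::*> cluster show -fields epsilon     # which node currently holds epsilon

Disabling the first does not stop the second from taking over the partner's disks.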

SpindleNinja
5,263 Views

"I suspect OP confused HA and storage failover." Yep, thanks for catching that.   (no more responding to posts late for me).     

 

https://kb.netapp.com/app/answers/answer_view/a_id/1003836/~/what-is-the-procedure-for-graceful-shutdown-and-power-up-of-a-storage-system   
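
From memory, that KB boils down to roughly this for a 2-node cluster (follow the article over my recollection, and leave cluster HA configured):

::> system node autosupport invoke -node * -type all -message "MAINT=8h power maintenance"
::> system node halt -node * -inhibit-takeover true -ignore-quorum-warnings true

Power off, do the maintenance, then power everything back on and run boot_ontap at the LOADER prompt on both nodes. With takeover inhibited, there should be no giveback to wait for.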

 

And yeah, now that I think about it, you really only disable cluster HA when you grow from 2 nodes to 4, and that's after the two new nodes are added.
