
Disk state showing "partner" but only one controller is present.

Hello Everyone,

 

I have a FAS3250 and I ran boot menu option 4a, "(4) Clean configuration and initialize all disks." After it completed, the disks in one shelf are showing in the PARTNER state.

 

The storage has only a single controller, and it is not allowing me to change the ownership of the disks.

disk show
DISK OWNER POOL SERIAL NUMBER HOME
------------ ------------- ----- ------------- -------------
0b.12.13 Netapp-GNOC002(2017122871) Pool0 KZG404DD Netapp-GNOC002(2017122871)
0b.12.1 Netapp-GNOC002(2017122871) Pool0 KZG3YLED Netapp-GNOC002(2017122871)
0b.12.23 Netapp-GNOC002(2017122871) Pool0 KZG3SN2D Netapp-GNOC002(2017122871)
0b.12.2 Netapp-GNOC002(2017122871) Pool0 KZG3Z2JD Netapp-GNOC002(2017122871)
0b.12.18 Netapp-GNOC002(2017122871) Pool0 KZG3ZT9D Netapp-GNOC002(2017122871)
0b.12.7 Netapp-GNOC002(2017122871) Pool0 KZG3XX3D Netapp-GNOC002(2017122871)
0b.12.19 Netapp-GNOC002(2017122871) Pool0 KZG3YW7D Netapp-GNOC002(2017122871)
0b.12.4 Netapp-GNOC002(2017122871) Pool0 KZG3VD0D Netapp-GNOC002(2017122871)
0b.12.20 Netapp-GNOC002(2017122871) Pool0 KZG3ZSUD Netapp-GNOC002(2017122871)
0b.12.12 Netapp-GNOC002(2017122871) Pool0 KZG3YYZD Netapp-GNOC002(2017122871)
0b.12.5 Netapp-GNOC002(2017122871) Pool0 KZG405JD Netapp-GNOC002(2017122871)
0b.12.6 Netapp-GNOC002(2017122871) Pool0 KZG3XXDD Netapp-GNOC002(2017122871)
0b.12.15 Netapp-GNOC002(2017122871) Pool0 KZG3XVDD Netapp-GNOC002(2017122871)
0b.12.9 Netapp-GNOC002(2017122871) Pool0 KZG3T4BD Netapp-GNOC002(2017122871)
0b.12.21 Netapp-GNOC002(2017122871) Pool0 KZG3XM3D Netapp-GNOC002(2017122871)
0b.12.3 Netapp-GNOC002(2017122871) Pool0 KZG3Z07D Netapp-GNOC002(2017122871)
0b.12.0 Netapp-GNOC002(2017122871) Pool0 KZG3Z2XD Netapp-GNOC002(2017122871)
0b.12.17 Netapp-GNOC002(2017122871) Pool0 KZG3YRDD Netapp-GNOC002(2017122871)
0b.12.8 Netapp-GNOC002(2017122871) Pool0 KZG3YP6D Netapp-GNOC002(2017122871)
0b.12.16 Netapp-GNOC002(2017122871) Pool0 KZG3Y86D Netapp-GNOC002(2017122871)
0b.12.11 Netapp-GNOC002(2017122871) Pool0 KZG473XD Netapp-GNOC002(2017122871)
0b.12.14 Netapp-GNOC002(2017122871) Pool0 KZG3Z8ED Netapp-GNOC002(2017122871)
0b.12.22 Netapp-GNOC002(2017122871) Pool0 KZG3ZH2D Netapp-GNOC002(2017122871)
0b.11.17 oinbana002-b(2017199047) Pool0 KZG2Y9GD oinbana002-b(2017199047)
0a.11.0 oinbana002-b(2017199047) Pool0 KZG868VD oinbana002-b(2017199047)
0a.11.16 oinbana002-b(2017199047) Pool0 KZG49U9D oinbana002-b(2017199047)
0a.11.12 oinbana002-b(2017199047) Pool0 KZG6J9HD oinbana002-b(2017199047)
0a.11.10 oinbana002-b(2017199047) Pool0 KZG8EERD oinbana002-b(2017199047)
0b.11.19 oinbana002-b(2017199047) Pool0 KZG6JG4D oinbana002-b(2017199047)
0a.11.8 oinbana002-b(2017199047) Pool0 KZG8JK4D oinbana002-b(2017199047)
0b.11.13 oinbana002-b(2017199047) Pool0 KZG6JGTD oinbana002-b(2017199047)
0a.11.14 oinbana002-b(2017199047) Pool0 KZG5VA8D oinbana002-b(2017199047)
0a.11.4 oinbana002-b(2017199047) Pool0 KZG6UZKD oinbana002-b(2017199047)
0a.11.22 oinbana002-b(2017199047) Pool0 KZG5WNHD oinbana002-b(2017199047)
0a.11.2 oinbana002-b(2017199047) Pool0 KZG8B6YD oinbana002-b(2017199047)
0a.11.18 oinbana002-b(2017199047) Pool0 KZG6DJZD oinbana002-b(2017199047)
0b.11.9 oinbana002-b(2017199047) Pool0 KZG8JG9D oinbana002-b(2017199047)
0b.11.3 oinbana002-b(2017199047) Pool0 KZG88M5D oinbana002-b(2017199047)
0a.11.6 oinbana002-b(2017199047) Pool0 KZG8GWND oinbana002-b(2017199047)
0a.11.20 oinbana002-b(2017199047) Pool0 KZG3AN8D oinbana002-b(2017199047)
0b.11.15 oinbana002-b(2017199047) Pool0 KZG4AHDD oinbana002-b(2017199047)
0b.11.23 oinbana002-b(2017199047) Pool0 KZG4AGRD oinbana002-b(2017199047)
0b.11.5 oinbana002-b(2017199047) Pool0 KZG7XGVD oinbana002-b(2017199047)
0b.11.1 oinbana002-b(2017199047) Pool0 KZG8J44D oinbana002-b(2017199047)
0b.11.21 oinbana002-b(2017199047) Pool0 KZG6J35D oinbana002-b(2017199047)
0a.10.3 Netapp-GNOC002(2017122871) Pool0 KZG8J8BD Netapp-GNOC002(2017122871)
0b.11.7 oinbana002-b(2017199047) Pool0 KZG8AGAD oinbana002-b(2017199047)
0a.10.9 Netapp-GNOC002(2017122871) Pool0 KZG8BEUD Netapp-GNOC002(2017122871)
0b.11.11 oinbana002-b(2017199047) Pool0 KZG8HT4D oinbana002-b(2017199047)
0a.10.7 Netapp-GNOC002(2017122871) Pool0 KZG8J3TD Netapp-GNOC002(2017122871)
0a.10.4 Netapp-GNOC002(2017122871) Pool0 KZG8J0WD Netapp-GNOC002(2017122871)
0a.10.1 Netapp-GNOC002(2017122871) Pool0 KZG8ED3D Netapp-GNOC002(2017122871)
0a.10.10 Netapp-GNOC002(2017122871) Pool0 KZG8AKRD Netapp-GNOC002(2017122871)
0a.10.8 Netapp-GNOC002(2017122871) Pool0 KZG8HZ9D Netapp-GNOC002(2017122871)
0a.10.0 Netapp-GNOC002(2017122871) Pool0 KZG8J3JD Netapp-GNOC002(2017122871)
0a.10.2 Netapp-GNOC002(2017122871) Pool0 KZG8J4MD Netapp-GNOC002(2017122871)
0a.10.11 Netapp-GNOC002(2017122871) Pool0 KZG7DHJD Netapp-GNOC002(2017122871)
0a.10.5 Netapp-GNOC002(2017122871) Pool0 KZG8J4SD Netapp-GNOC002(2017122871)
0a.10.6 Netapp-GNOC002(2017122871) Pool0 KZG8EEND Netapp-GNOC002(2017122871)
0a.10.12 Netapp-GNOC002(2017122871) Pool0 S142NEAD702268 Netapp-GNOC002(2017122871)
0a.10.17 Netapp-GNOC002(2017122871) Pool0 S142NEAD701929 Netapp-GNOC002(2017122871)
0a.10.14 Netapp-GNOC002(2017122871) Pool0 S142NEAD700003 Netapp-GNOC002(2017122871)
0a.10.16 Netapp-GNOC002(2017122871) Pool0 S142NEAD701198 Netapp-GNOC002(2017122871)
0a.10.18 Netapp-GNOC002(2017122871) Pool0 S142NEAD701276 Netapp-GNOC002(2017122871)
0a.10.19 Netapp-GNOC002(2017122871) Pool0 S142NEAD700018 Netapp-GNOC002(2017122871)
0a.10.13 Netapp-GNOC002(2017122871) Pool0 S142NEAD701294 Netapp-GNOC002(2017122871)
0a.10.15 Netapp-GNOC002(2017122871) Pool0 S142NEAD700141 Netapp-GNOC002(2017122871)

Re: Disk state showing "partner" but only one controller is present.

NetApp Release 8.1.2 7-Mode: Tue Oct 30 19:56:51 PDT 2012

Re: Disk state showing "partner" but only one controller is present.

Option 4a only initializes disks that are owned by the local node, so apparently the other shelf was owned by some other node. Just remove the disk ownership and reassign.
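In 7-Mode that sequence would look roughly like the following. The disk name is just an example taken from the output above, and remove_ownership is an advanced-privilege command, so this is a sketch to adapt:

```
Netapp-GNOC002> priv set advanced
Netapp-GNOC002*> disk remove_ownership 0b.11.21
Netapp-GNOC002*> disk assign 0b.11.21
```

Repeat (or use a wildcard such as "disk assign all" if you want every unowned disk) for the rest of the shelf.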

Re: Disk state showing "partner" but only one controller is present.

Removing the disk ownership gives an error.

 

Netapp-GNOC002*> disk remove_ownership 0b.11.21
disk remove_ownership: Disk 0b.11.21 is not owned by this node.

 

Out of the three enclosures, two show the right status, but the middle one shows as partner.

Re: Disk state showing "partner" but only one controller is present.

And if you use the -f (force) option? Alternatively, try "disk assign 0b.11.21 -s unowned -f".

Re: Disk state showing "partner" but only one controller is present.

Bad luck, it's still showing the same error.

 

Netapp-GNOC002*> disk assign 0b.11.21 -s unowned -f
disk assign: Disk is currently owned by the partner and cannot be assigned.

Re: Disk state showing "partner" but only one controller is present.

If you really have a single node, you may need to verify that the filer is not configured for HA. What does "cf status" say now?

Re: Disk state showing "partner" but only one controller is present.

Three hours back I disabled CF.

 

Netapp-GNOC002*> cf status
Controller Failover disabled.
RDMA Interconnect is down (Link 0 down, Link 1 down).
Netapp-GNOC002*>

 

 

Below are the logs after I ran 4a to initialize and clean up the configuration. Disk zeroing completed for shelves 10 and 12, but shelf 11 is showing in partner mode.

 

Feb 24 16:20:11 [localhost:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0/rg0/0a.10.1 Shelf 10 Bay 1 [NETAPP X425_HCBEP1T2A10 NA00] S/N [KZG8ED3D] to aggregate aggr0 has completed successfully
Feb 24 16:20:11 [localhost:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0/rg0/0b.12.0 Shelf 12 Bay 0 [NETAPP X425_HCBEP1T2A10 NA00] S/N [KZG3Z2XD] to aggregate aggr0 has completed successfully
Feb 24 16:20:11 [localhost:raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0/rg0/0a.10.0 Shelf 10 Bay 0 [NETAPP X425_HCBEP1T2A10 NA00] S/N [KZG8J3JD] to aggregate aggr0 has completed successfully
Feb 24 16:20:11 [localhost:wafl.aggr.btiddb.build:info]: Buftreeid database for aggregate 'aggr0' UUID '9636310c-197e-11e8-891e-123478563412' was built in 0 msec, scanning 0 inodes and restarting -1 times with a final result of starting.
Feb 24 16:20:11 [localhost:wafl.aggr.btiddb.build:info]: Buftreeid database for aggregate 'aggr0' UUID '9636310c-197e-11e8-891e-123478563412' was built in 0 msec, scanning 0 inodes and restarting 0 times with a final result of success.
Feb 24 16:20:11 [localhost:wafl.vol.add:notice]: Aggregate aggr0 has been added to the system.
Feb 24 16:20:11 [localhost:fmmb.instStat.change:info]: no mailbox instance on local side.
Feb 24 16:20:11 [localhost:fmmb.current.lock.disk:info]: Disk 0a.10.0 is a local HA mailbox disk.
Feb 24 16:20:11 [localhost:fmmb.current.lock.disk:info]: Disk 0b.12.0 is a local HA mailbox disk.
Feb 24 16:20:11 [localhost:fmmb.instStat.change:info]: normal mailbox instance on local side.
Feb 24 16:20:12 [localhost:fmmb.current.lock.disk:info]: Disk 0b.11.11 is a partner HA mailbox disk.
Feb 24 16:20:12 [localhost:fmmb.current.lock.disk:info]: Disk ?.? is a partner HA mailbox disk.
Feb 24 16:20:12 [localhost:fmmb.instStat.change:info]: missing lock disks, possibly stale mailbox instance on partner side.
exportfs [Line 1]: NFS not licensed; local volume /vol/vol0 not exported

 

Below are the alerts I am getting on the console:

Netapp-GNOC002> Mon Feb 26 07:17:00 GMT [Netapp-GNOC002:monitor.globalStatus.critical:CRITICAL]: Controller failover partner unknown. Controller failover not possible.
Mon Feb 26 07:17:07 GMT [Netapp-GNOC002:callhome.performance.snap:info]: Call home for PERFORMANCE SNAPSHOT
Mon Feb 26 07:17:09 GMT [Netapp-GNOC002:config.sameHA:warning]: Disk 0b.11.23 and other disks attached to the same port are dual-attached to the same adapter. For improved availability you should dual-attach them to separate adapters.
Mon Feb 26 07:17:09 GMT [Netapp-GNOC002:config.sameHA:warning]: Disk 0a.11.8 and other disks attached to the same port are dual-attached to the same adapter. For improved availability you should dual-attach them to separate adapters.
Mon Feb 26 07:21:07 GMT [Netapp-GNOC002:net.if.filterDrop:warning]: Protocol Filter: '5933' 'NBNS/UDP or NBDS/UDP' packets were dropped by the per-interface protocol filter during the last 24 hours.
Mon Feb 26 07:21:07 GMT [Netapp-GNOC002:net.if.mgmt.defaultGateway:warning]: route: Static or default route with gateway '10.184.56.1' is targeted to dedicated management port 'e0M'. Data traffic using this route might be throttled due to low bandwidth, or dropped if a protocol filter is configured.
Mon Feb 26 07:26:07 GMT [Netapp-GNOC002:coredump.findcore.partial:notice]: Partial core is missing 1 of 14 disks

Re: Disk state showing "partner" but only one controller is present.


Apparently your filer still believes it is part of an HA pair. Boot into Maintenance mode and use the ha-config command to check and, if necessary, reset the HA state. You may also need to remove the partner-sysid loader variable.

 

https://library.netapp.com/ecmdocs/ECMP1367947/html/GUID-14AF35BF-BC87-4226-801D-ED0CF3FA0B0F.html

 

https://library.netapp.com/ecmdocs/ECMP1210206/html/GUID-F8FF2BB3-38C4-4FBD-A152-0941CA645559.html
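The rough sequence would be something like the following. The command names are from memory of the 7-Mode LOADER and Maintenance mode, so verify them against the linked guides before running:

```
LOADER> printenv partner-sysid
LOADER> unsetenv partner-sysid
LOADER> saveenv
LOADER> boot_ontap
  (at the boot menu, choose option 5, Maintenance mode)
*> ha-config show
*> ha-config modify controller non-ha
*> ha-config modify chassis non-ha
*> halt
```

After booting back up, the disks previously stuck as "partner" should be removable/assignable.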

 

Re: Disk state showing "partner" but only one controller is present.

 

After changing the ha-config state from Maintenance mode, I set the disks to unowned with -f and assigned them again.

 

And it works. Thanks a lot for your help.

 

But in the future, if I need to enable HA, do I have to go into Maintenance mode again and follow the same steps, or can it be configured from advanced mode?
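As far as I know, ha-config is only available in Maintenance mode (not at the normal or advanced prompt), so changing the HA state back would again go through the boot menu. A sketch, to be verified against the HA configuration guide for your release:

```
  (boot menu option 5, Maintenance mode)
*> ha-config modify controller ha
*> ha-config modify chassis ha
*> halt
  (boot normally, then from the regular CLI once the partner is present)
Netapp-GNOC002> cf enable
Netapp-GNOC002> cf status
```

You would also need the second controller installed, the HA interconnect cabled and up, and the cf/HA license on both nodes; cf enable will fail without a reachable partner.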