ONTAP Hardware
Good day. Let me start by stating that I do not want to save any data; this is starting from scratch.
I have moved 12 drives from a FAS2020 into a DS4243 that is run from a FAS2040. I thought the drives from the 2020 had been zeroed out, but I was mistaken. I can see the drives in the CLI of both controllers on the 2040, but they do not show up if I run DISK SHOW or DISK SHOW -N. I do see them listed when I run DISK SHOW -A (see below), but they still list the old 2020 filer names (sv-storage-1b or 2b). I was not able to remove ownership because of the error "disk remove_ownership: Disk 0d.01.4 does not exist". Is there any way to get these drives working in the DS4243 without having to go back to the 2020 to zero out the drives? This is a repurposed design for a disk-to-disk backup solution I am working on. Thanks!
The 0c.00.xx disks are the FAS2040's own drives (active/active, 3 drives per controller).
The 0d.01.xx disks are in the DS4243 with the newly added 1TB SATA drives (replacing the 300GB SAS drives).
sv-storage-1a> disk show -a
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
0c.00.2 sv-storage-1a((***) ) Pool0 WD-(***)
0c.00.0 sv-storage-1a((***) ) Pool0 WD-(***)
0c.00.1 sv-storage-1a((***) ) Pool0 WD-(***)
0c.00.11 sv-storage-2a((***) ) Pool0 WD-(***)
0c.00.9 sv-storage-2a((***) ) Pool0 (***)
0c.00.10 sv-storage-2a((***) ) Pool0 (***)
0d.01.4 sv-storage-2b((***) ) Pool0 (***)
0d.01.1 sv-storage-1b((***) ) Pool0 (***)
0d.01.0 sv-storage-2b((***) ) Pool0 (***)
0d.01.7 sv-storage-1b((***) ) Pool0 (***)
0d.01.5 sv-storage-1b((***) ) Pool0 (***)
0d.01.3 sv-storage-1b((***) ) Pool0 (***)
0d.01.2 sv-storage-2b((***) ) Pool0 (***)
0d.01.10 sv-storage-2b((***) ) Pool0 (***)
0d.01.8 sv-storage-2b((***) ) Pool0 (***)
0d.01.6 sv-storage-2b((***) ) Pool0 (***)
0d.01.9 sv-storage-1b((***) ) Pool0 (***)
0d.01.11 sv-storage-1b((***) ) Pool0 (***)
Hi,
This is a disk ownership issue. Normally you would remove ownership on the older controller before adding those disk drives to the new controller, but you do not need to go back to the older controller to do this. You can force-assign the disks from the new controller instead, using the following command:
"disk assign disk_name -s system_id -f"
For example:
disk assign 0d.01.11 -s 1903120345 -f
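If you don't have the new controller's system ID handy, it is shown near the top of sysconfig output on that controller, and it is also the number in parentheses after the owner name in disk show -a:
sv-storage-2a> sysconfig
sv-storage-2a> disk show -a
Depending on the Data ONTAP release, you may need to be in priv set advanced for the forced assign to take.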
Isn't disk remove_ownership available in priv set advanced mode?
If you run storage show, does "SANOWN not enabled" appear at the bottom of the output? If it does, you don't have software disk ownership enabled.
jgshntap,
Below is the output of storage show. To clarify, remove_ownership is available in priv advanced mode, but it errors out with "disk remove_ownership: Disk 0d.01.4 does not exist".
sv-storage-2a*> storage show
Slot: 0a
Description: Fibre Channel Host Adapter 0a (QLogic 2432 rev. 2)
Firmware Rev: 4.5.2
FC Node Name: 5:00a:0
FC Packet Size: 2048
Link Data Rate: 1 Gbit
SRAM Parity: Yes
External GBIC: No
State: Disabled
In Use: No
Redundant: Yes
Slot: 0b
Description: Fibre Channel Host Adapter 0b (QLogic 2432 rev. 2)
Firmware Rev: 4.5.2
FC Node Name: 5:00a:098
FC Packet Size: 2048
Link Data Rate: 1 Gbit
SRAM Parity: Yes
External GBIC: No
State: Disabled
In Use: No
Redundant: Yes
Slot: 0c
Description: SAS Host Adapter 0c (LSI Logic 1068E rev. B3)
Firmware Rev: 1.31.02.00
Base WWN: 5:00a
State: Enabled
In Use: Yes
Redundant: No
Phy State: [0] Enabled, 3.0Gb/s (9)
[1] Enabled, 3.0Gb/s (9)
[2] Enabled, 3.0Gb/s (9)
[3] Enabled, 3.0Gb/s (9)
Slot: 0d
Description: SAS Host Adapter 0d (LSI Logic 1068E rev. B3)
Firmware Rev: 1.31.02.00
Base WWN: 5:00a0
State: Enabled
In Use: No
Redundant: Yes
Phy State: [0] Enabled, 3.0Gb/s (9)
[1] Enabled, 3.0Gb/s (9)
[2] Enabled, 3.0Gb/s (9)
[3] Enabled, 3.0Gb/s (9)
Slot: 0e
Description: IDE Host Adapter 0e
No hubs found.
Shelf name: PARTNER.shelf0
Channel: PARTNER
Module: A
Shelf id: 0
Shelf UID: 50:0c
Shelf S/N: N/A
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[IN0 ] OK 7 3.0 0 0 0 0 0 2
[IN1 ] OK 7 3.0 0 0 0 0 0 2
[IN2 ] OK 7 3.0 0 0 0 0 0 2
[IN3 ] OK 7 3.0 0 0 0 0 0 2
[OUT0] UNUSED 0 NA 0 0 0 0 0 1
[OUT1] UNUSED 0 NA 0 0 0 0 0 1
[OUT2] UNUSED 0 NA 0 0 0 0 0 1
[OUT3] UNUSED 0 NA 0 0 0 0 0 1
[ 0 ] OK 7 3.0 0 0 0 0 0 6
[ 1 ] OK 7 3.0 0 0 0 0 0 8
[ 2 ] OK 7 3.0 0 0 0 0 0 6
[ 3 ] EMPTY 0 NA 0 0 0 0 0 0
[ 4 ] EMPTY 0 NA 0 0 0 0 0 0
[ 5 ] EMPTY 0 NA 0 0 0 0 0 0
[ 6 ] EMPTY 0 NA 0 0 0 0 0 0
[ 7 ] EMPTY 0 NA 0 0 0 0 0 0
[ 8 ] EMPTY 0 NA 0 0 0 0 0 0
[ 9 ] OK 7 3.0 0 0 0 0 0 6
[ 10 ] OK 7 3.0 0 0 0 0 0 16
[ 11 ] OK 7 3.0 0 0 0 0 0 6
Shelf name: 0c.shelf0
Channel: 0c
Module: B
Shelf id: 0
Shelf UID: 50:0
Shelf S/N: N/A
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[IN0 ] OK 7 3.0 0 0 0 0 0 2
[IN1 ] OK 7 3.0 0 0 0 0 0 2
[IN2 ] OK 7 3.0 0 0 0 0 0 2
[IN3 ] OK 7 3.0 0 0 0 0 0 2
[OUT0] UNUSED 0 NA 0 0 0 0 0 1
[OUT1] UNUSED 0 NA 0 0 0 0 0 1
[OUT2] UNUSED 0 NA 0 0 0 0 0 1
[OUT3] UNUSED 0 NA 0 0 0 0 0 1
[ 0 ] OK 7 3.0 0 0 0 0 0 6
[ 1 ] OK 7 3.0 0 0 0 0 0 9
[ 2 ] OK 7 3.0 0 0 0 0 0 7
[ 3 ] EMPTY 0 NA 0 0 0 0 0 0
[ 4 ] EMPTY 0 NA 0 0 0 0 0 0
[ 5 ] EMPTY 0 NA 0 0 0 0 0 0
[ 6 ] EMPTY 0 NA 0 0 0 0 0 0
[ 7 ] EMPTY 0 NA 0 0 0 0 0 0
[ 8 ] EMPTY 0 NA 0 0 0 0 0 0
[ 9 ] OK 7 3.0 0 0 0 0 0 7
[ 10 ] OK 7 3.0 0 0 0 0 0 17
[ 11 ] OK 7 3.0 0 0 0 0 0 7
Shelf name: PARTNER.shelf1
Channel: PARTNER
Module: A
Shelf id: 1
Shelf UID: 50:
Shelf S/N:
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[SQR0] OK 7 3.0 0 0 0 0 0 1
[SQR1] OK 7 3.0 0 0 0 0 0 1
[SQR2] OK 7 3.0 0 0 0 0 0 1
[SQR3] OK 7 3.0 0 0 0 0 0 1
[CIR4] EMPTY 7 NA 0 0 0 0 0 0
[CIR5] EMPTY 7 NA 0 0 0 0 0 0
[CIR6] EMPTY 7 NA 0 0 0 0 0 0
[CIR7] EMPTY 7 NA 0 0 0 0 0 0
[ 0 ] OK 7 3.0 0 0 0 0 0 7
[ 1 ] OK 7 3.0 0 0 0 0 0 4
[ 2 ] OK 7 3.0 0 0 0 0 0 6
[ 3 ] OK 7 3.0 0 0 0 0 0 6
[ 4 ] OK 7 3.0 0 0 0 0 0 4
[ 5 ] OK 7 3.0 0 0 0 0 0 4
[ 6 ] OK 7 3.0 0 0 0 0 0 4
[ 7 ] OK 7 3.0 0 0 0 0 0 4
[ 8 ] OK 7 3.0 0 0 0 0 0 4
[ 9 ] OK 7 3.0 0 0 0 0 0 4
[ 10 ] OK 7 3.0 0 0 0 0 0 4
[ 11 ] OK 7 3.0 0 0 0 0 0 4
[ 12 ] EMPTY 7 NA 0 0 0 0 0 0
[ 13 ] EMPTY 7 NA 0 0 0 0 0 0
[ 14 ] EMPTY 7 NA 0 0 0 0 0 0
[ 15 ] EMPTY 7 NA 0 0 0 0 0 0
[ 16 ] EMPTY 7 NA 0 0 0 0 0 0
[ 17 ] EMPTY 7 NA 0 0 0 0 0 0
[ 18 ] EMPTY 7 NA 0 0 0 0 0 0
[ 19 ] EMPTY 7 NA 0 0 0 0 0 0
[ 20 ] EMPTY 7 NA 0 0 0 0 0 0
[ 21 ] EMPTY 7 NA 0 0 0 0 0 0
[ 22 ] EMPTY 7 NA 0 0 0 0 0 0
[ 23 ] EMPTY 7 NA 0 0 0 0 0 0
[SIL0] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL1] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL2] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL3] DIS/UNUSD 7 NA 0 0 0 0 0 0
Shelf name: 0d.shelf1
Channel: 0d
Module: B
Shelf id: 1
Shelf UID: 50:
Shelf S/N:
Term switch: N/A
Shelf state: ONLINE
Module state: OK
Partial Path Link Invalid Running Loss Phy CRC Phy
Disk Port Timeout Rate DWord Disparity Dword Reset Error Change
Id State Value (ms) (Gb/s) Count Count Count Problem Count Count
--------------------------------------------------------------------------------------------
[SQR0] EMPTY 7 NA 0 0 0 0 0 0
[SQR1] EMPTY 7 NA 0 0 0 0 0 0
[SQR2] EMPTY 7 NA 0 0 0 0 0 0
[SQR3] EMPTY 7 NA 0 0 0 0 0 0
[CIR4] OK 7 3.0 0 0 0 0 0 1
[CIR5] OK 7 3.0 0 0 0 0 0 1
[CIR6] OK 7 3.0 0 0 0 0 0 1
[CIR7] OK 7 3.0 0 0 0 0 0 1
[ 0 ] UNKWN 7 NA 0 0 0 0 0 1
[ 1 ] UNKWN 7 NA 0 0 0 0 0 1
[ 2 ] UNKWN 7 NA 0 0 0 0 0 1
[ 3 ] UNKWN 7 NA 0 0 0 0 0 1
[ 4 ] UNKWN 7 NA 0 0 0 0 0 1
[ 5 ] UNKWN 7 NA 0 0 0 0 0 1
[ 6 ] UNKWN 7 NA 0 0 0 0 0 1
[ 7 ] UNKWN 7 NA 0 0 0 0 0 1
[ 8 ] UNKWN 7 NA 0 0 0 0 0 1
[ 9 ] UNKWN 7 NA 0 0 0 0 0 1
[ 10 ] UNKWN 7 NA 0 0 0 0 0 1
[ 11 ] UNKWN 7 NA 0 0 0 0 0 1
[ 12 ] EMPTY 7 NA 0 0 0 0 0 0
[ 13 ] EMPTY 7 NA 0 0 0 0 0 0
[ 14 ] EMPTY 7 NA 0 0 0 0 0 0
[ 15 ] EMPTY 7 NA 0 0 0 0 0 0
[ 16 ] EMPTY 7 NA 0 0 0 0 0 0
[ 17 ] EMPTY 7 NA 0 0 0 0 0 0
[ 18 ] EMPTY 7 NA 0 0 0 0 0 0
[ 19 ] EMPTY 7 NA 0 0 0 0 0 0
[ 20 ] EMPTY 7 NA 0 0 0 0 0 0
[ 21 ] EMPTY 7 NA 0 0 0 0 0 0
[ 22 ] EMPTY 7 NA 0 0 0 0 0 0
[ 23 ] EMPTY 7 NA 0 0 0 0 0 0
[SIL0] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL1] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL2] DIS/UNUSD 7 NA 0 0 0 0 0 0
[SIL3] DIS/UNUSD 7 NA 0 0 0 0 0 0
DISK SHELF BAY SERIAL VENDOR MODEL REV
--------------------- --------- ---------------- -------- ---------- ----
0c.00.0 0 0 WD-NETAPP X282_WSUMM NA00
0c.00.1 0 1 WD-NETAPP X282_WSUMM NA00
0c.00.2 0 2 WD-NETAPP X282_WSUMM NA00
0c.00.9 0 9 NETAPP X282_SMOOS NA02
0c.00.10 0 10 NETAPP X282_SMOOS NA02
0c.00.11 0 11 WD-NETAPP X282_WSUMM NA00
SHARED STORAGE HOSTNAME SYSTEM ID
------------------------- ----------
sv-storage-1a
sv-storage-2a (self)
sv-storage-2a*>
sv-storage-2a*>
Phani2,
To be clear, are you saying I need to remove these drives and bring them back to the FAS2020 to remove disk ownership there? Isn't there any way I can do that on the 2040? What if the 2020 were dead? Again, I do not want to save any data on these drives; I just want to reuse them. Also, since the 2020 was set up with two controllers and had the aggr built on 6 drives per controller, would I even be able to remove ownership? Would this need to be done in maintenance mode, and would it let me do this if there is an aggr or vol on the drives?
I ask all these questions because the drive brackets are not the same for the 2020 and the DS4243, and that means a lot of screws to deal with.
Thanks in advance!
OK, so you are running software ownership... this is good.
Did you try to sanitize the drives?
I have pulled all but one drive from the shelf. This is what I am getting...
sv-storage-1a*> disk show -a
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
0c.00.2 sv-storage-1a() Pool0 WD-
0c.00.0 sv-storage-1a() Pool0 WD-
0c.00.1 sv-storage-1a() Pool0 WD-
0c.00.11 sv-storage-2a() Pool0 WD-
0c.00.9 sv-storage-2a() Pool0
0c.00.10 sv-storage-2a() Pool0
0d.01.11 sv-storage-1b() Pool0
sv-storage-1a*> disk remove_ownership 0d.01.11
disk remove_ownership: Disk 0d.01.11 is not owned by this node.
sv-storage-1a*> disk sanitize start 0d.01.11
The cluster is currently disabled. The cluster must remain disabled during sanitization. Would you like to continue (y/n)? y
WARNING: The sanitization process may include a disk format.
If the system is power cycled or rebooted during a disk format
the disk may become unreadable. The process will attempt to
restart the format after 10 minutes.
The time required for the sanitization process may be quite long
depending on the size of the disk and the number of patterns and
cycles specified.
Do you want to continue (y/n)? y
disk start: Couldn't find 0d.01.11.
sv-storage-1a*>
So I'm clear, the current cluster is 1a/2a?
And when you put in disk 0d.01.11, it's listed as being owned by a node that doesn't exist?
You got it (bad naming of the controllers, I know).
The 1b/2b cluster is the 2020 that I pulled the drives from, and it is no longer connected to any of this. These 1TB drives are now in the DS4243 attached to the sv-storage-1a/sv-storage-2a cluster, and that shelf contains only the 1TB drives that were pulled from the 2020. No other drives are in the 4243.
0d.01.11 sv-storage-1b() Pool0
How the hell do you post a new question on this board!?!? My god, it should not be this hard!!
Ben,
Relax, it's not hard; you just need to learn how to navigate the communities and click "Start new discussion".
OK, back to the disk issue...
Interesting issue... Try to fail the disk:
sv-storage-1a*> disk show
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
0c.00.2 sv-storage-1a() Pool0 WD-
0c.00.0 sv-storage-1a() Pool0 WD-
0c.00.1 sv-storage-1a() Pool0 WD-
0c.00.11 sv-storage-2a() Pool0 WD-
0c.00.9 sv-storage-2a() Pool0
0c.00.10 sv-storage-2a() Pool0
sv-storage-1a*> disk show -a
DISK OWNER POOL SERIAL NUMBER
------------ ------------- ----- -------------
0c.00.2 sv-storage-1a() Pool0 WD-
0c.00.0 sv-storage-1a() Pool0 WD-
0c.00.1 sv-storage-1a() Pool0 WD-
0c.00.11 sv-storage-2a() Pool0 WD-
0c.00.9 sv-storage-2a() Pool0
0c.00.10 sv-storage-2a() Pool0
0d.01.11 sv-storage-1b() Pool0
sv-storage-1a*> disk fail 0d.01.11
disk fail: Disk 0d.01.11 not found
sv-storage-1a*>
I'm curious, are there any foreign aggregates?
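In 7-Mode, an aggregate that comes in on transplanted disks normally shows up as offline/foreign in the aggregate listing, so a quick read-only check on each controller would be something like:
sv-storage-1a*> aggr status
sv-storage-1a*> aggr status -r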
I don't know how to look for that but I ran this on each controller:
sv-storage-1a*> aggr status -v
Aggr State Status Options
aggr0 online raid4, aggr root, diskroot, nosnap=off,
raidtype=raid4, raidsize=7,
ignore_inconsistent=off,
snapmirrored=off,
resyncsnaptime=60,
fs_size_fixed=off,
snapshot_autodelete=on,
lost_write_protect=on
Volumes: vol0
Plex /aggr0/plex0: online, normal, active
RAID group /aggr0/plex0/rg0: normal
sv-storage-2a*> aggr status -v
Aggr State Status Options
aggr0 online raid4, aggr root, diskroot, nosnap=off,
raidtype=raid4, raidsize=7,
ignore_inconsistent=off,
snapmirrored=off,
resyncsnaptime=60,
fs_size_fixed=off,
snapshot_autodelete=on,
lost_write_protect=on
Volumes: vol0
Plex /aggr0/plex0: online, normal, active
RAID group /aggr0/plex0/rg0: normal
You've got me for now...
You are on old hardware for sure, and running your aggr in RAID4 (blah!). Just sayin'...
OK, thanks for your help. I moved the drives back to the 2020 and will try to clean them there, then move them back.
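Roughly what I am planning to run on the 2020 once the drives are back in it (the aggregate name is a placeholder, and this assumes the old aggr on those drives is not the root aggr the 2020 boots from):
sv-storage-1b> aggr offline old_aggr
sv-storage-1b> aggr destroy old_aggr
sv-storage-1b> disk zero spares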
And just to comment on the RAID4...
I am building this strictly as a backup repository before I go to tape or replicate using our backup solution's replication option. The RAID4 was intentionally set up on each controller so that I do not have as many drives installed and running; this gives me spare parts and saves power, as the rest of the drives are actually pulled out about one inch from the array. Three drives per controller is all that is set up for the aggr/root vol, which is why the DS4243 shelf is needed. I have 12 x 1TB drives that I am planning to run in a RAID group for the backup data (rough sketch below).
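Something along these lines, once the ownership mess is sorted out (the aggregate name, RAID type, and raidsize are just placeholders, and I may hold one of the twelve back as a spare):
sv-storage-1a> aggr create backup_aggr -t raid_dp -r 12 11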
Hi,
please try to force the ownership command in advanced mode: disk remove_ownership -f 0d.01.4
I did try this option before, but it failed. Then I noticed that one controller didn't see all the drives on the DS4243. When I ran disk remove_ownership -f 0d.01.0 on the other controller, it worked.
Note for future readers: this failed if you didn't include the "-f" switch.
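A rough outline of the full sequence once a controller can actually see the disks (the owner name below is just an example, exact flag support can vary by Data ONTAP release, and you can either repeat the assign per disk or leave disk.auto_assign on and let the controller claim the unowned drives):
sv-storage-2a> priv set advanced
sv-storage-2a*> disk remove_ownership -f 0d.01.0
sv-storage-2a*> disk assign 0d.01.0 -o sv-storage-2a -f
sv-storage-2a*> disk zero spares
sv-storage-2a*> priv set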
Ah, the -f switch did it.. Didn't think to use force.. Nice work