
DoD disk sanitize fails in maintenance mode

craig_schuer

Hello, 

 

I'm trying to run a DoD 'disk sanitize start' on a number of old disk shelves we have, to get them ready to be sold. We have a company policy that requires those drives to be DoD wiped before leaving our possession. I'm using a FAS8200 lab controller that has been upgraded to 9.7P10, and I'm wiping in maintenance mode. Drive and shelf models differ across the stack, but that hasn't been a problem in the past. 

 

I have already been able to wipe several shelves (I'm doing 4x 24-drive shelves at a time), but for some reason the current stack I'm trying to wipe won't cooperate. Every time I run 'disk sanitize start -p 0x55 -p 0xaa -r -c 3 disk_name', the wipe starts (verified with 'disk sanitize status'), but when I queue up a 2nd drive, it spits out a string of messages like the ones below... 

 

*> disk sanitize start -p 0x55 -p 0xaa -r -c 3 1d.41.19
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.22 Shelf 56 Bay 22 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6U58A] UID [500605BA:0068A5E4:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.16 Shelf 56 Bay 16 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6JT8A] UID [500605BA:0067B070:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.10 Shelf 56 Bay 10 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6VU5A] UID [500605BA:00687CC4:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.15 Shelf 56 Bay 15 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6U0PA] UID [500605BA:0067AFEC:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.4 Shelf 56 Bay 4 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6UAAA] UID [500605BA:0067B004:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.3 Shelf 56 Bay 3 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6U6LA] UID [500605BA:00686F48:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.23 Shelf 56 Bay 23 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG4T7ZA] UID [500605BA:0068BB8C:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.11 Shelf 56 Bay 11 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6XG6A] UID [500605BA:0067AB54:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.12 Shelf 56 Bay 12 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6VREA] UID [500605BA:0068B0A4:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.20 Shelf 56 Bay 20 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG59K9A] UID [500605BA:0067AE9C:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.7 Shelf 56 Bay 7 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG4T8AA] UID [500605BA:0067C624:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.2 Shelf 56 Bay 2 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG4T8HA] UID [500605BA:00687D04:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.13 Shelf 56 Bay 13 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6U8KA] UID [500605BA:0068AC4C:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.21 Shelf 56 Bay 21 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6TZRA] UID [500605BA:0068AF80:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.8 Shelf 56 Bay 8 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG6XE1A] UID [500605BA:0067AAF0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.
Jun 28 11:18:47 [seaclust01-05:raid.assim.disk.nolabels:EMERGENCY]: Disk 1a.56.9 Shelf 56 Bay 9 [NETAPP X306_HMARK02TSSM NA04] S/N [YGG4T8DA] UID [500605BA:0068B098:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] has no valid labels. It will be taken out of service to prevent possible data loss.

 

After attempting to wipe the 2nd drive, 'disk sanitize status' comes back with nothing. The 1st drive then shows as FAILED in 'disk show -v', and running 'disk unfail' doesn't unfail it. 

 

Looking at different KBs and discussions, I tried a label wipe, label wipev1, and label makespare of the drive, but that doesn't fix it. Same thing: I'm able to start the sanitize process on one drive, but then everything fails when I try to sanitize a 2nd drive. 
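
For reference, the per-disk recovery sequence I attempted in maintenance mode looked roughly like this (the disk name below is just an example from this stack):

*> label wipe 1d.41.19
*> label wipev1 1d.41.19
*> label makespare 1d.41.19
*> disk sanitize start -p 0x55 -p 0xaa -r -c 3 1d.41.19
*> disk sanitize status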

 

Running 'aggr status -r' also shows the same output as above. 

 

Please help! 

1 ACCEPTED SOLUTION

Michael_Borbe

A controller reboot and then 'label makespare <diskname>' in maintenance mode fixed my problem.


6 REPLIES

craig_schuer

Thanks for the reply, but I am in maintenance mode (ONTAP isn't installed) and none of those commands work. Those drives seem to have "no label", not "bad label". I've tried unfailing the drive, but get this response...

 

*> disk unfail 1d.41.20

Failure bytes for unfail were not written due to error 40;

If failure area has reached its capacity, it may be cleared by reloading disk firmware.

If the disk has been previously failed by RAID, the disk may need to be removed from
the failed disk registry.

To see the disks in the registry:
>raid_config info showfdr

To delete a disk in the registry:
>raid_config info deletefdr

 

Neither of those two commands seems to work in 7-Mode. 

 

I also tried zeroing the drives in maintenance mode, but that command doesn't seem to be available either. 

 

AlexDawson

Do you know the history of the drives? Try connecting only those drives to the system, then do an option 4 boot and see if it can create a root aggregate on them. If they have already been sanitized, they may be "failed". Or they may have been degaussed, or they may be encrypted disks in locked mode.
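
In case it helps, a rough sketch of that check, assuming the controller is sitting at the LOADER prompt (menu numbering and wording can vary slightly by release):

LOADER> boot_ontap menu
(wait for the boot menu, then choose option 4, "Clean configuration and initialize all disks")

If the drives are healthy, that should be able to build a root aggregate on them; if they are failed, degaussed, or locked, it won't.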

Michael_Borbe

Hello Craig, I have exactly the same problem. Have you found a solution?

Michael_Borbe

A controller reboot and then 'label makespare <diskname>' in maintenance mode fixed my problem.
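
A minimal sketch of that sequence, using one of the disk names from earlier in the thread purely as an example (repeat the makespare for each affected disk):

*> halt
LOADER> boot_ontap maint
*> label makespare 1a.56.22
*> disk sanitize start -p 0x55 -p 0xaa -r -c 3 1a.56.22
*> disk sanitize status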

AlexDawson

'priv set diag; disk unfail -s <slot>' may also have worked.
