
Disk sanitize interrupted

Fredric
379 Views

Hi,

I'm trying to wipe the entire AFF A300 cluster before sending it back to NetApp (trade-in).

I did it on a FAS2620 without any problems, but on the A300 it gets interrupted before it completes (I have seen at most 50% done).

I boot ONTAP to the boot menu, run option 9a on both nodes, boot node1 into Maintenance mode, and then run "disk assign all".
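For reference, the sequence looks roughly like this (menu prompts abbreviated):

LOADER> boot_ontap menu
Selection? 9a     (unpartition all disks and remove their ownership, on both nodes)
Selection? 5      (Maintenance mode boot, on node1 only)
*> disk assign all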

After starting disk sanitization (disk sanitize start -p 0x22 <disk>), I get some warnings/errors, but it starts.

*> disk sanitize start -r 0d.01.0
Apr 23 08:06:55 [cl01-02:raid.assim.tree.noRootVol:error]: No usable root volume was found!
ERROR: Failed to recognize disks: . Still continuing...
WARNING: Disk sanitization will remove all data from the selected disks and cannot be recovered.
Do you want to continue (y/n)? y

WARNING: The sanitization process might include a disk format.
If the system is power-cycled or rebooted during a disk format,
the disk might become unreadable. The process will attempt to
restart the format after 10 minutes.

The time required for the sanitization process might be significant,
depending on the size of the disk and the number of patterns and
cycles specified.
Do you want to continue (y/n)? y
Apr 23 08:06:59 [cl01-02:disk.outOfService:notice]: Drive 0d.01.0 (S3SGNF0M306362): message received. Power-On Hours: N/A, GList Count: 0, Drive Info: Disk 0d.01.0 Shelf 1 Bay 0 [NETAPP X357_S16433T8ATE NA55] S/N [S3SGNF0M306362] UID [5002538B:4932D5C0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000].
The disk sanitization process has been initiated. When it is complete, a message will be displayed on the console.
Apr 23 08:06:59 [cl01-02:disk.releaseFailed:error]: Disk release failed on 0a.01.0 CDB 0x5f:0601 - SCSI:not ready (2 4 1b)
*> disk sanitize status
ERROR: Failed to recognize disks: No disks to read.
Apr 23 08:08:24 [cl01-02:raid.assim.tree.noRootVol:error]: No usable root volume was found!
. Still continuing...
Sanitization for 0a.01.0 is 4% complete.

*> disk sanitize status
Apr 23 08:22:28 [cl01-02:raid.assim.tree.noRootVol:error]: No usable root volume was found!
ERROR: Failed to recognize disks: . Still continuing...
Sanitization for 0a.01.0 is 47% complete.
*> disk sanitize status
Apr 23 08:40:17 [cl01-02:raid.assim.tree.noRootVol:error]: No usable root volume was found!
ERROR: Failed to recognize disks: . Still continuing...
*> Apr 23 08:40:17 [cl01-02:diskown.diskWasStolen:notice]: Disk 0d.01.0 (S/N S3SGNF0M306362) has had its ownership changed so that it is no longer owned by this system. This can lead to a system panic if the disk is a filesytem disk.

Why did the ownership change? Is that why it fails?

I have a case open with NetApp, but since the service agreement ended last month I don't know if they will help me; I'm waiting for an answer.

//Fredric


4 REPLIES

TMACMD
337 Views

You are probably OK. The sanitization process is different on AFF.

Every time it runs you lose a wear cycle. Since it's not spinning media, it goes faster.

The NetApp SSD sanitizer writes zeros to every accessible area of the drive. That's it. It's not magnetic; it's digital.

Fredric
297 Views

Hi, thanks for the reply.

But "probably" is not good enough. I need to know that no data is left on the disks. How can I verify that it's OK?

I see a lot of these in the system console log:

Apr 23 10:59:54 [cl01-01:disk.releaseFailed:error]: Disk release failed on 0a.01.11 CDB 0x5f:0601 - SCSI:not ready (2 4 1b)

I haven't found any documentation saying that all these types of errors are OK, or that All Flash systems should be treated differently.
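If the sanitizer really zeroes every accessible area like TMACMD says, the only check I can come up with is to pull a drive and read it back on a Linux box, roughly like this (just a sketch; /dev/sdX is whatever the drive shows up as):

# spot-check the first MiB; all-zero data collapses to a single "*" line
dd if=/dev/sdX bs=1M count=1 2>/dev/null | hexdump -C

# or compare the whole drive against zeros:
# "cmp: EOF on /dev/sdX" means every byte read back as zero
cmp /dev/zero /dev/sdX

But I'd rather have official documentation than check drive by drive.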

 

//Fredric

andris
288 Views

Is the HA partner down (at LOADER) or removed? You don't want your HA partner to be active while you are doing this. Just assign the disks to one node.
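Roughly, from Maintenance mode on the node doing the wipe (partner halted at the LOADER prompt):

*> disk show -v       (every disk should list this node as its owner)
*> disk assign all    (claims anything still unowned)

If the partner comes up while the sanitize is running, it can take ownership back, which would match the diskWasStolen notice in your log.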

Fredric
287 Views

Hi, all disks were assigned to the node before running the disk sanitization.

I got an answer from NetApp support saying that the disks were successfully sanitized.

So I will close this "case".

Thank you for your help!

//Fredric
