ONTAP Discussions

Zero/erase/reuse disks with 7-mode aggregate in cDOT/cmode

filipsneppe

Hi,

I set up my first real-life cDOT system yesterday, and I am planning to reuse some disk shelves that still housed some 7-Mode aggregates. After changing the ownership of the disks, I can see remnants of the old 7-Mode aggregates on those disks (from the "disk" command), but I cannot simply offline/destroy those 7-Mode aggregates the way I was used to doing in 7-Mode:

na-croupt::*> disk show
                     Usable           Container
Disk                   Size Shelf Bay Type        Position   Aggregate Owner
---------------- ---------- ----- --- ----------- ---------- --------- --------
na-croupt-01:0c.16  827.7GB     1   0 aggregate   data       aggr0_sata1t
                                                                       na-croupt-01
na-croupt-01:0c.17  827.7GB     1   1 aggregate   data       aggr1_sata1t
                                                                       na-croupt-01
na-croupt-01:0c.20  827.7GB     1   4 aggregate   data       aggr1_sata1t
                                                                       na-croupt-01
na-croupt-01:0c.23  827.7GB     1   7 aggregate   data       aggr1_sata1t
                                                                       na-croupt-01
. . .

na-croupt::*> aggr show
Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_na_croupt_01_root
           239.0GB   11.14GB   95% online       1 na-croupt-01     raid_dp,
                                                                   normal
aggr0_na_croupt_02_root
           239.0GB   11.14GB   95% online       1 na-croupt-02     raid_dp,
                                                                   normal
2 entries were displayed.

na-croupt::*> aggr delete -aggregate aggr1_sata1t

Warning: Are you sure you want to destroy aggregate "aggr1_sata1t"?
         {y|n}: y

Error: command failed: Aggregate "aggr1_sata1t" does not exist.

I also tried the following, to no avail:

- running the aggr destroy command from the nodeshell, but the "destroy" subcommand has been removed;

- halting one node and booting into maintenance mode to destroy the aggregate from there, but I just get:

*> aggr status
the system appears to have no disks!
unable to run aggr command
No root aggregate or root traditional volume found.
You must specify a root aggregate or traditional volume with
"aggr options <name> root" before rebooting the system.

Presumably this is because halting one node moved all the disks to the other node (behavior that seems different from 7-Mode; this is a switchless two-node cluster).
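For what it's worth, this is what I would check from the surviving node to confirm the disks moved (my guess at the relevant commands, not something I was told to run):

na-croupt::> storage failover show
na-croupt::> storage disk show -fields owner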

My next course of action will be to unassign the disks in cDOT, boot one node into maintenance mode, assign the disks there, and hopefully destroy the aggregate from maintenance mode. Ideally, though, I would like to know how to solve this without having to reboot at all, as I assume this is a scenario that can happen in real life (i.e. reusing disk shelves from a 7-Mode configuration).
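For reference, this is roughly the sequence I have in mind (disk names taken from the output above; untested, so treat it as a sketch rather than a verified procedure):

na-croupt::*> storage disk removeowner -disk na-croupt-01:0c.17
(repeat for the remaining disks of aggr1_sata1t, halt the node, and boot it
into maintenance mode from the LOADER prompt with "boot_ontap maint")
*> disk assign 0c.17
*> aggr status
*> aggr destroy aggr1_sata1t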

Anybody have any ideas? Thanks in advance!

Filip

3 REPLIES

scottgelb

Call support and they can "reenable" something in the nodeshell to delete the aggregate with the node up and running. There is an alternate method of adding the aggregate to the VLDB so the cluster can destroy it, but support will most likely reenable the aggr destroy command in the nodeshell temporarily to resolve this. It will take only a few minutes, and then you can zero and reuse the drives.
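Once the foreign aggregate is gone, the zero-and-reuse part is just the normal spare workflow. Something along these lines from the clustershell should do it (node and disk names borrowed from your output, so adjust as needed):

na-croupt::> storage disk assign -disk na-croupt-01:0c.17 -owner na-croupt-01
na-croupt::> system node run -node na-croupt-01 -command disk zero spares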


filipsneppe

Thanks, it's indeed possible, and it's described in a KB article (I think kb1013046) that is not publicly available. Support was able to help me out. Thanks!

scottgelb

Very good. I didn't want to post the commands since support should be involved in these, but once you know, you can do it again. I recommend clearing the disks out on the 7-Mode system before connecting them to cDOT, but often we have a shelf and no 7-Mode system to connect it to first... a quarantined single-node cluster is possible now too. I really like where we are with cDOT now, and I am using 8.2P4 on all our installs.
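For anyone who still has the 7-Mode system attached, the pre-move cleanup is the usual sequence. A minimal sketch, assuming a hypothetical 7-Mode filer named na7mode and an aggregate that no longer contains any volumes:

na7mode> aggr offline aggr1_sata1t
na7mode> aggr destroy aggr1_sata1t
(answer "y" at the confirmation prompt)
na7mode> disk zero spares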
