ONTAP Discussions

DS4243 shelf moved from a 7-Mode to a Cluster-Mode system

ESISMONDO

Hello everyone,

we are running into an issue: we moved a DS4243 shelf from a decommissioned 7-Mode system to a new Cluster-Mode system. The disks in this shelf were neither zeroed nor set as spares before the move.

We can see all 24 disks from the clustered system. The problem is that the disks show up as "unowned" rather than "spare", so we cannot zero them in Cluster-Mode with the storage disk zerospares command.
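
(For reference, this is roughly what we are running from the cluster shell; since the disks are unowned rather than spares, the second command has nothing to act on:)

       ::> storage disk show -container-type unassigned
       ::> storage disk zerospares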

This also leads to strange behaviour when we try any operation on those disks. To be specific, if we assign one of these disks to a controller, the system creates a new aggregate with a RAID size of 20, containing that single disk plus 19 "invisible", non-existent disks in a FAILED state. The aggregate then shows up nowhere because it is also put in an offline state.
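
(The assignment itself is just something like the following, with placeholder disk and node names; the phantom offline aggregate is created right afterwards:)

       ::> storage disk assign -disk 1.10.0 -owner node01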

Any suggestions on how we should proceed?

Thank you very much.

Emanuele

10 REPLIES

aborzenkov

Yes, the system imports the aggregate located on these disks. You should have destroyed the aggregate and zeroed the spares before moving the shelf to another filer.

Just destroy this new aggregate; that's all. You only need to do it once, after all the necessary disks have been assigned.
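
Roughly, with a placeholder aggregate name:

       ::> storage aggregate delete -aggregate <foreign_aggr>
       ::> storage disk zerospares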

ESISMONDO

Thank you for the answer aborzenkov.

The problem is that the system imports the original aggregates when I assign the disks to a controller (no problem in doing that), but when I then try to remove those aggregates, the system answers that it cannot find the aggregate I want to delete...
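
That is, something like this, with the aggregate name as reported on the disks:

       ::> storage aggregate delete -aggregate <imported_aggr>

comes back saying it cannot find the aggregate.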

So I'm stuck.

Do you think there is a solution, or is the only option to power the old 7-Mode system back on, reconnect the shelf, and zero all the disks there?

aborzenkov

OK, I must admit I have never had any issues with this using 7-Mode. I wonder if C-Mode behaves differently here. Let's hope someone who is familiar with it chimes in.

ESISMONDO

I've moved this to the C-Mode group.

davidrnexon

Hi, we are having the same issue.

The old 7-Mode system had been shut off and the disk shelves unplugged. When they were connected to the C-Mode system, I could still see the disk ownership pointing at the 7-Mode system. I am able to reassign the disks to a new C-Mode node; however, it sees the old aggregate.
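
(For the reassignment I'm using something along these lines, with placeholder disk and node names:)

       ::> storage disk removeowner -disk 1.10.0
       ::> storage disk assign -disk 1.10.0 -owner cnode-01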

I can't change the state to spare, as it says:

Error: command failed: Failed to unfail the disk. Reason: Disk is not currently failed.

I can't change the state to failed either, as it says:

disk fail: Cannot fail disk (volume/plex is offline).
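
(For context, those attempts were from the nodeshell, roughly as follows; the disk name is a placeholder, and disk unfail needs advanced privilege:)

       node> priv set advanced
       node*> disk unfail 0a.00.0
       node*> disk fail 0a.00.0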

Any other ideas?

davidrnexon

Hi Emanuele, further to my previous comment, today I have the resolution for you. It is possible to delete a 7-Mode aggregate from within a C-Mode system. I've written up a tutorial on the full process on my blog:

http://www.sysadmintutorials.com/tutorials/netapp/netapp-clustered-ontap/netapp-7-mode-shelf-connected-to-cluster-mode-system/
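
In short, it comes down to dropping into the nodeshell, re-enabling the 7-Mode aggr commands, and destroying the foreign aggregate from there. As a rough outline, with placeholder names (the exact syntax for nodescope.reenablecmds is in the post):

       ::> system node run -node cnode-01
       node> priv set advanced
       node*> nodescope.reenablecmds "aggr"
       node*> aggr offline <foreign_aggr>
       node*> aggr destroy <foreign_aggr>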

scottgelb

Excellent tutorial site… I read through it and will visit again. I would preface this tutorial with a "use at your own risk" warning and a recommendation to open a support case, though. I have had some cases where support preferred to use the "vreport" method to fix the VLDB so that it can see the foreign aggregate, which can then be destroyed with the cluster commands. People will also be able to extrapolate from this how to make any 7-Mode command available from the D-blade. In our run book I give an example of both methods for deleting a foreign aggregate, but with the caveat that the user needs to open a case and use them at their own risk.
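
For reference, the vreport path runs from the cluster shell at diag privilege, along these lines (a sketch only; the exact arguments may differ, which is another reason to have support on the case):

       ::> set diag
       ::*> debug vreport show
       ::*> debug vreport fix -type aggregate -object <foreign_aggr>

The show step lists the VLDB/WAFL inconsistencies, and the fix step makes the VLDB aware of the foreign aggregate so it can then be destroyed with the normal cluster commands.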

Great stuff, and thank you for the post… Going through the CN1610 example you posted, I saw a couple of things (NTP and DNS, for example) that I will add to my procedure list.

davidrnexon

Hi Scott, thanks. Yes, I will add "use at your own risk".

The vreport method: is this only for when C-Mode cannot see the 7-Mode aggregate on the disks?

scottgelb

That method made cDOT pick up an aggregate that appeared on the D-blade but not in aggr show at the cluster level. It updated the VLDB, and we were able to destroy the aggregate from the cluster.


lederman

Happy to report this method also works with cDOT aggregates left over on shelves. I was able to re-enable the aggr destroy command at the nodeshell and get rid of these foreign cDOT aggregates. (Note: you must be in priv set advanced at the nodeshell to run "nodescope.reenablecmds".)

Thanks for the post!
