ONTAP Discussions

Reorganizing disks on a cDOT MetroCluster 8.3.1

mario_grunert

Hello, I have a freshly installed small 10 TB (20 TB unmirrored) MetroCluster to manage. Unfortunately there was not much planning upfront: the company that set it up split the disks 50:50, but we want 70:30. It is cDOT 8.3.1 with two 24-disk shelves.

Can I safely offline and destroy the MDV volumes? Is there a way to save the data first?

How can I destroy one half and rebuild it with fewer disks?

3 REPLIES

niels

Hi Mario,

 

Unfortunately, there is no way to destroy the MDV volumes. They are an essential part of the MetroCluster and are required for replicating MCC-specific metadata.

The only option would be to move the MDV volumes to a different aggregate, but I guess you don't have enough spare disks available to temporarily create one.
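
For illustration: the MDV volumes follow the naming pattern MDV_CRS_<some-ID>_A/_B, so you can list them (in advanced mode) with something like:

--> set advanced

--> vol show -volume MDV_CRS*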

 

If your system sends AutoSupport, could you please provide the hostname or serial number of one of the nodes? I could look at the layout and see if I can find a way.

 

regards, Niels

 


mario_grunert

SN# 211547000226. Yes, it sends AutoSupports.

I have all the commands in the documentation that were used to create this MetroCluster, but not the values that were entered in any wizards that may have popped up. Can I dissolve the cluster and rebuild it? If so, how?

niels

Hi Mario,

 

You can always rip and rebuild, but that's a rather lengthy process.

You would need to initialize all disks, which destroys the cluster(s) and everything that has been pre-configured.

You'd then need to follow the "MetroCluster(TM) Installation and Configuration Guide" that can be found here:

http://mysupport.netapp.com/documentation/docweb/index.html?productID=62093&language=en-US

 

To keep the configuration you already have, I'd suggest the following on the node you want to take disks away from:

 

- re-configure the RAID type for the existing data aggregate from RAID DP to RAID4. This will free up a spare disk in each pool

--> aggr modify -aggregate <aggr-name> -raidtype raid4
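
- to double-check that the RAID type changed and a disk actually became spare, something like this should do:

--> aggr show -aggregate <aggr-name> -fields raidtype

--> disk show -container-type spare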

 

- create a small mirrored aggregate. You need to be in advanced mode to be able to create such a small aggregate (it will be expanded at the end)

--> set advanced

--> aggr create -aggregate <new-aggr-name> -diskcount 4 -raidtype raid4 -mirror true -force-small-aggregate true
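
- purely as an illustration (the aggregate name aggr_mdv_temp is made up), the command could look like this:

--> aggr create -aggregate aggr_mdv_temp -diskcount 4 -raidtype raid4 -mirror true -force-small-aggregate true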

 

- aggregate creation will take some time, as the former parity disk needs to be zeroed.

- the system might complain about not having any spare disks left (this is expected)
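
- you can watch the new aggregate come online with, for example:

--> aggr show -aggregate <new-aggr-name> -fields state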

 

- once the aggregate is created, move the MDV volumes to that new aggregate (still need to be in advanced mode)

--> vol show -vserver <cluster-name>

- move the two volumes that are listed as "online"

--> vol move start -vserver <cluster-name> -volume MDV_CRS_<some-ID>_A -destination-aggregate <new-aggr-name>

--> vol move start -vserver <cluster-name> -volume MDV_CRS_<some-ID>_B -destination-aggregate <new-aggr-name>
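
- the moves run in the background; you can follow them until they show as successful:

--> vol move show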

 

- offline and destroy the old aggregate

--> aggr offline <aggr-name>

--> aggr delete <aggr-name>
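
- if in doubt, verify first that no volumes are left on the old aggregate before running the two commands above:

--> vol show -aggregate <aggr-name>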

 

- disable disk auto-assignment so you can re-assign the disks the way you want them

--> disk option modify -node * -autoassign off
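
- you can verify the setting on both nodes with:

--> disk option show -fields autoassign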

 

- reassign the disks to the node and pool you want them in. DOUBLE-CHECK THE POOL ASSIGNMENT, and leave enough spares for the cluster to grow the small aggregate plus a spare per pool.

--> disk show -fields owner-id,owner,pool

--> disk removeowner -disk <disk-name>

- go to the other node and perform the disk assignment

--> disk assign -disk <disk> -pool <pool>
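
- purely as an example (the disk name 1.1.10 is made up), moving a single disk would look like this:

--> disk removeowner -disk 1.1.10

--> disk assign -disk 1.1.10 -owner <node-name> -pool 0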

 

Now clean up

- reset disk autoassignment

--> disk option modify -node * -autoassign on

 

- change RAID type of the small aggregate to RAID DP

--> aggr modify -aggregate <new-aggr-name> -raidtype raid-dp

 

- add additional disks to the data aggregates on both sides (leave a spare!). For RAID DP, an aggregate must contain at least 5 disks.

--> aggr add <new-aggr-name> -diskcount <number of disks>
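
- afterwards, confirm that each pool still has at least one spare left, for example:

--> disk show -container-type spare -fields owner,pool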

 

- rename the aggregate

--> aggr rename <new-aggr-name> -newname <old-aggr-name>

 

- zero disks on both clusters by running the following command on both sides

--> disk zerospares
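
- once zeroing has finished, you could let the MetroCluster verify itself as a final sanity check:

--> metrocluster check run

--> metrocluster check show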

 

You are done.

 

 

regards, Niels

 


 
