ONTAP Hardware
Hi all,
We have a FAS2620 that came with 12 disks (2 TB each). During the installation of the cluster the disks were partitioned into root and data partitions, and ownership was assigned automatically: half of the disks are owned by the first node and the other half by the second node.
I am wondering whether I can change the ownership of some of the disks that belong to the second node and assign them to the first node, so that I could create a bigger aggregate there. The root partitions of the disks assigned to the second node are currently used by that node's root aggregate.
Any ideas or best practices?
Thanks!
You're talking about configuring it as an active/"passive" setup, i.e. just one large data aggregate on the first controller. The second controller will be the failover partner and still needs its root aggregate to function.
This is doable, but you don't reassign the whole disk, you reassign only the data partition.
It'll go something like this:
-> set -priv adv
Review the disk config:
-> disk show -fields data-owner,root-owner
Remove the current owner of the data partition:
-> disk removeowner -disk x.x.x -data true
Assign the data partition to the node that will hold the large aggregate:
-> disk assign -disk x.x.x -data true -owner NODEx
Verify the new ownership:
-> disk show -fields data-owner,root-owner
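Once the data partitions are all owned by the first node, you'd grow the data aggregate there. As a rough sketch (the aggregate and disk names below are only examples, substitute your own):
-> storage aggregate add-disks -aggregate aggr1_data_node1 -disklist 1.0.1,1.0.3
It can also be worth disabling disk auto-assignment beforehand, so ONTAP doesn't hand the partitions straight back:
-> storage disk option modify -node NODEx -autoassign off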
Here's official documentation on this topic:
Andris, thank you for providing me with that link. I think we will go for an active-passive cluster.
Unfortunately, we have already created two aggregates (as suggested by the OnCommand System Manager wizard: one on the first node and one on the second) and one SVM with 3 volumes (all data has already been deleted).
The procedure says that it is designed for nodes for which no data aggregate has been created from the partitioned disks.
Can I perform the following steps to create an active-passive configuration?
- Offline the data volumes and the SVM root volume and delete them,
- Stop the SVM and delete it,
- Destroy both data aggregates.
Then, would I be able to zero the data partitions, assign them to one node and create the configuration as described in the procedure?
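In other words, something along these lines (the SVM, volume and aggregate names below are just placeholders for ours):
-> volume unmount -vserver svm1 -volume vol1
-> volume offline -vserver svm1 -volume vol1
-> volume delete -vserver svm1 -volume vol1
(repeated for each of the 3 volumes)
-> vserver stop -vserver svm1
-> vserver delete -vserver svm1
-> storage aggregate delete -aggregate aggr1_data_node_01
-> storage aggregate delete -aggregate aggr1_data_node_02
and then zero the freed partitions with:
-> storage disk zerospares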
Thanks!
Try this:
That looks good. The one reminder would be to hold back one disk's root and data partitions, so you keep 1 spare root partition and 1 spare data partition to handle a failure in node 1's root, node 2's root or node 1's data aggregate.
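After the reassignment you can double-check what is left as spare per node with something like:
-> storage aggregate show-spare-disks -original-owner NODEx
or
-> storage disk partition show -container-type spare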
Thank you guys! I will implement it.
I'd like to make sure that I understand it correctly. How many spares should I leave?
At the moment I have two spare disks with two partitions each (one on node-01 and one on node-02):
cluster::*> storage disk partition show -container-type spare
                          Usable  Container     Container
Partition                 Size    Type          Name              Owner
------------------------- ------- ------------- ----------------- -----------------
1.0.10.P1                  1.67TB spare         Pool0             node-01
1.0.10.P2                 143.7GB spare         Pool0             node-01
1.0.11.P1                  1.67TB spare         Pool0             node-02
1.0.11.P2                 143.7GB spare         Pool0             node-02
The following disks are assigned to the second node, which I would like to make the passive node after I destroy aggr1_data_node_02:
cluster::*> storage disk partition show -owner-node-name node-02
                          Usable  Container     Container
Partition                 Size    Type          Name                          Owner
------------------------- ------- ------------- ----------------------------- -----------------
1.0.1.P1                   1.67TB aggregate     /aggr1_data_node_02/plex0/rg0 node-02
1.0.1.P2                  143.7GB aggregate     /aggr0_node_02/plex0/rg0      node-02
1.0.3.P1                   1.67TB aggregate     /aggr1_data_node_02/plex0/rg0 node-02
1.0.3.P2                  143.7GB aggregate     /aggr0_node_02/plex0/rg0      node-02
1.0.5.P1                   1.67TB aggregate     /aggr1_data_node_02/plex0/rg0 node-02
1.0.5.P2                  143.7GB aggregate     /aggr0_node_02/plex0/rg0      node-02
1.0.7.P1                   1.67TB aggregate     /aggr1_data_node_02/plex0/rg0 node-02
1.0.7.P2                  143.7GB aggregate     /aggr0_node_02/plex0/rg0      node-02
1.0.9.P1                   1.67TB aggregate     /aggr1_data_node_02/plex0/rg0 node-02
1.0.9.P2                  143.7GB aggregate     /aggr0_node_02/plex0/rg0      node-02
1.0.11.P1                  1.67TB spare         Pool0                         node-02
1.0.11.P2                 143.7GB spare         Pool0                         node-02
Can I reassign the data partitions 1.0.1.P1, 1.0.3.P1, ..., 1.0.9.P1 and also 1.0.11.P1 to node-01, leaving only the following as spares?
Partition                 Size    Type          Name              Owner
------------------------- ------- ------------- ----------------- -----------------
1.0.10.P1                  1.67TB spare         Pool0             node-01
1.0.10.P2                 143.7GB spare         Pool0             node-01
1.0.11.P2                 143.7GB spare         Pool0             node-02
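If that's correct, I assume the reassignment itself would go roughly like this for each of those data partitions (using disk 1.0.1 as an example, and assuming our node-01 data aggregate is named aggr1_data_node_01, analogous to aggr1_data_node_02):
-> set -priv adv
-> storage disk removeowner -disk 1.0.1 -data true
-> storage disk assign -disk 1.0.1 -data true -owner node-01
and afterwards grow the node-01 aggregate with the reassigned data partitions:
-> storage aggregate add-disks -aggregate aggr1_data_node_01 -disklist 1.0.1,1.0.3,1.0.5,1.0.7,1.0.9,1.0.11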
Thanks in advance!