ONTAP Discussions

Assign all data partitions to a single node

Andrew7193

We have an A300 with a single shelf of disks. We want to maximize our data capacity in a single aggregate. The disks are 3.8 TB SSDs. All of the scenarios our VARs ran in the sizing tool say we can get a single aggregate across two 23-disk RAID groups for a total of 71.49 TB. However, when I try to build the aggregate I am only able to add 23. How do I add the other 23?

9 REPLIES

SpindleNinja

When you edit or create an aggr you should see an Add button on the same line as “devices”.

 

However, you're not really gaining space, as each RAID group needs its own set of parity (DP) disks, and I would still leave one whole spare.
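Rough math on that, assuming RAID-DP (two parity partitions per RAID group): two 23-partition RAID groups consume 46 partitions, of which 4 are parity, leaving 42 data partitions. Working backwards from the sizing tool's 71.49 TB, that is roughly 71.49 / 42 ≈ 1.7 TB usable per data partition, so rearranging the same 46 partitions mostly changes where the parity and spares sit rather than how much net space you end up with.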

 

Honestly, I would just do an aggr on one controller and a second on the other.

Andrew7193

So that is the tool; I am trying to do this on the AFF. For reasons I will not go into, we need the biggest aggregate we can get, as we have an SVM with 55 TB of volumes that needs to move off of another FAS. So two 32 TB aggregates are not going to cut it.

SpindleNinja

An SVM functions across the cluster, so one SVM can have FlexVols/FlexGroups spread across multiple aggrs.
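For example (a minimal sketch with hypothetical SVM, volume, and aggregate names; double-check the parameters against your ONTAP release), volumes in the same SVM can land on aggregates owned by either node:

volume create -vserver svm1 -volume vol_a -aggregate aggr_node1 -size 10TB -junction-path /vol_a
volume create -vserver svm1 -volume vol_b -aggregate aggr_node2 -size 10TB -junction-path /vol_b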

 

If you put all the volumes in one aggr on one controller, you're also limiting the number of cores ONTAP can use for I/O.

 

Andrew7193

I am aware of all of your points. However, performance is not a worry. We have a single 55 TB volume on that SVM that is currently at 87% capacity. We cannot split that volume across two aggregates.

SpindleNinja

Ah, OK. You initially said "an SVM with 55TB of volumes," so I took it as multiple volumes.

 

Is the volume worth the effort of migrating into a FlexGroup with something like XCP?
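If you did go that route, the copy itself would look something like this (a rough sketch with hypothetical export paths, run from a Linux host with the NetApp XCP tool installed; check the XCP documentation for the options that fit your setup):

xcp scan -stats old_filer:/export/bigvol
xcp copy old_filer:/export/bigvol new_cluster:/export/fg01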

 

 

 

Andrew7193

We do have multiple volumes; it is just that one of them is humongous. That volume also has 135,853,357 inodes, so any kind of copy like XCP is going to take forever. I do not know how it was allowed to get this big, but it is up to me to fix it until we can get the size down to something manageable.


SpindleNinja

A FlexGroup with qtrees might be an option to split it up. But yeah, that requires a host-based migration to get there.
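A minimal sketch of what that layout could look like (hypothetical SVM, aggregate, and volume names; the constituent multiplier and size are placeholders, so check the FlexGroup guidance for your ONTAP version):

volume create -vserver svm1 -volume fg01 -aggr-list aggr_node1,aggr_node2 -aggr-list-multiplier 4 -size 80TB -junction-path /fg01
volume qtree create -vserver svm1 -volume fg01 -qtree project1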

Andrew7193

I did it the hard way, since there isn't an easy way. Using advanced mode, I assigned the data1 and data2 partitions to the single node and was able to create one single aggregate using all of the partitions. However, that did not leave enough spares for the other node to do a core dump in the case of a panic.
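For anyone finding this later, an illustrative version of that sequence (advanced privilege; hypothetical disk and node names, and exact flags can vary by ONTAP release, so verify against the docs and check spares before and after):

set -privilege advanced
storage disk removeowner -disk 1.0.12 -data2 true
storage disk assign -disk 1.0.12 -owner nodeA -data2 true
storage aggregate create -aggregate aggr_data -node nodeA -diskcount 46 -maxraidsize 23
storage aggregate show-spare-disks
set -privilege admin

The removeowner/assign pair is repeated for each data partition still owned by the other node before the aggregate is created.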

SpindleNinja

You still need to leave one spare.    
