ONTAP Hardware
Hi All,
Currently building out an AFF-A150 for a tech refresh while following some documentation my predecessor left me regarding his last build out of the AFF-A220 we will be replacing. Our use case is pretty small: it only acts as an NFS datastore for one vCenter and only uses 12 of the 24 bays on the internal shelf of the A150. We only use one aggregate, one volume, one SVM. Prior to creating his aggregate he assigned all data1 and data2 partitions to one of the two nodes and left the root partitions where they were. I find that I am unable to complete this step, as the CLI command for it apparently went away after 9.9.1. What would be the reason you would want all data partitions to be owned by one of the nodes? If a disk had its data1 partition owned by node1 and its data2 partition owned by node2, when that disk is assigned to an aggregate which is in turn owned by a specific node, would it only have use of one of its two data partitions?
Either way, from what I can tell I can no longer manually assign partitions to specific nodes. I'm just trying to determine the best way to configure my aggregate to get the most space out of it, while also incorporating spares, obviously. If this answer is somewhat obvious I apologize; I'm still relatively new to ONTAP and storage in general.
TIA
Hi @NetApp93 ,
Just to provide some context for those who find this post via Google in the future:
Q: What would be the reason you would want all data partitions to be owned by one of the nodes?
A: One reason, which would be an edge case, would be that you want to guarantee the workload will not take a performance hit during a controller failure. In your current scenario the workload can only ever use the processing power of a single node, so when that node fails the storage workload simply migrates to the surviving node. Most customers want to balance workloads across both controllers so they can use the performance of both, and during a failure scenario they are happy to accept the performance impact of running on one controller.
Q: If a disk had its data1 partition owned by node1 and its data2 partition owned by node2, when that disk is assigned to an aggregate which is in turn owned by a specific node, would it only have use of one of its two data partitions?
A: From my understanding, if a disk is partitioned (ADP) then whole-disk ownership is effectively redundant for that disk, e.g. if you run "storage disk show -fields owner" it does not necessarily align with the output of "storage aggregate show -fields owner-name". When you view aggregate ownership with "storage aggregate show -fields owner-name", the owner is the node that owns the partitions in that aggregate, not the entire disk.
To expand on this:
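For example, on a partitioned (ADP) system you can compare partition-level ownership with disk-level and aggregate-level ownership using something like the commands below (the "cluster1" prompt is just a placeholder, and the notes in parentheses describe what each command reports rather than reproducing output):

cluster1::> storage disk show -partition-ownership
(lists the container, root, data1 and data2 owner for each disk; the data1 and data2 partitions of a single disk can be owned by different nodes)

cluster1::> storage disk show -fields owner
(lists only the container/whole-disk owner)

cluster1::> storage aggregate show -fields owner-name
(the owner shown here is the node that owns the partitions making up that aggregate)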
I do hope that this information is useful for you.
Hi chamfer,
Thank you for your very detailed response. Super helpful information. Regarding the data being owned by one node, it always seemed to me that the reason my org has done 1 aggr, 1 NFS SVM, 1 volume is that we were matching that with one export policy to connect with one vCenter datastore. However, upon actually inspecting the commands for creating the export policy, I am now realizing it applies to one vserver/SVM. Would this mean that we could create two aggregates, one for each of my nodes (thereby utilizing both controllers for processing power), each with its own volume but both volumes under the same SVM, which would link both volumes for storage to that datastore? The tradeoff seemingly would be more overhead space taken away for the root aggr, but it would double our processing power.
If that indeed would work it sounds like it would be well worth it (I'm not even sure I would lose out on any data partitions, because the root aggr for the second node would be using root partitions?). However, I did already find a way to configure the system for a sole aggregate and put all of the data partitions under one node, detailed in the documentation below. I would have to reverse all of that.
Hi @NetApp93,
Happy to assist!
So I understand where your org has come from, and yes, it is supported to use both partitions from a single disk in the same aggregate (ref table https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/What_are_the_rules_for_Advanced_Disk_Partitioning), so that can be a simpler and more storage-efficient layout for the data aggregate.
Q: Would this mean that we could create two aggregates, one for each of my nodes (thereby utilizing both controllers for processing power), each with its own volume but both volumes under the same SVM, which would link both volumes for storage to that datastore?
A: You have two options here: create two FlexVol volumes, one on each node's aggregate, and present them to vCenter as two separate NFS datastores, or create a single FlexGroup volume that spans both aggregates and presents as one datastore. A rough CLI sketch of the first option is below.
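As a sketch of the two-FlexVol option, the flow could look like the following; every name, size and disk count here is a hypothetical placeholder, so adjust it to your own spare and RAID layout:

cluster1::> storage aggregate create -aggregate n1_data_aggr -node cluster1-01 -diskcount 11
cluster1::> storage aggregate create -aggregate n2_data_aggr -node cluster1-02 -diskcount 11
(on partitioned disks the disk count refers to partitions, and each aggregate is built from the data partitions owned by that node)

cluster1::> volume create -vserver nfs_svm -volume ds01 -aggregate n1_data_aggr -size 5TB -junction-path /ds01 -policy vmware_export
cluster1::> volume create -vserver nfs_svm -volume ds02 -aggregate n2_data_aggr -size 5TB -junction-path /ds02 -policy vmware_export
(both volumes live in the same SVM and reuse the same export policy, but each one is mounted in vCenter as its own NFS datastore)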
Statement: The tradeoff seemingly would be more overhead space taken away for root aggr but would double our processing power.
Response: Just to clarify, as I feel you have a typo (you wrote "root aggr", but I believe you meant "data aggr").
Statement: If that indeed would work it sounds like it would be well worth it (I'm not even sure I would lose out on any data partitions, because the root aggr for the second node would be using root partitions?).
Response: Just to provide further clarification around your setup, you would have: two root aggregates (one per node, built from the root partitions) and two data aggregates, one per node, with each node's data aggregate built from the data partitions that node owns.
The image I attached shows a graphical representation of the aggregate layouts over the partitions; it comes from https://docs.netapp.com/us-en/ontap/concepts/root-data-partitioning-concept.html
To directly answer your question, you are not going to lose out on any data partitions, other than having to use some data partitions for RAID parity in your additional data aggregate. I personally feel this is a good tradeoff to get 2x the performance when compared to having a single aggregate across all disks.
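If it helps, once the second aggregate is built you can sanity-check where the parity partitions and remaining spares ended up with something like the following (the aggregate name is the placeholder from the earlier sketch; the parentheses describe what each command reports):

cluster1::> storage aggregate show-status -aggregate n2_data_aggr
(shows each partition in the aggregate and its RAID position, i.e. data, parity or dparity)

cluster1::> storage aggregate show-spare-disks
(shows the spare root and data partitions still available on each node)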
Additional information: I forgot to mention that an advantage of a single aggregate is that it gives you a single storage efficiency pool (i.e. deduplication) when compared with two datastores. NetApp ONTAP only provides storage efficiencies at the aggregate level, so if you have a VM on one aggregate (VMware datastore) and a copy of it on the second aggregate (second VMware datastore), there is no deduplication between these two VMs.
After reading the above, some will ask, "Can I achieve the same deduplication rate on a FlexGroup compared to a FlexVol?" The answer is no (ref https://kb.netapp.com/on-prem/ontap/DM/Efficiency/Efficiency-KBs/Can_I_achieve_the_same_deduplication_rate_on_a_FlexGroup_compared_to_a_FlexVol).
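Purely for illustration, if you did want a single datastore spanning both aggregates, a FlexGroup is created by listing both aggregates at volume creation time. The names, multiplier and size below are placeholders and reuse the hypothetical aggregates from the earlier sketch:

cluster1::> volume create -vserver nfs_svm -volume fg_ds01 -aggr-list n1_data_aggr,n2_data_aggr -aggr-list-multiplier 4 -size 10TB -junction-path /fg_ds01 -policy vmware_export -space-guarantee none
(creates four constituent member volumes on each listed aggregate and presents them all as one FlexGroup volume, i.e. one NFS mount point for the datastore)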
I hope that this helps and has not been confusing.
Hi chamfer, happy Friday!
Thanks again for your detailed response. After looking through my options, to me it seems as though FlexGroups are indeed the best solution to our use case. I do have some follow-on questions/concerns that I was hoping you might address.