ONTAP Discussions
Hello,
We have a FAS2650 (2 nodes) with the disks/configuration below, as set up by the professional installer. This NAS will serve CIFS and NFS (no database). If I were to have just one aggregate assigned to a node, would the second node sit idle and do nothing (essentially passive)?
One other option is to break up the 1st aggregate and recreate it as RAID-TEC with 21 disks (with the maximum RAID group size set to the allowed 29 disks). Down the road, if we feel more disks and speed are needed, we could take disks from the 2nd aggregate. Data has been moved to the NetApp onto the 2nd aggregate, but it's not in production. Thoughts?
We currently have about 50 TB of data stored on our HNAS and will likely be under 100 TB in the next 5 years with the new NAS.
Current settings:
12 - 900 GB, 10K drives (OS for now, 2 aggregates)
36 - 8 TB SATA, 7.2K drives (data storage, 2 aggregates)
aggr1
rg0 has 14 SATA disks
rg1 has 7 SATA disks
1 spare
aggr2
rg0 has 13 SATA disks
1 spare
Thanks,
SVHO
Hi,
Thank you for writing to the NetApp communities. Regarding your query: since this is a new setup, it's a good time to experiment with the layout.
We have a FAS2650 (2 nodes) with the disks/configuration below, as set up by the professional installer. This NAS will serve CIFS and NFS (no database). If I were to have just one aggregate assigned to a node, would the second node sit idle and do nothing (essentially passive)?
Yes, that's right. Whichever node holds ownership of the disks takes all the load. If you have just one aggregate, on your FIRST NODE, all I/O hits the FIRST NODE and none reaches the SECOND NODE.
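To illustrate, here is a sketch of how you can confirm which node owns which aggregates and disks, assuming the ONTAP 9 cluster shell (aggregate and node names below are placeholders for your own):

```
::> storage aggregate show -fields node,raidtype,raidsize,diskcount
::> storage disk show -owner node-01
```

The `node` field on each aggregate tells you which controller services its I/O; an aggregate-less node will show no data aggregates and will effectively be passive until a failover.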
One other option is to break up the 1st aggregate and recreate it as RAID-TEC with 21 disks (with the maximum RAID group size set to the allowed 29 disks). Down the road, if we feel more disks and speed are needed, we could take disks from the 2nd aggregate. Data has been moved to the NetApp onto the 2nd aggregate, but it's not in production. Thoughts?
Yes, you can destroy and recreate the aggregate with a changed RAID group size, given the controllers are strictly not in production. Also, I understand the ROOT VOLUMES are not hosted on aggr1 or aggr2 of the respective nodes, just to make sure we don't mess up the configuration.
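As a minimal sketch of that destroy/recreate sequence, assuming the ONTAP 9 cluster shell, that all data volumes on aggr1 have already been moved or removed, and placeholder names (aggr1, node-01):

```
::> volume show -aggregate aggr1
::> storage aggregate offline -aggregate aggr1
::> storage aggregate delete -aggregate aggr1
::> storage aggregate create -aggregate aggr1 -node node-01 -diskcount 21 -raidtype raid_tec -maxraidsize 29
```

The first command is just a safety check that no volumes remain on the aggregate before you delete it; `-maxraidsize 29` sets the RAID group size you mentioned, and `-raidtype raid_tec` gives you triple parity.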
Please let us know for further queries.
Thanks,
Nayab
In that case you can go ahead, destroy the aggregate, and recreate it to fit your requirements 🙂
Thanks,
Nayab