Sizing AFF8080 with 3x1.6TB SSDs


I'm looking at adding an AFF8080 with 3 x 24 x 1.6TB SSD shelves.  What would be the best aggregate and spare configuration for maximum capacity to start and being able to add another 4th shelf later?  

We were thinking of assigning all odd-numbered disks to node 1 and even-numbered disks to node 2, which would be 36 disks on each node.  Then we would probably add another 24 disks to that HA pair later, giving another 12 disks per node (total = 48).
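The odd/even split described above can be sanity-checked with a quick count (plain Python; the shelf counts are the ones from the post, the function name is just illustrative):

```python
# Sketch: count disks per node under an odd/even ownership split.
# 3 shelves x 24 disks today, plus a 4th shelf of 24 later.

def split_odd_even(total_disks):
    """Assign odd-numbered disks to node 1, even-numbered to node 2."""
    node1 = [d for d in range(1, total_disks + 1) if d % 2 == 1]
    node2 = [d for d in range(1, total_disks + 1) if d % 2 == 0]
    return len(node1), len(node2)

print(split_odd_even(3 * 24))  # 3 shelves today -> (36, 36)
print(split_odd_even(4 * 24))  # after adding a 4th shelf -> (48, 48)
```

So the split stays even as shelves are added in pairs of 24 disks.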


How would you configure the aggregates?  How many spares would you use, and what size RAID groups?







This is how I would configure the aggregates 🙂


Why would you create aggr1 and aggr2 for both nodes, and not just an aggr1?




mainly because I wouldn't want all my eggs (drives) in one basket.  Generally, the larger the aggregate, the longer it will take to rebuild a failed drive - it is perfectly feasible to create one large aggregate on each node too.  SSDs tend not to fail as often as HDDs, so either option is fine.
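One thing worth noting on the capacity side: with RAID-DP the parity cost depends on the number of RAID groups, not the number of aggregates, so splitting the same RAID groups across two aggregates costs no extra capacity. A rough sketch (disk and spare counts are illustrative; actual ONTAP RAID group defaults and limits vary by version and drive type):

```python
# Rough sketch: usable data disks per node under RAID-DP
# (2 parity disks per RAID group). Numbers are illustrative.
import math

def data_disks(total, spares, rg_size, parity_per_rg=2):
    """Data disks left after removing spares and per-group parity."""
    usable = total - spares
    groups = math.ceil(usable / rg_size)
    return usable - groups * parity_per_rg

# 36 disks per node, 2 spares, two RAID groups of 17:
print(data_disks(36, 2, 17))  # -> 30 data disks
# Whether those two RAID groups sit in one aggregate or two,
# the parity overhead is the same.
```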


Aside from that, if you use NetApp Synergy (which I did with yours) to build out your cluster, you can tell it to recommend NetApp best practice for the aggregate configuration.



Being a customer, how do you access it?




Synergy is a partner tool, your reseller will be able to assist.




Would adding PCI Flash Cache to the AFF8080 with 1.6TB SSDs benefit in any way whatsoever?


Also, can anyone expand on what the OS differences and NVRAM usage are between the AFF and the FAS8000?




Flash Cache is not supported in the AFF8080 - it's already an all-flash array, so there's no need to add Flash Cache.


The OS for AFF is the same as FAS - it's Clustered Data ONTAP (cDOT).  The OS is optimised for flash, so the controllers will perform best with SSDs.


Actually, AFF platforms DO support the use of Flash Cache.


It is for very, very rare cases in which caching of metadata would provide a significant benefit to the cluster.


Maybe less than a 1% use-case for that, however. 


Do you have documentation on this?


A lot of people respond that because the AFF is already optimized, it doesn't need Flash Cache (aka PAM cards).


I have a workload that does in fact see heavy metadata activity that the SSDs won't cache.



Your help is greatly appreciated.




Hi xiambi,


I think it would be beneficial to work with your NetApp or partner SE to validate this solution. It is an uncommon use-case, you need a Flash Cache card per controller, and it's a financial outlay, so we'd want you to have as much input as possible before making the decision to purchase.




@AlexDawson and @NetApp_SEAL


I am in the process of reaching out to them now.


I agree with you that it will; however, I believe more information is needed.

For example, everyone says that latency when reading blocks from SSDs on AFFs is as low as the latency when reading from FC.


I think it's based on the workload and what that workload entails.



Can you guys just confirm whether this will work in an AFF8080?



Thanks again for the help!


I cannot confirm - no.


We have a process called "Feature Product Variance Request" (FPVR), which can be used to add/change functionality of products - if this was a suitable and appropriate solution for you, I believe it would only be available through that process, which includes engineering validation and support.


From my knowledge, RAID rebuild times are based on the size and speed of the disk, not the size of the aggregate.  But I agree - split your aggregates across the two nodes, especially if you are using it for SAN.


Hi Corey


There are a number of factors affecting RAID rebuild times, including the size and speed of the drives, the number of drives in a RAID group, and the raid.reconstruct.perf_impact option (setting this option to high reduces the time it takes to complete a RAID reconstruction, at the cost of foreground I/O performance - see TR-3838).  RAID rebuild times are also highly dependent on the workload profile on the system.


The conventional rule that smaller RAID groups reconstruct "no slower" (and likely faster) than larger RAID groups still applies, which aligns with the NetApp recommendation of staying close to the default RAID group size provided by ONTAP.
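A toy model of why smaller RAID groups reconstruct no slower: rebuilding one failed drive requires reading every surviving member of its RAID group, so the read work per rebuild scales with group size (drive size and speed held constant; this ignores parallelism and foreground workload, and the group sizes below are illustrative):

```python
# Toy model: data read to reconstruct one failed 1.6TB drive in a
# RAID group - every surviving member of the group must be read.

def rebuild_reads_tb(rg_size, drive_tb=1.6):
    """Total TB read to reconstruct one failed drive in a group of rg_size."""
    return (rg_size - 1) * drive_tb

print(rebuild_reads_tb(17))  # smaller group -> less read per rebuild
print(rebuild_reads_tb(23))  # larger group -> more read per rebuild
```

With 1.6TB drives, a 17-disk group means reading about 25.6 TB versus roughly 35 TB for a 23-disk group, so the smaller group has less reconstruction work per failed drive.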





I'm planning on using it mostly for Fibre Channel LUNs, but also some CIFS shares with SMB3, and some NFS.  Older best practices said to split the LUNs and CIFS/NFS into separate aggregates.  Is this still true with AFF SSD aggregates?


As all your aggregates are all-flash, it won't make any difference where you place your NAS or SAN data; it will all benefit from the performance of the drives.  You could have CIFS volumes shared out to CIFS clients, and FC LUNs residing in volumes on the same aggregate as your CIFS volumes, with no performance impact.


