Completely agree with Nate here: fewer aggregates is what most of our customers run. Some customers require physical separation (different business units, secure data, spillage prevention, multi-tenancy), and in those cases the business requirement dictates more aggregates.
There have been some edge cases where a heavy random workload and a heavy sequential workload performed better on separate aggregates, but in most cases workloads sharing an aggregate benefit from the additional spindles available to both.
I would start by gathering the required performance and I/O profile of each data set, then add growth over time to size it. The SPM tool is very good at this: you can specify a shared aggregate or separate aggregates and then determine whether the performance targets are met. Whether you end up meeting performance with fewer or more aggregates, you will have the analysis from the SPM output, and you can also run what-if scenarios on the workloads. This should help narrow the options down to what will actually work within the confines of the requirements.
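To make the "profile plus growth" idea concrete, here is a rough back-of-envelope sketch of the kind of math involved. This is NOT the SPM tool or its output format; the workload names, IOPS figures, growth rates, and the assumed per-spindle IOPS are all hypothetical numbers chosen purely for illustration.

```python
import math

def spindles_needed(workloads, per_disk_iops=175, years=3):
    """Rough spindle estimate for a single shared aggregate.

    workloads: list of (name, current_iops, annual_growth) tuples.
    per_disk_iops: assumed usable IOPS per spindle (hypothetical figure).
    years: planning horizon for compound growth.
    """
    total_iops = 0.0
    for name, iops, growth in workloads:
        # Project each workload forward with compound annual growth.
        total_iops += iops * (1 + growth) ** years
    # Round up: you can't buy a fraction of a disk.
    return math.ceil(total_iops / per_disk_iops)

# Hypothetical workload profiles (name, current IOPS, annual growth rate).
workloads = [
    ("oltp_db", 4000, 0.20),
    ("file_shares", 1500, 0.10),
    ("backups", 800, 0.05),
]
print(spindles_needed(workloads))
```

Because a shared aggregate pools the projected IOPS before rounding up to whole disks, it can come out slightly smaller than sizing each workload's aggregate separately, which mirrors the spindle-sharing benefit described above. A real sizing exercise should of course come from SPM, which also accounts for capacity, RAID overhead, and workload mix.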