Hi John,
Let me try to help out here.
Some things to consider:
1) You can have a maximum of 500 volumes per controller, and if you are running an HA pair, you can have a maximum of 500 volumes across BOTH machines (in case of a takeover, the total number of volumes on the surviving controller cannot exceed 500). Would that be enough for you?
2) Deduplication works per volume; it cannot deduplicate blocks across volumes. If you place the LUNs as qtrees inside one volume instead, you could (could!) benefit from deduplication between them. If you use one LUN per volume, you get no cross-LUN deduplication at all (see the dedup sketch below this list).
3) On the question of performance, you need to know how WAFL works. A volume is essentially just a database entry, and so is a LUN: an empty volume and/or an empty LUN does not take up any space in the aggregate.
When you look at hard disk performance, all that matters is the aggregate: its size and how full it is. Since a volume and a LUN are only virtual entities, their configured size does not matter (you could create a 10 TB volume on a 1 TB aggregate). The only hard disk performance hit comes when the aggregate gets physically full, say around 90%. And remember, logically full is not the same as physically full, because you can benefit from deduplication (100 GB logical could be as little as 10 GB physical, for example). So if you are concerned about performance, monitor the physical usage of the aggregate and make sure it does not go over 90% (see the aggregate-usage sketch below this list).
So the answer is: there is no performance impact whatsoever if you create a 99 GB LUN in a 100 GB volume.
4) On the snapshot reserve: the new default is 5% in Data ONTAP 8.1. The reserve comes out of the volume's usable space, so it is better to monitor snapshot growth carefully (see the snap-reserve arithmetic below)!
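Here is a rough sketch (in Python, with made-up LUN counts and block fingerprints, not real ONTAP data) of why the layout in point 2 matters: deduplication only collapses duplicate blocks that live in the same volume.

```python
# Purely illustrative model: dedup is volume-scoped, so identical blocks
# are only collapsed when they sit in the same FlexVol.

def physical_blocks(volumes):
    """volumes: list of volumes, each a list of block fingerprints.
    Duplicates collapse within a volume, never across volumes."""
    return sum(len(set(vol)) for vol in volumes)

# Ten LUNs holding largely identical data, each modelled here as the
# same 1,000 block fingerprints (an assumption for the example).
lun = [f"block-{i}" for i in range(1000)]

one_lun_per_volume = [list(lun) for _ in range(10)]   # ten dedup domains
luns_as_qtrees_in_one_volume = [lun * 10]             # one dedup domain

print(physical_blocks(one_lun_per_volume))            # 10000 blocks stay on disk
print(physical_blocks(luns_as_qtrees_in_one_volume))  # 1000 blocks stay on disk
```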
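And a minimal sketch of the monitoring rule from point 3: what matters for performance is how physically full the aggregate is, not how big the (thin) volumes and LUNs are. The sizes, savings ratios and the 90% threshold are just example assumptions.

```python
AGGR_SIZE_GB = 1000          # usable aggregate capacity (example)
PHYS_FULL_WARN = 0.90        # warn when the aggregate is ~90% physically full

# Per-volume logical usage plus an assumed dedup savings ratio
# (0.9 means 90% of the logical data was deduplicated away).
volumes = {
    "vol_vmware": {"logical_gb": 800, "dedup_savings": 0.9},
    "vol_sql":    {"logical_gb": 300, "dedup_savings": 0.2},
}

logical_gb = sum(v["logical_gb"] for v in volumes.values())
physical_gb = sum(v["logical_gb"] * (1 - v["dedup_savings"]) for v in volumes.values())
usage = physical_gb / AGGR_SIZE_GB

# 1100 GB logical fits in 320 GB physical: thin provisioning plus dedup.
print(f"logical {logical_gb} GB -> physical {physical_gb:.0f} GB "
      f"({usage:.0%} of the aggregate)")
if usage > PHYS_FULL_WARN:
    print("WARNING: aggregate is getting physically full -- expect a performance hit")
```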
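Finally, the snap-reserve arithmetic for point 4, reusing the 99 GB LUN / 100 GB volume example and assuming the reserve is left at the 5% default (example numbers only):

```python
volume_size_gb = 100
snap_reserve_pct = 5         # Data ONTAP 8.1 default volume snapshot reserve
lun_size_gb = 99

# The reserve is carved out of the volume, so only the rest is available
# to the active file system (and therefore to the LUN).
active_fs_gb = volume_size_gb * (1 - snap_reserve_pct / 100)
print(f"space left for the active file system: {active_fs_gb:.0f} GB")  # 95 GB
if lun_size_gb > active_fs_gb:
    print("a 99 GB LUN will not fit unless you lower the reserve or grow the volume")
```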
Hope this helps
Kind regards
Dirk Oogjen
Certified NetApp Instructor