well... there are two sides to this coin
first:
if you really want the best deduplication ratio, you could make big volumes and place all LUNs within one volume - as you already know, deduplication works on a per-volume basis, so dedup would be at its best. but here's the problem: with all LUNs within one volume, if you create a snapshot on that NetApp volume you also snapshot all the LUNs at once - so you lose granularity.
the other side:
if you place every LUN in its own volume you'll have the granularity for snapshotting, but you lose dedup - everything within one volume gets deduped, but nothing is deduped across volumes. also, if you ever have to restore a complete datastore, this setup makes the restore much faster (less data per volume); if you have modern backup software that restores single VMs, you can forget about that point.
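to make the per-volume scope concrete, here's a tiny toy sketch (plain Python, nothing NetApp-specific - the block "fingerprints" and LUN contents are made up): duplicate blocks only collapse when they sit in the same volume.

```python
# toy sketch (not NetApp code): dedup scope is the volume, so duplicate
# blocks only collapse when they live in the same volume.
# block contents are just made-up strings; real dedup hashes 4KB blocks.

def blocks_stored_after_dedup(blocks):
    """number of distinct block fingerprints = blocks actually stored"""
    return len(set(blocks))

# hypothetical LUNs: two Windows C:-drives share most OS blocks, one data LUN doesn't
lun_c1   = ["os_a", "os_b", "os_c", "app_1"]
lun_c2   = ["os_a", "os_b", "os_c", "app_2"]
lun_data = ["db_1", "db_2", "db_3", "db_4"]

# layout 1: all LUNs in one volume -> dedup sees all blocks together
one_volume = lun_c1 + lun_c2 + lun_data
print("all LUNs in one volume :", blocks_stored_after_dedup(one_volume), "of", len(one_volume), "blocks stored")

# layout 2: one volume per LUN -> duplicates ACROSS LUNs are not deduped
per_lun = sum(blocks_stored_after_dedup(l) for l in (lun_c1, lun_c2, lun_data))
print("one volume per LUN     :", per_lun, "of", len(one_volume), "blocks stored")
```

with one volume per LUN the two identical OS images get stored twice - that's exactly the dedup you trade away for the snapshot granularity.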
so there isn't really one answer for what's best - it just depends
e.g. one of my customers placed all the C:-drive VMDKs of his virtual machines within one datastore - since C: contains the OS and the OS is always the same, he gets a great dedup ratio - the "data" drives, where the actual data or the programs themselves reside, live on a different datastore - great dedup there as well. the downside: your VMs are split across multiple datastores
if i have a customer who wants to thin provision everything, i recommend:
- one thin-provisioned volume for each thin-provisioned LUN, starting the LUN size at about 2TB like you already mentioned
with such big VMDKs i would create a big datastore of 8-10TB - since VMFS5 no longer has that strict size limit (64TB total datastore size) this makes the most sense.
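just as a rough sanity check on those numbers, a small sketch (the sizes are assumptions, adjust to your setup) showing why 8-10TB datastores only became practical with VMFS5: a single extent can now be up to 64TB, while VMFS3 capped every extent at 2TB minus 512 bytes.

```python
# rough sizing sanity check (the planned sizes are assumptions):
# VMFS5 allows a single extent/datastore of up to 64TB, while VMFS3 capped
# every extent at 2TB minus 512 bytes.
TB = 1024 ** 4
VMFS5_MAX_DATASTORE = 64 * TB
VMFS3_MAX_EXTENT    = 2 * TB - 512

for planned_tb in (2, 8, 10):
    size = planned_tb * TB
    ok_on_vmfs5   = size <= VMFS5_MAX_DATASTORE
    vmfs3_extents = -(-size // VMFS3_MAX_EXTENT)   # ceiling division
    print(f"{planned_tb:>2}TB datastore: single VMFS5 extent ok = {ok_on_vmfs5}, "
          f"would have needed {vmfs3_extents} extents on VMFS3")
```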
thin provisioning everything usually means overcommitting the aggregate, so monitoring the aggregate's free space is mandatory.
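a minimal sketch of what that monitoring boils down to (plain Python with made-up numbers - in practice you'd feed it the real values from the controller, e.g. from "df -A", and run it from cron or your monitoring system):

```python
# minimal monitoring sketch with made-up numbers - the aggregate name, sizes and
# the 20% warning threshold are examples, not recommendations.

def check_aggregate(name, aggr_size_tb, used_tb, provisioned_lun_tb, free_warn_pct=20):
    free_tb    = aggr_size_tb - used_tb
    free_pct   = 100.0 * free_tb / aggr_size_tb
    overcommit = provisioned_lun_tb / aggr_size_tb   # > 1.0 means the aggregate is overcommitted
    print(f"{name}: {free_tb:.1f}TB free ({free_pct:.0f}%), overcommitment {overcommit:.1f}x")
    if free_pct < free_warn_pct:
        print(f"  WARNING: less than {free_warn_pct}% free - the thin LUNs can still grow, "
              f"so act before the aggregate runs full")

# example: 40TB aggregate, 33TB used, 60TB of thin-provisioned LUNs carved out of it
check_aggregate("aggr1", aggr_size_tb=40, used_tb=33, provisioned_lun_tb=60)
```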
deduping SQL LUNs? i would give it a try - if you have multiple DB files on those LUNs (e.g. a database split into several files for performance) it can make sense - i would enable it and watch the dedup ratio; if it's less than 5-10% i would disable it again, unless you have Flash Pool in use - in that case dedup can be interesting again. i don't think it makes sense if a lot of data is constantly flushed and the database is reorganized regularly - so it also depends on the data in your database.
my rule of thumb for DBs - mostly static DBs: yes; mostly dynamic: no
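the "enable it and measure" approach as a tiny decision sketch (the 5% threshold and the Flash Pool exception are just my rule of thumb; savings_pct would be whatever space saving the controller reports after a dedup run):

```python
# decision helper for the "try it and measure" approach on SQL LUN volumes.
# savings_pct = reported space saving after a dedup run; threshold is a rule of thumb.

def keep_dedup_enabled(savings_pct, has_flash_pool, threshold_pct=5):
    if savings_pct >= threshold_pct:
        return True                    # worthwhile savings -> keep dedup on
    return has_flash_pool              # below threshold -> only keep it if Flash Pool can profit

print(keep_dedup_enabled(savings_pct=3,  has_flash_pool=False))   # False -> switch it off again
print(keep_dedup_enabled(savings_pct=3,  has_flash_pool=True))    # True  -> may still help the cache
print(keep_dedup_enabled(savings_pct=12, has_flash_pool=False))   # True  -> keep it
```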
dedup is just a job that runs at around midnight by default; in my opinion the schedule should be customized so that not all volumes start at the same time - that way the impact on the system is very easy to isolate in case you have performance issues.
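a small sketch of how i'd stagger the start times (the volume names are hypothetical; the "sun-sat@<hour>" strings follow the 7-Mode "sis config -s" schedule style - adapt the exact syntax to your ONTAP release):

```python
# spread the dedup start times across volumes instead of letting everything
# kick off at midnight. volume names are hypothetical examples.

def staggered_schedules(volumes, start_hour=22, step_hours=1):
    schedules = {}
    for i, vol in enumerate(volumes):
        hour = (start_hour + i * step_hours) % 24
        schedules[vol] = f"sun-sat@{hour}"   # 7-Mode style day@hour schedule string
    return schedules

vols = ["vm_os_vol", "vm_data_vol", "sql_data_vol", "sql_log_vol"]
for vol, schedule in staggered_schedules(vols).items():
    print(f"{vol}: {schedule}")   # vm_os_vol: sun-sat@22, vm_data_vol: sun-sat@23, ...
```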
hope that helped a bit with your decision