A 500GB drive right-sizes to 413.194GB, and after WAFL overhead that becomes 371.877GB.
For your 14-drive aggregate: 12 data drives * 371.877GB = 4.36TB. Then subtract 0.5% for aggregate accounting: 4.36 * 0.995 = 4.34TB. The default aggregate snapshot reserve is 5%, so 4.34TB * 0.95 = 4.12TB... close to your 4.14, which shows where the space goes. You can lower the aggregate snap reserve with "snap reserve -A", but you won't get more than 4.34TB usable by doing that.
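The arithmetic above can be sketched out step by step. This is just the math from the post (12 data drives at the 371.877GB right-sized value, 0.5% accounting overhead, 5% default aggregate snap reserve), not output from any ONTAP tool:

```python
# Aggregate usable-space math, per the figures in the post.
right_sized_gb = 371.877   # 500GB drive after right-sizing and WAFL
data_drives = 12           # 14-drive aggregate minus 2 parity drives

raw_tb = data_drives * right_sized_gb / 1024   # GB -> TB
after_accounting = raw_tb * 0.995              # minus 0.5% aggregate accounting
usable_tb = after_accounting * 0.95            # minus 5% aggregate snap reserve

print(f"{raw_tb:.2f} TB -> {after_accounting:.2f} TB -> {usable_tb:.2f} TB")
# -> 4.36 TB -> 4.34 TB -> 4.12 TB
```

Zeroing the aggregate snap reserve only wins back the last 5% step, which is why 4.34TB is the ceiling.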
For the free space issue, something doesn't add up. Please post the output of "df -Ah", "df -h", "aggr show_space -h", "aggr status" and "vol status"... from those 5 commands we should be able to see where the space is.
The usable space you are looking for is in volume snapshot space. CIFSVOL and vol2 are 500GB and 1228GB of user space, but they also have 125GB and 307GB reserved for snapshots. It looks like no snapshot schedule is in place, since snapshot used is 0GB for these volumes. With snapshot space included you are using 4208GB, close to the 4213 reported; the small difference is rounding in the -h output.
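To see how those reserve numbers fall out: each volume's total footprint is user space plus snapshot reserve, and the figures quoted are consistent with the default 20% volume snap reserve coming off the top of the total volume size. A quick check (volume names and sizes are the ones from the post):

```python
# Footprint check: user space + snapshot reserve = total volume size (GB).
vols = {"CIFSVOL": (500, 125), "vol2": (1228, 307)}

for name, (user_gb, snap_gb) in vols.items():
    total = user_gb + snap_gb
    reserve_pct = snap_gb / total * 100   # reserve as % of total volume size
    print(f"{name}: {user_gb} + {snap_gb} = {total} GB ({reserve_pct:.0f}% reserve)")
# -> CIFSVOL: 500 + 125 = 625 GB (20% reserve)
# -> vol2: 1228 + 307 = 1535 GB (20% reserve)
```

So both volumes carry the stock 20% reserve, and that 432GB is where your "missing" space is sitting.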
Most people run snapshots, and we always recommend them. If you're not going to use snapshots (let us know why first; that would be a separate debate on here), then you can remove the reserve with "snap reserve volname 0". But that's not something we recommend unless it's just a scratch pool or no snaps are needed.