For our systems (mostly FAS8Ks) the limits are 24 nodes (NAS) and 8 nodes (SAN). I honestly can't speak to "why" these limits exist - I recall a discussion in a class long ago about why they were originally imposed, but maybe some wise NetApp guru can weigh in on that part of the question.
After digging through some rusty memory banks, I seem to recall that it has something to do with the maximum number of target ports (LIFs) supported by ALUA. Since you're adding at least one iSCSI target LIF per node in a cluster, I suspect there's a hard limit on how many potential target ports ALUA/MPIO can be configured to track from the client (initiator) perspective.
I've done some unsuccessful digging into what that number actually is from a Linux/Windows perspective - if that is indeed the limitation. Other storage vendors list a "maximum number of iSCSI ports" as well, so it sounds like this isn't a NetApp-imposed limitation...
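To see why per-node LIFs pressure host-side limits, the path arithmetic is worth spelling out. This is a minimal sketch with hypothetical numbers (the function name, LIF counts, and port counts are my own illustration, not official figures): the initiator sees one path per initiator-port/target-LIF pair, so paths grow multiplicatively with node count.

```python
# Back-of-the-envelope MPIO path math (hypothetical numbers, not
# official limits): each node contributes at least one iSCSI LIF,
# and the host sees one path per (initiator port x target LIF) pair.

def mpio_path_count(nodes, lifs_per_node, initiator_ports):
    """Total paths the initiator must track for a single LUN."""
    return nodes * lifs_per_node * initiator_ports

# An 8-node cluster with 2 LIFs per node and 2 initiator ports
# already presents 32 paths to one LUN:
print(mpio_path_count(8, 2, 2))   # -> 32

# Doubling the node count doubles the path count, so any host-side
# cap on paths-per-LUN effectively caps SAN cluster size:
print(mpio_path_count(16, 2, 2))  # -> 64
```

Whatever the exact host-side ceiling is, the multiplication explains why adding SAN nodes is more expensive, path-wise, than adding NAS nodes.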
As I recall when I asked my local NetApp SEs the same question, the limits were based on the ability of the cluster to maintain consistency of both the data and internal cluster structures.
Consider: in a LUN scenario, multiple data access requests might come in on multiple paths, perhaps through multiple nodes. All the eventual disk access runs through the node that owns the aggregate on which the volume containing the LUN lives, but all the nodes have to ensure that the right data is written/read based on the order of arrival of the requests. A LUN looks like a physical disk and should act like one to the server(s) where the LUN is mapped. There is an expectation of serial results - a write followed by a read should return the previously written data. If the server for some reason sends a write along a non-optimized path with the read immediately following along an optimized path, the cluster still has to enforce the write first, then the read.
This requires some not insignificant coordination between all the nodes when serving LUNs. I'm sure the actual limit for stable LUN access is higher than 8 nodes, but of course NetApp would back that down a little to ensure stable operation. You'll note that the limits vary by hardware capability as well, so processing power factors in beyond just the node count - the same reason smaller nodes get lower aggregate/volume size limits.
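The write-then-read scenario above can be sketched as a toy model. This is purely illustrative (the class and method names are my own, not ONTAP internals): whichever node owns the aggregate acts as the single serialization point, so a write arriving on a non-optimized path is still applied before a read arriving on an optimized path.

```python
import threading

class LunOwner:
    """Toy model: the node that owns the aggregate serializes all
    LUN I/O, regardless of which path a request arrived on."""

    def __init__(self):
        self._lock = threading.Lock()
        self._blocks = {}  # lba -> data

    def write(self, lba, data):
        with self._lock:          # every path funnels through one lock
            self._blocks[lba] = data

    def read(self, lba):
        with self._lock:
            return self._blocks.get(lba)

owner = LunOwner()
owner.write(0, b"new")            # arrives via a non-optimized path
print(owner.read(0))              # read via optimized path -> b'new'
```

In the real cluster the "lock" is distributed coordination traffic between nodes, which is exactly the overhead that grows with node count.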
With 9.x, the SAN limits increase to 12 nodes for newer hardware and some of the larger older hardware. This indicates continued tuning and testing of the LUN access coordination system.
For NAS, the same coordination is not required. A NAS system assumes there will potentially be multiple independent readers and writers of the same file, which is why NFS and CIFS define locking mechanisms within their protocols. The burden of serializing operations falls to the consumers of the NAS files rather than to the cluster. Coordinating the lock requests is a much lower burden in a NAS environment than enforcing access serialization for LUNs, so the operational limits for the cluster are higher for a pure NAS cluster.
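To make the "burden falls to the consumers" point concrete, here's a minimal sketch of client-side advisory locking. It runs against a local temp file, but on an NFS mount the same `flock()` call is what triggers the protocol-level lock machinery (NLM for NFSv3); the helper name and data are my own illustration.

```python
import fcntl
import tempfile

def locked_update(path, data):
    """Take an exclusive advisory lock, write, release, read back.
    The *client* requests the lock; the server merely brokers it."""
    with open(path, "r+b") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # exclusive lock for the writer
        f.write(data)
        f.flush()
        fcntl.flock(f, fcntl.LOCK_UN)   # release so other clients proceed
        f.seek(0)
        return f.read()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name

print(locked_update(path, b"update"))   # -> b'update'
```

The cluster's job here is just bookkeeping on lock grants, which is far cheaper than guaranteeing ordered block semantics across every node.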
The HWU has the per-node and per-ONTAP-version limits that apply. Beyond that documentation I'm not sure there is anything else official and public. Certainly, I don't have any specific documentation or articles that support the SAN node count limit. The explanation is my best recollection of conversations with assorted NetApp SEs and other resources since I started working with Clustered Data ONTAP - mostly because I really like to understand these details and grab whatever tidbits on internal operations I can, whenever I can.