The short answer is of course "it depends". So - on what does it depend?
For CIFS (and NFS) the data LIF could be hosted on any node in the cluster while the volume being accessed could also be hosted on any node in the cluster. The key things to remember are:
1. The node that owns the LIF that receives the CIFS (or NFS) request is the node that does all the protocol work - checking with domain controller on security, mapping between Windows and UNIX security models, processing the actual protocol request, determining the data needed, etc.
2. The node that owns the volume/aggregate is the node that does any disk access needed to get physical data blocks to meet the request.
When the two nodes described above are not the same, the first node communicates with the second over the cluster backplane network. That is where the implications get interesting.
So - consider the structure of a CIFS SVM. Assume volumes are defined across the cluster and mounted as your needs require. Each CIFS access starts at the root volume, since every shared volume path starts with "/". Thus every CIFS access requires a touch point to the root volume of the SVM. As paths are traversed, volumes on multiple nodes might be touched depending on the mount structure.
With respect to a LIF, it is always best to have the LIF that serves the CIFS request on the same node where the target volume lives. Best case is to also have the root volume on that node and to mount the target volume directly under the root so no additional node traversal is needed. Of course, that's impractical when you have a large CIFS setup - it defeats the purpose of having a big cluster. So, options:
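For example, mounting a data volume directly under the SVM root is a single junction operation. A sketch, assuming an SVM named svm1 and a volume named data1 (both illustrative names):

```
volume mount -vserver svm1 -volume data1 -junction-path /data1
```

With that layout, a share pointing at /data1 touches only the root volume and the target volume - no deeper traversal.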
One LIF - place it on the node with the SVM root volume and allow it to migrate to any other node as needed. Technically you only need one LIF. However, as you place volumes on more than one node, you increase reliance on the backplane network. At low total system load this isn't really an issue. But if you have significant SnapMirror traffic, volume move traffic, or total CIFS traffic trying to leverage that backplane, it could add to response time.
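Creating that single LIF might look like the following sketch. The SVM, LIF, node, port, and address values are all placeholders; the failover policy shown lets the LIF migrate across the broadcast domain (newer ONTAP releases may prefer -service-policy over -role/-data-protocol):

```
network interface create -vserver svm1 -lif cifs_lif1 -role data \
    -data-protocol cifs -home-node cluster1-01 -home-port e0c \
    -address 192.0.2.10 -netmask 255.255.255.0 \
    -failover-policy broadcast-domain-wide
```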
Two LIFs - place them on the nodes of the HA pair that holds the root volume, and round-robin the two IP addresses in DNS so clients spread across them. This provides direct paths to more nodes where data may live and more total bandwidth for user data, but it actually multiplies the randomness of when the cluster backplane is used. You can do the math - you can't be sure which node will receive the protocol request: half the time it will land on the node that does not own the SVM root volume, and half the time it won't land on the node that owns the target volume. You could use special naming and map specific clients to specific IP addresses, but that would be an administrative nightmare.
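The "do the math" claim above can be checked with a tiny model. This is a sketch, not ONTAP behavior: it assumes two LIFs round-robined with equal weight across HA-pair nodes "A" and "B", the SVM root on node A, and the share's target volume equally likely on either node:

```python
from itertools import product
from fractions import Fraction

lif_nodes = ["A", "B"]        # DNS round-robin picks each LIF equally often
root_node = "A"               # node owning the SVM root volume
target_nodes = ["A", "B"]     # assumed: target volume equally likely on either node

cases = list(product(lif_nodes, target_nodes))
p_miss_root = Fraction(sum(l != root_node for l, _ in cases), len(cases))
p_miss_target = Fraction(sum(l != t for l, t in cases), len(cases))
print(p_miss_root, p_miss_target)   # -> 1/2 1/2
```

Half of all requests miss the root-volume owner and half miss the target-volume owner, exactly the randomness described above - and it only grows as volumes spread to nodes with no LIF at all.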
However - ONTAP provides a feature set to accomplish that automatically. First - load-sharing mirrors. This is a special SnapMirror type created for the root volume of an SVM that serves CIFS or NFS (it doesn't apply to SAN protocols). You have your SVM root volume. Then you create an LS mirror on each node where you might want to process data requests (you don't have to do every node in the cluster). Each mirror is a read-only copy of the root volume, giving each node a local copy of the SVM root without needing the backplane. You will need to create a schedule to keep the mirrors updated. Also, if you script volume actions, you'd need to manually trigger a mirror update to make any changes immediately available, or wait until the next scheduled replication. For example, while the SVM root volume will immediately know about a volume junction-path mount, the LS mirrors won't until the mirrors are updated. When mirrors exist, normal access including share creation works only with the data in the mirrors. Easy thing to forget.
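A minimal LS-mirror setup for a two-node case might look like this sketch. The SVM, volume, aggregate, and schedule names are all illustrative:

```
volume create -vserver svm1 -volume svm1_root_m1 -aggregate aggr1_node1 -type DP
volume create -vserver svm1 -volume svm1_root_m2 -aggregate aggr1_node2 -type DP
snapmirror create -source-path svm1:svm1_root -destination-path svm1:svm1_root_m1 -type LS -schedule hourly
snapmirror create -source-path svm1:svm1_root -destination-path svm1:svm1_root_m2 -type LS -schedule hourly
snapmirror initialize-ls-set -source-path svm1:svm1_root
```

After a scripted change such as a new junction mount, `snapmirror update-ls-set -source-path svm1:svm1_root` pushes the change to all mirrors immediately instead of waiting for the schedule.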
The second feature is request redirection. For CIFS, it works like this. Define a share. The target volume of that basic share lives on a specific node. When a user connects on any LIF to that share, if the LIF is not on the node that owns the target volume AND there is a LIF in the SVM on the owning node, respond with a redirect using the IP address of the target node. Kinda like a poor man's DFS setup. This only works for the basic share, not any subsequent movement down the shared tree of information. The redirection feature is an option that can be enabled per SVM.
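Turning the redirection behavior on is a single per-SVM option (again, svm1 is a placeholder name):

```
vserver cifs options modify -vserver svm1 -is-referral-enabled true
```

Note that clients must be able to resolve and reach the referred-to LIF address, and only the initial share connection is referred, as described above.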
Both these features address the issues described in the "two LIFs" section above. And they can spread to all other nodes - for instance 4 LIFs, one per node, with 4 LS mirrors and CIFS redirection enabled. Or if you need more bandwidth you could go 2 LIFs per node, 8 total, for some significant power. At that scale you'd definitely want LS mirrors and redirection enabled.
Of course you can apply all of these in any combination. You could still use LS mirrors with just one LIF; that way, wherever the LIF lands, a local copy of the root volume is ready to go. I've used this style on SVMs with limited needs, for instance to avoid consuming unneeded IP addresses. Or create one LIF per node on just one HA pair - perhaps the data for the SVM will never live anywhere but on that pair.
There is no perfect answer to how many you need. Any choice is possible, any choice is valid based on your specific requirements. ONTAP has features to make the best use of whatever number of LIFs makes sense for you.