Whether to go with one SVM or two really comes down to the design considerations in your environment. Are the two Hyper-V clusters going to host different customers' operations and potentially be managed by separate hypervisor administrators? If so, separating them into two SVMs lets you leverage the secure multi-tenancy features that SVMs provide (e.g. separate SVM admin accounts, network segregation, etc.).
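As a purely illustrative sketch (the SVM names, aggregates, and admin account below are placeholders, not anything from your environment), delegating each cluster to its own SVM and its own vsadmin would look roughly like this from the clustershell:

vserver create -vserver svm_hv1 -rootvolume svm_hv1_root -aggregate aggr1_node1 -rootvolume-security-style ntfs
vserver create -vserver svm_hv2 -rootvolume svm_hv2_root -aggregate aggr1_node2 -rootvolume-security-style ntfs
security login create -vserver svm_hv1 -user-or-group-name hv1admin -application ssh -authentication-method password -role vsadmin

Each SVM then gets its own LIFs, volumes, and admin account, so the two admin teams never see each other's storage.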
If you're not separating the Hyper-V clusters for security reasons and you have basically the same kinds of operations/tenants in both, then splitting them across two SVMs doesn't really buy you anything. You'll end up creating (at least) two iSCSI LIFs for the SVM - one per node. Hyper-V cluster #1 will connect to the iSCSI LIF on node #1 (where its LUN lives) and Hyper-V cluster #2 will connect to the iSCSI LIF on node #2.
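If you do keep it to a single SVM, a rough sketch of that layout looks something like the following (LIF names, ports, addresses, and volume/LUN names are just placeholders for illustration):

(one iSCSI LIF per node - cluster #1 logs in to the node #1 LIF, cluster #2 to the node #2 LIF)
network interface create -vserver svm_hv -lif iscsi_lif_n1 -role data -data-protocol iscsi -home-node cluster-01 -home-port e0c -address 192.168.10.11 -netmask 255.255.255.0
network interface create -vserver svm_hv -lif iscsi_lif_n2 -role data -data-protocol iscsi -home-node cluster-02 -home-port e0c -address 192.168.10.12 -netmask 255.255.255.0

(one LUN per Hyper-V cluster, each in a volume living on that cluster's "home" node)
lun create -vserver svm_hv -path /vol/hv1_vol/hv1_lun -size 2tb -ostype hyper_v
lun create -vserver svm_hv -path /vol/hv2_vol/hv2_lun -size 2tb -ostype hyper_v

(one igroup per Hyper-V cluster holding that cluster's initiator IQNs, then map its LUN to it)
lun igroup create -vserver svm_hv -igroup hv_cluster1 -protocol iscsi -ostype hyper_v
lun map -vserver svm_hv -path /vol/hv1_vol/hv1_lun -igroup hv_cluster1

On the Windows side you'd typically also establish sessions to the partner node's LIF so MPIO has a path to follow during a failover.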
If you haven't already evaluated it, be sure to look into SnapManager for Hyper-V for your point-in-time backup/restore copies. We've been using the VMware equivalent and it's a great product stack. Additionally, since you'll be running these Hyper-V clusters on SATA, be aware of your performance during a boot storm. Without some sort of flash in front of those spindles, hypervisors can generate very impactful I/O - especially if you reboot lots of your guests simultaneously.
Also, be aware that with SMB 3.0 you can host Hyper-V virtual machine storage over an SMB share if you don't want to bother with iSCSI:
http://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cifs-hypv-sql%2FGUID-BC313C3E-ADE2-48F6-8E88-2141BCBFD006.html
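From the documentation, the ONTAP side of that boils down to a CIFS server plus a continuously-available share that the Hyper-V hosts point their VM paths at - something along these lines (the server name, domain, and volume/share names are placeholders):

vserver cifs create -vserver svm_hv -cifs-server HVSMB01 -domain yourdomain.local
volume create -vserver svm_hv -volume hv_smb_vol -aggregate aggr1_node1 -size 4tb -junction-path /hv_smb_vol
vserver cifs share create -vserver svm_hv -share-name hv_vms -path /hv_smb_vol -share-properties oplocks,browsable,changenotify,continuously-available

The Hyper-V hosts would then store the VM configuration and VHDX files under \\HVSMB01\hv_vms.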
I've not directly utilized this solution (since we're an ESXi shop), but it's something to research as you implement your infrastructure.
Good luck,
Chris