My reply is a bit dated, but I, too, am very new to NetApp.
I see that Thomas Glodde suggests it is best to keep the backend as a single disk, or at most a couple of disks. I lean towards agreeing, in terms of manageability (fewer calls in the middle of the night that a disk is full or nearly full).
There are a couple of additional areas I would like to explore:
a) As you're using MS SQL Server, I assume you're running on MS Windows. It seems that if you are using HBA cards, the max queue depth is 256 or so. In addition, this max queue depth appears to apply per disk; to get more aggregate queue depth you would have to expose more physical\logical disks. Does anyone know whether these are logical or physical disks?
b) Also, it is always helpful to pre-allocate MS SQL Server data files before you actually need the space. As a side effect, pre-allocating in big enough increments \ sizes gives you a semblance of keeping your data (whether regular, LOB, or log) in more contiguous chunks.
c) If you do not care much for pre-allocation, tuning the data file growth size can help as well.
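To illustrate (b) and (c), here is a T-SQL sketch; the database name, logical file names, and sizes are made up for the example and should be adjusted to your environment:

```sql
-- Pre-allocate the data file to 50 GB and set a fixed 4 GB growth
-- increment, so any further growth happens in large contiguous chunks.
-- (Database and logical file names here are hypothetical.)
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_Data, SIZE = 51200MB, FILEGROWTH = 4096MB);

-- Do the same for the log file, with a smaller fixed increment.
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_Log, SIZE = 8192MB, FILEGROWTH = 1024MB);
```

Fixed-size growth increments (rather than percentage growth) also keep growth events predictable as the files get large.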
d) Also, via "Local Security Policy", grant "Perform Volume Maintenance Tasks" to the account that the "MS SQL Server" service runs as. This affords you the benefits of "Instant File Initialization".
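On SQL Server 2016 SP1 and later, you can check from within SQL Server whether the service account actually picked up the privilege; a sketch:

```sql
-- Shows whether the engine service account has the
-- "Perform Volume Maintenance Tasks" privilege, i.e. whether
-- Instant File Initialization is in effect (SQL Server 2016 SP1+).
SELECT servicename,
       service_account,
       instant_file_initialization_enabled
FROM   sys.dm_server_services
WHERE  servicename LIKE 'SQL Server (%';
```

Note that Instant File Initialization applies to data files only; log files are always zero-initialized.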
Now my follow-up questions:
a) Is anyone aware of NetApp-specific performance counters that can be used within MS Windows?
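I have not found NetApp-specific counters myself, but as a generic starting point the built-in Windows disk counters can be pulled from the command line; a sketch (the counter instances and sampling settings here are just examples):

```
typeperf "\PhysicalDisk(*)\Current Disk Queue Length" ^
         "\PhysicalDisk(*)\Avg. Disk sec/Read" ^
         "\PhysicalDisk(*)\Avg. Disk sec/Write" -si 5 -sc 12
```

This samples every 5 seconds, 12 times, for each physical disk instance; whether any of these map cleanly onto the NetApp-presented LUNs is part of my question.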
b) Does NetApp have native performance \ throughput measurement tools? The tools I have seen on the open market tend to want to generate their own traffic\data. But in our case we would more likely want to measure throughput against our own normal business data \ traffic.