If we use provisioned ONTAP RAID-6 LUNs on Solaris hosts, with either ZFS or Solaris Volume Manager, is there any reason we should use a redundant RAID configuration on the host — any kind of RAID or mirroring?
Since the disks are already protected on the NetApp filers, can I simply use the devices without any mirroring or RAID for extra protection?
With ONTAP LUNs there is no need for additional host-OS RAID. Your LUNs are already protected against hardware failure on the NetApp storage system by RAID-DP (RAID-6) or RAID-TEC, along with spares. That doesn't mean you can't use a host-based volume manager for other purposes — for example, striping or concatenating multiple LUNs into a single larger volume that you make available to your host.
It is completely up to you how you manage your storage. Most Solaris admins use either ZFS or Veritas Volume Manager to work with their storage.
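As a sketch of the "combine LUNs into a bigger volume" idea above — the device names below are hypothetical placeholders for whatever paths your ONTAP LUNs appear under on Solaris (check with `format` or `mpathadm`):

```shell
# ZFS: a simple striped (non-redundant) pool across two ONTAP LUNs.
# No host-side mirror or raidz is needed -- RAID-DP on the filer already
# provides the disk protection.
zpool create datapool \
    c0t600A098038303053453F463045727149d0 \
    c0t600A098038303053453F46304572714Ad0

# SVM equivalent: one stripe (d10) across the same two LUNs,
# with a 512 KB interlace:
# metainit d10 1 2 c1t0d0s0 c1t1d0s0 -i 512k
```

Either way the host sees one larger volume, while all redundancy stays on the NetApp side.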
It may help if I clarify a little how ONTAP writes data to the LUN...
Writes are not committed directly to disk. They are held in NVRAM first (at which point the write is acknowledged to the host); once certain thresholds are met, the data is written out in stripes to the RAID groups that make up the aggregate. Writes are distributed as evenly as possible across all spindles. The larger the RAID groups, the larger the stripes that can be written, and so the more spindles are hit. Writing full stripes also reduces the parity calculations and therefore lets the writes complete faster. I've not gone into too much detail, but this is the high-level process.
The LUNs do not map to specific disks; ONTAP will try to write to as many disks in the aggregate as possible, depending on where the free space is. As long as you have even RAID groups (roughly the same number of disks in each), with a sufficient number of disks per group and plenty of free space, there is no need to stripe on the host. The aggregates will supply the redundancy and performance, provided they have been designed correctly.
The only scenario where host striping might help is when you have a definite disk bottleneck; a possible solution there is to present another LUN from a different node and aggregate, thereby hitting different NVRAM and disks.
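In ZFS terms, the bottleneck workaround above could look like the following — a hedged sketch, where the device name is a placeholder for a LUN mapped from a second ONTAP node:

```shell
# Add a LUN served by a different ONTAP node/aggregate as a new top-level
# vdev in the existing pool. ZFS will spread new writes across both vdevs,
# so the two LUNs hit different NVRAM and different disk sets on the filer.
zpool add datapool c0t600A098038303053453F46304572714Bd0

# Confirm the pool now has two top-level vdevs:
zpool status datapool
```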
Sorry, maybe I need to expand a little on the write acknowledgement...
Since the write is acknowledged from NVRAM, the point of a good aggregate/RAID-group layout is to ensure that when NVRAM flushes the writes to disk (a consistency point, or CP), the flush happens as quickly as possible. If there are delays, a busy NVRAM may fill and ask the host to back off, causing performance degradation.
Based on what you just described — which mostly matches my understanding — I don't see why, in practice, people still use ZFS or SVM. Can you think of any reasons, other than the one you pointed out about using two LUNs under different nodes?
Host-based volume managers — ZFS, SVM, LVM, Windows Dynamic Disks, etc. — still have a place on top of a SAN: you can use them for seamless upgrades, migrations, and LUN replacements when needed, but they are not generally used for data protection.
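For example, a seamless LUN replacement with ZFS could be sketched like this (device names are hypothetical placeholders for the old and new LUN paths):

```shell
# Present a new LUN from the target aggregate (or a new array), then let
# ZFS copy the data across while the pool stays online:
zpool replace datapool c0t600A...OLDd0 c0t600A...NEWd0

# ZFS resilvers the data onto the new LUN in the background; once the
# resilver completes, the old LUN is detached automatically and can be
# unmapped on the storage side.
zpool status datapool   # watch resilver progress
```

This is exactly the kind of migration/replacement use case where a host volume manager earns its keep even though the SAN handles all the redundancy.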