ONTAP Discussions

On top of ONTAP RAID-6 LUNs, should a mirror / RAID be created on Solaris 10 servers?

heightsnj

If we use provisioned ONTAP RAID-6 LUNs on Solaris hosts, whether with ZFS or Solaris Volume Manager, is there any reason we should use a redundant configuration on the host, such as RAID or mirroring?

 

Since the disks are already protected on the NetApp filers, can I simply use the devices without any mirroring or RAID for extra protection?

 

Looking forward to your input!

7 REPLIES

sgrant

Hello, correct: any LUNs presented to Solaris are already protected by the RAID groups in the hosting aggregates, which guard against multiple disk failures.

 

 

The TR paper Oracle Databases on ONTAP (http://www.netapp.com/us/media/tr-3633.pdf) addresses RAID protection as well as ZFS best practices to ensure maximum performance.

 

Hope this helps,

 

Thanks,

Grant.

sbotkin

With ONTAP LUNs there is no need to add host OS RAID on top. Your LUNs are already protected against hardware failures on the NetApp storage by RAID-DP or RAID-TEC (along with spares). That doesn't mean you can't use a host-based volume manager for other purposes, for example striping or concatenating multiple LUNs into a single larger volume that you make available to your host.
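
For example, a rough sketch of a striped (non-redundant) ZFS pool built from two such LUNs; the device names below are just placeholders for your actual multipathed LUN devices:

  # Placeholder device names; substitute the real LUN device paths on your host
  zpool create datapool c2t0d0 c2t1d0   # stripes data across both LUNs, no host-side redundancy
  zfs create datapool/export            # carve a filesystem out of the pool
  zpool status datapool                 # confirm the pool layout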

 

It is completely up to you how you manage your storage.  Most Solaris admins use either ZFS or Veritas Volume Manager to work with their storage.

 

We do, however, recommend that if you use Snapshots as your backup method, you use one of our software products to manage them, SnapCenter being the application of choice.  You can find additional information on SnapCenter at: https://mysupport.netapp.com/documentation/docweb/index.html?productID=62400&language=en-US

 

Hope this helps.

heightsnj

If I need larger LUNs, or the effect of striping, both can already be achieved by ONTAP / WAFL, either by increasing the LUN size or because writes are spread across the entire aggregate.

 

Sorry for my insistence; I am just not sure what the point is of configuring any redundancy on top of ONTAP devices in ZFS or SVM.

Plus, if the disks are protected again by an extra layer, wouldn't that just add overhead and administration time?

 

I read TR-3633, but the document does not address the questions I raised here.

sgrant

It may help if I clarify a little how ONTAP will write data to the LUN...

 

Writes are not written directly to disk. They are held in NVRAM first (at which point the write is acknowledged); once certain thresholds are met, the data is written in stripes to the RAID groups that make up the aggregate. All writes are distributed as evenly as possible across all spindles. The larger the RAID groups, the larger the stripes that can be written, and so the more spindles are hit. Writing in full stripes also reduces the parity calculations and therefore allows writes to complete faster. I've not gone into too much detail, but this is the high-level process.

 

Please see the Storage Subsystem Configuration Guide (http://www.netapp.com/us/media/tr-3838.pdf) for more info.

 

A LUN is not tied to specific disks; ONTAP will write to as many disks in the aggregate as possible, depending on where the free space is. As long as you have even RAID groups (roughly the same number of disks in each), each with a sufficient number of disks, as well as plenty of free space, there is no need to stripe on the host. The aggregates will supply the redundancy and performance, provided they have been designed correctly.
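
In practice that means a single LUN can be used as-is on the host; a rough ZFS sketch, with a placeholder device name:

  # Placeholder device name; substitute the actual LUN device
  zpool create orapool c3t0d0        # one LUN, no mirror or raidz on the host
  zfs create orapool/data            # ONTAP RAID-DP/RAID-TEC provides the disk protection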

 

The only scenario where host-side striping might help is where you have a definite disk bottleneck; a possible solution could then be to present another LUN from a different node and aggregate, thereby hitting different NVRAM and disks.
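
If you did go that route, ZFS could simply spread new writes across both LUNs once the second one is added as another top-level vdev; again a rough sketch with placeholder device names:

  # Placeholder device for a LUN presented from a different node/aggregate
  zpool add orapool c4t0d0           # new top-level vdev; ZFS stripes new writes across both LUNs
  zpool status orapool               # verify both LUNs are now in the pool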

 

Hope this helps.

 

Thanks,

Grant.

sgrant

Sorry, maybe I need to expand a little on the write acknowledgement...

 

Since the write is acknowledged from NVRAM, the point of having a good aggregate/RAID group layout is to ensure that when NVRAM flushes the writes to disk (a consistency point, or CP), it happens as quickly as possible. If there are delays, a busy NVRAM may fill and ask the host to back off, causing performance degradation.

 

The Consistency Point FAQ explains this in a lot more detail: https://kb.netapp.com/support/s/article/faq-consistency-point?language=en_US

 

 

heightsnj

 

Based on what you just described, which is mostly how I understood it, I don't see why in reality people are still using ZFS or SVM. Can you think of any reasons, other than the one you just pointed out about using two different LUNs under different nodes?

AlexDawson

Host-based volume managers like ZFS, SVM, LVM, Windows Dynamic Disks, etc. still have a place on top of an underlying SAN: you can use them to do upgrades, migrations, and LUN replacements seamlessly, but they are not generally used for data protection.
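
For example, a rough sketch of a seamless LUN replacement with ZFS, using placeholder device names (the old LUN is temporarily mirrored onto the new one, then detached):

  # Placeholder devices: c3t0d0 is the old LUN, c5t0d0 the newly presented one
  zpool attach mypool c3t0d0 c5t0d0   # temporarily mirror the old LUN onto the new one
  zpool status mypool                 # wait until the resilver completes
  zpool detach mypool c3t0d0          # drop the old LUN; the pool now lives on the new LUN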
