Network and Storage Protocols

LUNs or Volumes When Using NFS



We are deploying 11g RAC on Linux using a FAS32-series NetApp filer.

We will be using Oracle Direct NFS to access the storage and will be creating various ASM disk groups to hold the data, clusterware files, backups, etc.

A couple of questions:

1. In our scenario, how would the storage be presented to the Linux servers? Would it be in the form of LUNs or volumes?

How should IT configure the storage to be presented to the database servers? We are going to use Oracle Direct NFS to connect to that storage on the NetApp filer.

We read somewhere that LUNs are presented to the server when using iSCSI or FC, but when using NFS the storage is presented in the form of volumes. Is that true? If yes, can you give an example? Say the aggregate is 2 TB and we create three FlexVols of 300 GB each from it, and then create three ASM disk groups of 300 GB each with external redundancy using those three volumes. Or should IT instead create three 100 GB LUNs from each of the three volumes and present those LUNs to the database servers?

2. Is there a one-to-one mapping between FlexVols and LUNs? Can multiple LUNs be carved out of a single FlexVol? Do we even need LUNs when using NFS?





Hi Kamran,

When you are using Direct NFS, you give the volume export paths to the sysadmins so that they can add them to the mount table (/etc/fstab).
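As a sketch of that handoff (the filer name, IPs, and paths below are made-up examples): the sysadmin first mounts the FlexVol's export over kernel NFS, and an oranfstab entry then lets the Direct NFS client take over I/O to the same mount point.

```
# /etc/fstab on the database server, with commonly recommended
# Oracle-on-NFS mount options ('filer1' and the paths are hypothetical):
filer1:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0

# $ORACLE_HOME/dbs/oranfstab -- tells Direct NFS which server,
# network paths, and export/mount pairs to use:
server: filer1
path: 192.168.10.11
path: 192.168.10.12
export: /vol/oradata  mount: /u02/oradata
```

Note that nothing here mentions a LUN: the export handed to the host is the volume itself.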

For your second question: yes, you can create multiple LUNs in a single volume, but doing so is not recommended.

Create one LUN per volume, so that you can easily drill down when you have any sort of issue.

And when you are using NFS, LUNs don't come into the picture at all.

NFS deals with storage at the volume level.

LUNs only come into the picture if you are using a block protocol such as FC or iSCSI.
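To make the contrast concrete, here is a sketch using Data ONTAP 7-mode commands (the volume, host, and igroup names are hypothetical):

```shell
# NFS: export the FlexVol itself; the host mounts it as a filesystem.
exportfs -p rw=dbhost1 /vol/oradata

# FC/iSCSI: carve a LUN inside a volume and map it to the host's
# initiator group; the host then sees a raw block device.
lun create -s 100g -t linux /vol/oradata/lun0
lun map /vol/oradata/lun0 rac_igroup 0
```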

Hope this helps.



Hi Vijay,

Thanks a lot for your response.

Let us suppose we have a 2 TB aggregate built on 10 physical disks (configured with double parity) and want to create a 100 GB volume. When we create this 100 GB volume, do we have the option of specifying which underlying physical spindles are used for it, or will the NetApp filer by default spread the data blocks across all 10 disks?

What we are trying to figure out is how the NetApp filer balances I/O across volumes.

Also, assuming we create 4 volumes on this 2 TB aggregate, is there a possibility that one of these volumes could fail while the others continue to work normally? If yes, how, and what would our options be to fix it?

We will be placing different Oracle files on different dedicated volumes, and are therefore trying to determine the best volume layout to ensure maximum availability.



There is no way to specify that a volume be created on specific disks.

NetApp takes care of spreading the data across all disks in the aggregate.

Read up on the WAFL (Write Anywhere File Layout) process and you will see how data is written to cache and then to disk; NetApp spreads the blocks in such a way that it can reach them in a minimal amount of time.
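For example, with 7-mode commands (names and sizes hypothetical), any FlexVol created on an aggregate is automatically striped across all of the aggregate's spindles; there is no per-volume disk selection:

```shell
# RAID-DP aggregate over 10 disks (ONTAP chooses from the spare pool).
aggr create aggr1 -t raid_dp 10

# Every FlexVol on aggr1 shares all 10 underlying spindles.
vol create oradata aggr1 100g
vol create oralogs aggr1 100g
```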

The only way a single volume can fail while the others keep working is if you take that volume offline; since all volumes in an aggregate share the same disks, a disk failure beyond what RAID-DP can absorb would take down the whole aggregate, not just one volume.

Regarding best practices for running Oracle on NetApp, I am sure you can find a doc on the NOW site.


Just adding some information:

1- The best practices for Oracle with ASM are in TR-3329; just search the website.

2- What you can do for volume I/O is use FlexShare to prioritize some volumes, but the prioritization only takes effect when the controller is using all of its CPU.
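For instance, with 7-mode FlexShare (volume names hypothetical), you might favor data files over backups; again, these relative priorities only matter once the controller's CPU is saturated:

```shell
priority on                                  # enable FlexShare
priority set volume oradata level=VeryHigh   # favor the datafile volume
priority set volume orabak level=Low         # deprioritize the backup volume
```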