2012-07-19 01:01 AM
What is the best practice to create a LUN in the Volume?
Do I need to create one LUN per volume, or can I use the same volume for multiple LUNs?
If I use the same volume for multiple LUNs, how can I restore a single LUN?
If I have to provision a 100 GB LUN, what is the best-practice size for the volume?
Solved!
2012-07-19 09:13 AM
For question 1:
The best practice is to create one LUN per volume.
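As a sketch, using the 7-Mode CLI with hypothetical aggregate, volume, and LUN names, that pattern looks like:

```shell
# One dedicated volume per LUN (names here are illustrative only):
vol create vol_lun0 aggr1 220g
lun create -s 100g -t windows_2008 /vol/vol_lun0/lun0
```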
For question 2:
You can restore a LUN by creating a LUN clone (backed by the parent volume plus a Snapshot copy) of that particular LUN.
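A sketch of that restore path in the 7-Mode CLI (the Snapshot, LUN, and igroup names are hypothetical):

```shell
# Clone the LUN out of an existing Snapshot copy, then map the clone:
lun clone create /vol/vol_lun0/lun0_restored -b /vol/vol_lun0/lun0 nightly.0
lun map /vol/vol_lun0/lun0_restored my_igroup
```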
The recommended volume size for a 100 GB LUN is 220% of the LUN size: 100% for the LUN itself, 100% fractional reserve for Snapshot overwrites, and 20% Snapshot reserve.
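The sizing arithmetic above works out as a trivial shell calculation:

```shell
# 100% LUN + 100% fractional reserve + 20% snapshot reserve = 220% of LUN size.
lun_gb=100
vol_gb=$(( lun_gb * 220 / 100 ))
echo "volume size: ${vol_gb} GB"   # prints "volume size: 220 GB"
```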
2012-07-19 10:17 AM
If we set both the fractional reserve and the Snapshot reserve to 0%, it still takes 20% extra space from the aggregate. I think that is just because of autogrow. But my question is: if I create a 99 GB LUN in a 100 GB volume, is there any performance impact?
2012-07-19 10:47 AM
Let me try to help out here.
Some things to consider:
1) You can have a maximum of 500 volumes per controller, and if you are running an HA pair, you can have a maximum of 500 volumes for both machines combined. (In case of a takeover, the total number of volumes cannot exceed 500.) Would that be enough for you?
2) Deduplication is per volume. If you use qtrees instead, you could (could!) benefit from deduplication. If you use one LUN per volume, you do not benefit from deduplication, since you cannot deduplicate between volumes.
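For instance, deduplication is enabled at the volume level in the 7-Mode CLI (the volume name is a hypothetical example):

```shell
# Enable dedup on the volume holding the qtree-based LUNs, then scan existing data:
sis on /vol/vol_qtrees
sis start -s /vol/vol_qtrees
```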
3) On the question of performance, you need to know how WAFL works. A volume is only a database entry, and so is a LUN. An empty volume and/or an empty LUN does not take any space in the aggregate.
If you look at hard-disk performance, you only have to consider the aggregate size and how full it is. Since a volume and a LUN are only virtual entities, it does not matter what size they are. (You could make a 10 TB volume on a 1 TB aggregate.) The only hard-disk performance hit you get is when your aggregate gets physically full, say 90% full. And remember, logically full is not the same as physically full, because you could benefit from deduplication (100 GB logical could be as low as 10 GB physical, for example). If you are concerned about performance, you need to monitor the physical usage of the aggregate and make sure it does not go over 90%.
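The 90% check above can be sketched as a small shell helper. The numbers here are hypothetical; in practice you would feed in the values reported by `df -A <aggr>`:

```shell
# Hypothetical aggregate usage figures (would come from `df -A` in practice):
used_gb=850
total_gb=1000
pct=$(( used_gb * 100 / total_gb ))
if [ "$pct" -ge 90 ]; then
  echo "WARN: aggregate ${pct}% full"
else
  echo "OK: aggregate ${pct}% full"
fi
```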
Therefore, the answer is: there is no performance impact whatsoever if you create a 99 GB LUN in a 100 GB volume.
4) On Snapshot reserve, the new default is 5% in Data ONTAP 8.1, so it would be wise to monitor it carefully!
Hope this helps
Certified NetApp Instructor
2012-07-19 05:09 PM
In HA cluster configuration, the 500 vol limit applies to each head individually, so the overall limit for the HA pair is doubled (500 + 500). And I think this doubles to 1000 per head in DOT 8.1.1. If you plan to use NDU (and you should), then the limits will be somewhat lower, depending on model, etc.
2012-07-19 11:43 PM
That idea isn't quite right. In HA, as you know, the problem comes up after takeover. You have a maximum of 500 volumes per controller, but in takeover mode both nodes run on one controller, which means you have only 500 overall, as John mentioned.
Depending on traffic, the best approach is to define LUNs on qtrees, but you have to consider LUN alignment to avoid performance degradation.
To optimize space consumption, I recommend reconsidering thin provisioning: no fractional reserve, no more snapshots than required, autogrow on, etc.
There is a good TR on this at NetApp Support: http://media.netapp.com/documents/tr-3965.pdf
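The thin-provisioning settings mentioned above map to 7-Mode volume options roughly like this (a sketch only; the volume name and autosize limits are hypothetical, and the right values depend on your environment):

```shell
# Thin-provision the volume: no space guarantee, no fractional reserve,
# no snapshot reserve, autosize on with an illustrative cap and increment.
vol options vol_lun0 guarantee none
vol options vol_lun0 fractional_reserve 0
snap reserve vol_lun0 0
vol autosize vol_lun0 -m 150g -i 10g on
```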
2012-07-24 11:52 AM
Your first point is incorrect. From the storage admin guide for aggregates and volumes: "In an HA configuration, these limits apply to each node individually, so the overall limit for the pair is doubled."
- Disk maximums are per HA pair.
- Aggregate/volume maximums are per node.