ONTAP Hardware
ONTAP Hardware
Hello,
can anybody explain to me what the point is of creating multiple FlexVols in one aggregate?
As I understand it, you could use a single FlexVol spanning the whole aggregate and then divide that FlexVol with qtrees if needed.
So from my point of view the FlexVol seems like an unnecessary layer in the NetApp disk structure.
Thanks.
BR,
Danas
Snapshots are taken per volume. So if you put Exchange, SQL and SAP into ONE big FlexVol, you will snapshot them all at once, even if you only want to back up Exchange.
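To illustrate (volume and snapshot names here are made up for the example), with Exchange in its own FlexVol you can snapshot just that volume:

filer> snap create vol_exchange exch_backup.1
filer> snap list vol_exchange

With everything in one big volume, that same snap create would capture the SQL and SAP data as well.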
Thanks Thomas.
Is the use of Snapshots the only reason to have multiple FlexVols, or are there more?
Thanks.
BR,
Danas
Space reservation, fractional reserve, SnapMirror, FlexClone, non-disruptive volume moves, …
As it has already been said, there are several other reasons, but snapshotting is usually the easiest way to explain to a new NetApp customer why we divide into FlexVols.
Usually we go with 1 volume holding 1 qtree holding 1 LUN for SAN, or 1 volume holding several qtrees for NAS. Besides that, keep the deduplication FlexVol size limit in mind when sizing NetApp volumes, e.g. 4 TB on a FAS3140.
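A minimal sketch of that SAN layout in 7-mode syntax (the aggregate, volume, sizes and LUN type are just example values):

filer> vol create vol_sql aggr0 500g
filer> qtree create /vol/vol_sql/sql
filer> lun create -s 400g -t windows /vol/vol_sql/sql/sql.lun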
Thanks guys.
BTW, what is the point of using a qtree in a volume if I have only one LUN and want to use all of the volume's available space for that LUN?
Thanks again.
Danas
A qtree does not hurt you, and if you ever plan on async replication like qtree SnapMirror and/or SnapVault, you do need qtrees.
You can add qtrees to existing volumes and move LUNs into them online if it's within the same volume, btw:
filer> qtree create /vol/san/disk1
filer> lun move /vol/san/disk1.lun /vol/san/disk1/disk1.lun
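And if you later set up qtree SnapMirror for that qtree, the relationship would be initialized on the destination along these lines (the destination filer and volume names are hypothetical):

destfiler> snapmirror initialize -S filer:/vol/san/disk1 destfiler:/vol/san_mirror/disk1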
Thanks Thomas.