ONTAP Discussions

Storage Design: planning the RAID Groups and Storage Hierarchy (Aggregates, FlexVols, Qtrees)

stuengland

Hi,

This will be my first NetApp implementation. I am under some pressure to get the storage design done. I have created a NOW account, but since I don’t own any hardware yet I am unable to access the majority of the content on the NOW site. I have contacted my supplier to try and help me while I wait for the delivery. Many of these questions could be answered if this process were a bit better, or if access to simple things like the administration guides were made public. When I contacted NetApp support I was told to contact my regional representative, whom I have spoken to in the past and who has not been helpful.

Anyway, on to my questions...

Note: 96% of our servers are hosted on VMware

I don’t have any opportunity to change what’s been ordered, so I need to work with this list of kit

HARDWARE CONFIGURATION

Primary Site

FAS3240 (two heads)

3 X 450GB Shelves (72 disks)

1 X 1TB Shelf, fully populated (24 disks)

NFS

CIFS

All extended software

Secondary Site

FAS2040

1 X 2TB Shelf (12 disks)

NFS

SW-BASE-PK

I plan to create the Raid Groups as follows

Primary Site


Aggr0

3 Raid Groups configured as: Spare, Parity, Parity, 15 X Data

This gives a total of 27TB usable

Aggr1

2 Raid Groups configured as: Spare, Parity, Parity, 9 X 1TB

This gives a total of 18TB usable

My intention is to use Aggr0 for our production, development and test systems. I will apply minimal de-duplication, only to systems which automatically create versions of many documents, and to Symantec Enterprise Vault. 27TB is more than enough for me to host these systems on.

I want to turn Aggr1 into a SnapVault for the production systems and some development systems. My understanding is that a SnapVault is essentially a dumping area for snapshots of systems. I wish to enable de-duplication on this area of storage to increase the number of snapshots I can keep, as many operating system snapshots will be retained, with a lot of common data.

Secondary Site


Aggr0

1 Raid Group configured as: Spare, Parity, Parity, 9 X 2TB

This gives a total of 18TB usable

I wish to turn this into a SnapMirror destination for Aggr1 at the Primary Site.
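
If I've understood the syntax correctly, I imagine the aggregates would be created roughly like this (Data ONTAP 7-mode commands, with placeholder names and the disk counts from my plan above, so please treat this as a rough sketch and correct me if it's wrong):

# Primary site: 3 RAID-DP groups of 17 disks (2 parity + 15 data) from the 450GB shelves
aggr create aggr_sas -t raid_dp -r 17 51

# Primary site: 2 RAID-DP groups of 11 disks (2 parity + 9 data) from the 1TB shelf
aggr create aggr_sata -t raid_dp -r 11 22

# Secondary site: 1 RAID-DP group of 11 disks (2 parity + 9 data) from the 2TB shelf
aggr create aggr_sv -t raid_dp -r 11 11

# The spares are simply whichever disks are left unassigned to any aggregate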

In the event of a DR situation we will order in more disks to restore the snapshots to, and connect VMware hosts to the Secondary Site filer.

Questions:

  1. Does this make sense? Is my understanding of the various technologies correct? Are my RAID groups OK?
  2. Is it possible to “switch on” a snapshot that is held inside a SnapVault, or does the snapshot have to be moved somewhere else first?
  3. What is SW-BASE-PK?

Storage layout


I need to keep various data sets of varying criticality

Production VMware operating systems and local storage data for applications. Not LUNs, VMDKs

Development VMware operating systems of the above

Test VMware operating systems and associated data

Direct CIFS shares for a document management system, users' personal home directories, and a few applications that run from CIFS shares

Each of these different categories of data is of varying criticality. I wish to control snapshot schedules, retention schedules, etc. differently for these different types of data. For this reason I was thinking of creating a separate FlexVol for each of the different data types (operating systems, data, CIFS shares), as it’s my understanding that de-duplication occurs at a FlexVol level and not an aggregate level. As previously mentioned, I do not believe I will require de-duplication, but I wish to plan for the future. I do not have any need for quotas.
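
From the documentation I have been able to see so far, both of those look like per-volume settings, something like the following (7-mode commands with example names; please correct me if I have the syntax wrong):

# De-duplication (A-SIS) is switched on per FlexVol
sis on /vol/FlexVol1
# Scan and de-duplicate the data already in the volume, not just new writes
sis start -s /vol/FlexVol1

# Snapshot schedules are also per volume: 0 weekly, 2 nightly, 6 hourly copies at 08:00, 12:00, 16:00 and 20:00
snap sched FlexVol1 0 2 6@8,12,16,20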

I was thinking of something like this:

/aggr0/FlexVol1(Operating Systems)/Production <- Critical SnapShot Schedule

/aggr0/FlexVol1(Operating Systems)/Development <- Development SnapShot Schedule

/aggr0/FlexVol1(Operating Systems)/Test

/aggr0/FlexVol2(Data)/Production <- Critical SnapShot Schedule

/aggr0/FlexVol2(Data)/Development <- Development SnapShot Schedule

/aggr0/FlexVol2(Data)/Test

/aggr0/FlexVol3(CIFS)/Production <- Critical SnapShot Schedule

/aggr0/FlexVol3(CIFS)/Development <- Development SnapShot Schedule

/aggr0/FlexVol3(CIFS)/Test

Questions:

  1. Again, does this make sense? Am I understanding the relevant technologies correctly? Is this the best way to do things?
  2. Am I correct in my understanding that deduplication occurs at FlexVol level?
  3. Can I create snapshot schedules and SnapVault schedules on a directory within a FlexVol?

PS  I am having space bar issues so please excuse any typos

Thanks

Stuart


bsti

Stuart,

Each of your controllers will likely have a built-in 3-disk aggr0, which will contain vol0, the root volume. I would recommend against putting anything else in this aggregate; create new aggregates for your data instead. Also, you need to make sure you keep probably two spares of each size/kind of disk you have (450GB and 1TB). These spares should not be assigned to aggregates.
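
Once the system is up, a quick way to check that (going from memory on 7-mode syntax, so verify against the man pages) is:

# List the spare disks so you can confirm two of each size/type remain unassigned
aggr status -s
# Show the full RAID layout per aggregate, including parity and spare disks
sysconfig -r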

Another thing to consider is to not mix disk types.  You'll have to confirm for me, but I'm thinking your 450 GB disks are FC and your 1TB disks are SATA.  You probably do NOT want to mix them in the same aggregate.  To be honest, ONTAP may not even let you.  You should create one aggr with FC disks and another with SATA.

If you are using VMware, make sure you slice up your volumes such that the same types of OSs are on the same volumes, then leverage A-SIS.  In my environment, I see some pretty impressive dedupe numbers when I do this, and the performance penalty is unnoticeable in our environment.  You will want to test on your own to be sure, but I think you will be pleasantly surprised by the space-savings numbers.
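
As a rough example (7-mode commands, and the volume name is made up), enabling A-SIS on a volume of similar guest OSs and checking the savings looks something like this:

# Enable dedupe on the datastore volume holding, say, the Windows guests
sis on /vol/vm_win_os
# Dedupe the blocks that are already in the volume, not just new writes
sis start -s /vol/vm_win_os
# Check progress, then see how much space you got back
sis status /vol/vm_win_os
df -s vm_win_os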

SnapVault is a backup solution that leverages volume/qtree snapshots to store versions of your volumes over time.  Your first "backup" takes an entire copy of your source volume and stores it on your SV secondary volume.  Then, subsequent snapshots only transfer the changed blocks to the secondary.  You can restore data from any point in time at which you took a snapshot.  You don't have to "copy" the snapshots anywhere before you restore data from them.  You will probably create a FlexClone (a writeable snapshot-based volume clone) that contains the data as of the snapshot, mount it to a server, and pull out the data you want.  You may want to consider using your fast FC disks for your Production/Test/Dev aggr and your slower SATA disks for your SnapVault secondary storage.
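
As a sketch of the restore side (again 7-mode syntax from memory, with made-up volume and snapshot names, and FlexClone needs its licence): you clone the secondary volume at the snapshot you want and pull the data out of the clone:

# On the SnapVault secondary: see which snapshots you can restore from
snap list sv_prod_os
# Create a writeable FlexClone backed by that snapshot (no extra space reserved up front)
vol clone create prod_os_restore -s none -b sv_prod_os sv_nightly.0
# Export or mount the clone, copy out what you need, then get rid of it
vol offline prod_os_restore
vol destroy prod_os_restore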

As for your volume layout, you are correct in that you manage SnapVault snapshots at the volume level.  What you may want to consider, though, is that you are also correct in that dedupe is at the volume level.  You may gain more from putting the Test/Dev and Production OSs in one volume and leveraging dedupe than you would save by keeping them separate and snapping them less often.  You gain no performance benefit by separating them into different volumes in the same aggregate, so it's worth thinking about.

Hope that helps!

ITINFSERV

Thanks for your reply

In my order I see

FAS3240AIBBASER6, FAS3240, HA System with Controller

and

SW3240ACOMPBNDLC, SW, Complete BNDL,3240A,C

You mention that the controller will come with a three-disk aggr0. Will the disks be inside the controller, or will I be losing 3 of my 450GB disks per head, only leaving me with 18 X 450GB disks?

I had planned on having ALL production data (incl dev/test) on the fast SAS drives. The 1TB SATA will be used exclusively as a SnapVault. The SnapVault will be replicated to the DR site where there is more SATA

As for the volume layout: you confirm that de-duplication and snapshot schedules both happen at the volume level. In my example:

/aggr0/FlexVol1(Operating Systems)/Production <- Critical SnapShot Schedule

Could I not apply de-duplication at the FlexVol1 level and the snapshot schedules a level further down, at Production/Development? Or is nesting volumes not allowed? Or alternatively, can snapshot schedules be applied to directories rather than volumes?

bsti

I believe the 3-disk root aggr will come out of the disks you listed in the order, but I'd double-check with your sales people.  I know the 2040 has the capacity for internal disks, but I'm not sure about the 3240.  I'd definitely verify that, but my guess is you don't have any.

You can only snapshot at the volume level.  However, SnapVault is handled at the qtree level (a logical unit underneath the volume level).  You can create different transfer schedules in SnapVault at the qtree level.  Your layout would be one volume, with a qtree underneath the volume for each Prod/Test/Dev function you wanted.  It's confusing, but each time you take a backup of a qtree, you're actually taking a snapshot of the whole volume.  From what I'm reading though, I think you are more concerned about the scheduling granularity.

I think you'd end up with something like this:

/aggr1/FlexVol1(Operating Systems)/Production_Qtree 

/aggr1/FlexVol1(Operating Systems)/Development_Qtree

...
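
Roughly, the SnapVault side of that ends up looking like the below (7-mode commands from memory, with made-up filer/volume names, so double-check the syntax against the docs):

# On the primary filer: allow the secondary to pull from it
options snapvault.enable on
options snapvault.access host=secondary_filer

# On the secondary filer: baseline each primary qtree into the SnapVault volume
snapvault start -S primary_filer:/vol/FlexVol1/Production_Qtree /vol/sv_flexvol1/Production_Qtree
snapvault start -S primary_filer:/vol/FlexVol1/Development_Qtree /vol/sv_flexvol1/Development_Qtree

# On the primary: create the named snapshot nightly at 23:00 and keep 2 copies locally
snapvault snap sched FlexVol1 sv_nightly 2@23
# On the secondary: -x makes it pull the changed blocks from the primary; keep 60 copies
snapvault snap sched -x sv_flexvol1 sv_nightly 60@23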

ITINFSERV

thanks again for your response

Are there any considerations I should be aware of when dealing with qtrees? From what I have read so far, qtrees do not have any limits on the number of files or space used, so this should be completely transparent for me from a management point of view, but give me the ability to schedule snapshots to the SnapVault at a qtree level.

Do you have any experience with SnapMirror? And do you have any idea what SnapVault "looks" like? Will I simply tell SnapVault to take snapshots of my qtrees and then tell SnapMirror to mirror my SnapVault?

ITINFSERV

Sorry, another question. Is there some way of creating a logical level of separation between the aggregate and the FlexVol?

for example if I wanted to have

/aggr2/NFS/FlexVol1/Qtree1

/aggr2/NFS/FlexVol2/Qtree1

/aggr2/CIFS/FlexVol1/Qtree1

Is that possible? If so what are "NFS" and "CIFS" called?

And could I have /aggr2/NFS/VMWARE/FlexVol1/Qtree1 ?

bsti

I don't use qtrees on a day-to-day basis, so I'm not 100% familiar with all of their caveats, but my understanding is that they are just a logical level of management underneath a volume.  I'm not aware of any limitations, etc. that are specific to qtrees.

SnapVault has a GUI if you use Protection Manager (part of Operations Manager, now OnCommand).  I think it's free.  You can set up your source and destination aggregates, monitor space usage, and set up SV relationships and replication through Protection Manager.  You can also script all of that via PowerShell or SSH.  There are several ways to do it.  Protection Manager is probably the most full-featured option and requires the least work to implement.  In your case, you would set up a SV relationship between your primary (source) and secondary, then set up a SM relationship to mirror it to your secondary site.  This would all be set up as a job in Protection Manager.  It takes some learning to get used to, but it's not too bad once you get into it.
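
If you end up doing it by hand instead of through Protection Manager, the bare-bones CLI version looks something like this (names invented, 7-mode syntax from memory):

# SnapVault from the primary filer into the SATA volume (run on the SnapVault secondary)
snapvault start -S prod_filer:/vol/vm_os/prod_qtree /vol/sv_vm_os/prod_qtree

# Then volume SnapMirror the SnapVault destination volume to the DR filer
# (run on the DR filer; the destination volume must exist and be restricted first)
vol restrict sv_vm_os_mirror
snapmirror initialize -S sv_filer:sv_vm_os dr_filer:sv_vm_os_mirror
# Ongoing mirror updates are then scheduled in /etc/snapmirror.conf on the DR filer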

I'm very familiar with SnapMirror.  We live and die on it here.

CIFS and NFS are just protocols you use to access your data.  They aren't storage per se, so you really can't use them as a logical separation piece.  When you create your LUN/Qtrees, you would specify FCP, NFS or CIFS.  So your diagram would look more like this:

/aggr2/FlexVol1/NFS_Qtree1

/aggr2/FlexVol1/CIFS_Qtree1

/aggr2/FlexVol2/NFS_Qtree1
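
For example, creating and sharing those qtrees would go something like this (share names and host names are just placeholders):

# Create the qtrees inside the volume
qtree create /vol/FlexVol1/NFS_Qtree1
qtree create /vol/FlexVol1/CIFS_Qtree1

# Share the CIFS qtree out to Windows clients
cifs shares -add docmgmt /vol/FlexVol1/CIFS_Qtree1

# Export the NFS qtree to the ESX hosts (the -p option persists it to /etc/exports)
exportfs -p rw=esx_host1:esx_host2,root=esx_host1:esx_host2 /vol/FlexVol1/NFS_Qtree1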

Hope that answers your questions.

stuengland

Sorry, I am still trying to decide whether I wish to go down the /aggr1/operating system/production route or the /aggr1/production/operating system route.

Another thing I have to take into consideration is that VMware/NetApp best practice states that every datastore should be a separate FlexVol.

A question I have is: at what level can you create an NFS share? Can you share a FlexVol? Or can you only share a qtree or a subfolder within a FlexVol?

radek_kubka

at what level can you create an NFS share? Can you share a FlexVol? Or can you only share a qtree or a subfolder within a FlexVol?

You can do all combinations:

http://now.netapp.com/NOW/knowledge/docs/ontap/rel736/html/ontap/cmdref/man1/na_exportfs.1.htm
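
For example (host name made up):

# Export a whole FlexVol
exportfs -p rw=esx_host1 /vol/FlexVol1
# Export just a qtree
exportfs -p rw=esx_host1 /vol/FlexVol1/Production_Qtree
# Export an ordinary subdirectory inside the volume
exportfs -p rw=esx_host1 /vol/FlexVol1/some_folder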
