New 3140

DANTHEMAN

I've just received two 3140s - one for my primary site, one for my DR site - and they are identical.

So I have a few questions before they come and do the initial configuration... I have another call planned to discuss some of these items, but I wanted to get some input first so I sound like I know what I'm doing and can ask the right questions.

Each came with two disk shelves - one full of 24 600GB 15K SAS drives and one full of 24 2TB SATA drives.  We are heavy on the NAS side but far from high-performance.  The amount of data I have to work with is a little disproportionate to the number of users.

I'm coming off an EMC CX4-240 with FC and a Celerra gateway for NAS functionality, and am trying my best to translate what I know into NetApp terms.  The 3140s have two controllers and 256GB PAM II cards.  I have a 10Gbit network infrastructure, but the older Celerra gateway I have only does 1Gbit x 4.  When you include the passive data mover, these suckers take up a ton of copper.  Looking forward to removing THAT...

Right now we are still running Exchange 2003 (physical MSCS cluster); it's really the only application that is performance-dependent, and we only have about 100 users.  The databases add up to less than 200GB.  On the CLARiiON I'm only using a total of 8 disks to support the IO and it works without a hitch.  Supposedly Exchange 2010 reduces the IO required, but I'm not there yet.  Everything else is virtual (3x HP BL495c G5s) except the backup server, which uses a little bit of space (700GB) for Exchange B2D2T with Symantec BE 2010.  I already have NDMP licenses for the Celerra, so I can use those for the NetApp.

So here are my questions...

An aggregate can only run on a single controller, so I'll want to have at least two (one for each controller).  The size limit appears to be 16TB.  On the first controller I was planning on running all 24 600GB drives in a single aggregate for our primary ESXi environment and Exchange systems.  How would one configure these disks as a general rule?  Using RAID-DP?  Assuming that, plus one hot spare, I'm left with an odd number of disks (23).  Is one hot spare enough?

I've been reading up on using NFS instead of FC.  I already have an entire FC environment with (I think) plenty of ports.  The major point I've seen in favor of NFS is that dedupe will "release" the space to ESX, but not in a VMFS/FC environment.  If block-level dedupe runs against VMFS volumes, where does the free space show up - or does it show up at all?  Would you thin provision the VMFS LUNs and reclaim the space returned by the dedupe process?  I have 8Gbit FC running, so 10Gbit Ethernet probably wouldn't make much difference in transport performance.

I was planning on running CIFS on the other controller with the 2TB drives.  If I'm reading things correctly, there is a 16TB limit per aggregate, so I'll need multiple aggregates, right?  Are hot spares assigned to an aggregate?  I was also planning on running some test/low-IO VMs on SATA (I'm doing that now on the CX4 with R4+1 groups without any problems).  Nearly all of the NAS data on our current systems lives on SATA drives and performs adequately.  It seems to me that with 2TB drives you hit the 16TB limit with just 8 disks... two 10-disk RAID-DP sets, a hot spare, and I've got 3 disks that I'm not sure what to do with.

We currently run two distinct AD domains that don't interact with each other.  I have 3 CIFS servers in one domain, and the data is grouped in "failover" units, i.e., I can move a business function to the remote site without having to move the whole thing.  The other domain has a single CIFS server with 3TB of data that is 99% of the time read-only.  Is it possible to replicate this setup, or is it time to rethink things during the migration?

We only use one CIFS server "by name" and everything else is presented via DFS.  On that one I'll have to do a hard cut on the name - there are only about 5 shares referenced by name - but how easy is it to rename the CIFS server?

Another question about dedupe... is this at the aggregate level?  One of the issues with the Celerra (of many) is that dedupe does not cross filesystem boundaries.  The Celerra has been able to dedupe/compress about 20% of the data.  I know it isn't apples to apples, but in terms of raw storage for comparison I'm doubling in size, and on the original system I have about 5TB that isn't even allocated to anything.

That's probably enough questions for now.

Dan

3 REPLIES

rkaramchedu1

An aggregate can only run on a single controller, so I'll want to have at least two (one for each controller).  The size limit appears to be 16TB.  On the first controller I was planning on running all 24 600GB drives in a single aggregate for our primary ESXi environment and Exchange systems.  How would one configure these disks as a general rule?  Using RAID-DP?  Assuming that, plus one hot spare, I'm left with an odd number of disks (23).  Is one hot spare enough?

Look at the System Configuration Guide for the 3140. Depending on the ONTAP version you want to run, the aggr/vol limits are different. If you go with 8.0.2, you can have 64-bit aggregates and make them larger - 50TB, IIRC. The link to the SysConfig Guide is http://now.netapp.com/NOW/knowledge/docs/hardware/NetApp/syscfg/scdot802/index.htm

I wouldn't do anything other than RAID-DP.

NetApp allows you to run with 1 hot spare per controller. If you enable the "disk maintenance center", you'll need two hot spares. That allows ONTAP to proactively copy data off disks that are going suspect, so you can swap out drives before they fail hard. Some reading on these topics (and a rough command sketch after the links):

http://partners.netapp.com/go/techontap/matl/storage_resiliency.html

http://media.netapp.com/documents/tr-3437.pdf

http://www.netapp.com/us/library/technical-reports/tr-3786.html

http://now.netapp.com/NOW/knowledge/docs/bpg/ontap_plat_stor/data_avail.shtml
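For checking and enabling that, it's something like this on each controller (a sketch only - verify the exact option name and syntax against the docs for your ONTAP release):

filer1> aggr status -s                        (lists the hot spares on that controller)
filer1> options disk.maint_center.enable on   (Maintenance Center; requires two spares)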

I've been reading up on using NFS instead of FC.  I already have an entire FC environment with (I think) plenty of ports.  The major point I've seen in favor of NFS is that dedupe will "release" the space to ESX, but not in a VMFS/FC environment.  If block-level dedupe runs against VMFS volumes, where does the free space show up - or does it show up at all?  Would you thin provision the VMFS LUNs and reclaim the space returned by the dedupe process?  I have 8Gbit FC running, so 10Gbit Ethernet probably wouldn't make much difference in transport performance.

Depending on the application, you would be using either iSCSI or NFS. SME (SnapManager for Exchange) does have the ability to manage thin provisioning, etc. Not sure how much you would get out of dedupe there. More discussion is probably in the offing.
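If you do stay block for Exchange, a thin-provisioned volume/LUN looks roughly like this (names and sizes are made up, and SnapDrive/SME would normally drive this for you):

filer1> vol create exch_db -s none aggr_sas 250g         (no space guarantee on the volume)
filer1> lun create -s 200g -t windows /vol/exch_db/lun0
filer1> lun set reservation /vol/exch_db/lun0 disable    (thin-provision the LUN itself)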

I was planning on running CIFS on the other controller with the 2TB drives.  If I'm reading things correctly, there is a 16TB limit per aggregate, so I'll need multiple aggregates, right?  Are hot spares assigned to an aggregate?  I was also planning on running some test/low-IO VMs on SATA (I'm doing that now on the CX4 with R4+1 groups without any problems).  Nearly all of the NAS data on our current systems lives on SATA drives and performs adequately.  It seems to me that with 2TB drives you hit the 16TB limit with just 8 disks... two 10-disk RAID-DP sets, a hot spare, and I've got 3 disks that I'm not sure what to do with.

Again, look at the links above to get some insight and recommendations on hot spares, aggregate sizing etc.

We currently run two distinct AD domains that don't interact with each other.  I have 3 CIFS servers in one domain, and the data is grouped in "failover" units, i.e., I can move a business function to the remote site without having to move the whole thing.  The other domain has a single CIFS server with 3TB of data that is 99% of the time read-only.  Is it possible to replicate this setup, or is it time to rethink things during the migration?

Multiple domains are possible using MultiStore (vfiler functionality).
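A rough sketch, one vfiler for the second domain (the IP, names, and path are all placeholders):

filer1> vfiler create vf_domain2 -i 10.1.1.50 /vol/vf_domain2_root
filer1> vfiler run vf_domain2 cifs setup      (joins the vfiler to that domain's AD)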

We only use one CIFS server "by name" and everything else is presented via DFS.  On that one I'll have to do a hard cut on the name - there are only about 5 shares referenced by name - but how easy is it to rename the CIFS server?

If that file server is not doing anything else and the name can be taken away from it, then you can transfer that name to the NetApp system using the NetBIOS aliases feature.
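Roughly like this, with OLDFILER standing in for the old CIFS server's name (sketch only):

filer1> options cifs.netbios_aliases OLDFILER   (the filer then also answers to the old name)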

Another question about dedupe... is this at the aggregate level?  One of the issues with the Celerra (of many) is that dedupe does not cross filesystem boundaries.  The Celerra has been able to dedupe/compress about 20% of the data.  I know it isn't apples to apples, but in terms of raw storage for comparison I'm doubling in size, and on the original system I have about 5TB that isn't even allocated to anything.

Dedupe is at the volume level.
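You enable and run it per volume, e.g. (volume name made up):

filer1> sis on /vol/cifs_data         (enable dedupe on the volume)
filer1> sis start -s /vol/cifs_data   (scan existing data, not just new writes)
filer1> df -s /vol/cifs_data          (shows the space saved)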

HTH - I'll let others chime in with more info/direction.

rajeev

DANTHEMAN

Thanks for the info.  I think I just figured out why I'm having trouble finding things - I can't access NOW or any of the docs therein.  Perhaps when I get that rectified I won't have as many questions.

DANTHEMAN

Ok - been doing some reading...

Running ONTAP 8, I can have 64-bit aggregates of up to 50TB.

Using RAID-DP, the SAS drives (if I'm reading this right) can be in a single RAID group of up to 28 disks - I have 24 out of the chute. Is there a reason I would not go with a 22-disk RAID-DP RG (and have two hot spares)? This maxes out the capacity, but I'm not sure what the downside would be.
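If I'm reading the docs right, that would be something like the following (the aggregate name is just a placeholder - please correct me if the syntax is off):

filer1> aggr create aggr_sas -B 64 -t raid_dp -r 22 22

where -B 64 makes it a 64-bit aggregate, -r 22 sets the RAID group size, and the trailing 22 is the disk count, leaving the other 2 of my 24 disks as hot spares.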

For the SATA drives, again I have 24. Max RG size for RAID-DP is 20. With 2TB drives that seems a little scary on the rebuild time, and there would be four drives sitting on the bench. Would two RAID-DP groups with 11 drives each be a better way to go? I'd have two hot spares, so I can still use the "disk maintenance center" (which looks like proactive hot sparing by another name), and I'd have the same capacity as a huge 20-drive RAID group. I'm coming from an EMC background here, so I have to ask whether odd-numbered RAID groups work with RAID-DP - from what I've read it doesn't seem to be a problem.
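Again guessing at the syntax (names are placeholders):

filer1> aggr create aggr_sata -B 64 -t raid_dp -r 11 22

That should lay the 22 disks out as two 11-disk RAID-DP groups inside one aggregate and leave 2 of the 24 as spares - 18 data disks either way, same as the 20-disk RG option.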

I can't quite figure out the difference between "7-Mode HA" and "Cluster-Mode", or when I'd go one way or the other with a dual-controller 3140 (I have a second 3140 that'll be installed about 120 miles away - so not a MetroCluster). I've already read http://communities.netapp.com/docs/DOC-9270.

You mentioned I'd be using iSCSI or NFS - I already have a complete Fibre Channel infrastructure in place. What is the reason I would abandon this to go with iSCSI for block storage? I can see the point with NFS and being able to recover space via dedupe, but iSCSI isn't even on my radar.

So being new to this, here's the layering as I understand it (I've sketched the matching commands after the list)...

Disks go into raid groups

Raid groups go into aggregates

FlexVols (volumes) are created on top of aggregates

LUNs and/or file systems (NFS/CIFS shares) are created in FlexVols (volumes)

Replication, dedupe, and snapshots are at the volume level
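As I understand it, that maps to commands roughly like this (all names are made up, and I'm sure the syntax needs checking):

filer1> aggr create aggr1 -B 64 -t raid_dp -r 22 22              (disks -> RAID groups -> aggregate)
filer1> vol create vol1 aggr1 2t                                 (FlexVol on top of the aggregate)
filer1> exportfs -p rw=esx01 /vol/vol1                           (NFS export of the volume)
filer1> cifs shares -add share1 /vol/vol1                        (...or a CIFS share)
filer1> lun create -s 500g -t vmware /vol/vol1/lun0              (...or a LUN inside the FlexVol)
filer1> sis on /vol/vol1                                         (dedupe per volume)
filer1> snap sched vol1 0 2 6                                    (snapshots per volume)
filer2> snapmirror initialize -S filer1:vol1 filer2:vol1         (replication per volume, run on the destination)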

Dan
