This is for our v6210 SAN on NetApp disk; it's going to be used as a backup target. The admin requires his server to have (15) 10TB LUNs to back up the Exchange and SharePoint environments. They are stuck on using DPM software for this, but that's another story. I was going to create (3) 53TB aggrs and then carve out his LUNs. Now, do I create 15 volumes with one LUN per volume, or a 50TB volume with five 10TB LUNs inside it?
The server is Win2K8R2. We are running 8.0.2P4 on the 6210. Pros/cons either way?
So the LUNs are going to be used as a backup destination by DPM? (I think I'm reading you right but feel free to correct me...)
I think one LUN per volume is usually better. It doesn't always make sense to pool logical groups of LUNs together "too soon" (that is, before the aggregate level), especially when they're quite large to begin with.
If LUNs have to be migrated somewhere, you can do it in 10TB chunks rather than 53+ TB chunks
If you're backing these volumes up to somewhere else, your backup window will be shorter if you run a bunch of smaller volumes in parallel instead of one huge volume serially
Space management / reporting is more granular
You can more easily calculate the delta of change by examining snapshot size -- if all your LUNs are in one volume, you won't be able to tell which one is changing the most
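For what it's worth, the 1:1 layout described above might look something like this in 7-Mode CLI (all names here are hypothetical examples, and the volume is sized slightly larger than the LUN to leave snapshot headroom; the igroup is assumed to already exist):

```shell
# One volume per LUN; repeat for vol_dpm02..vol_dpm15 (hypothetical names)
vol create vol_dpm01 -s volume aggr1 11t
lun create -s 10t -t windows_2008 /vol/vol_dpm01/lun_dpm01
# Map the LUN to the DPM server's existing igroup
lun map /vol/vol_dpm01/lun_dpm01 ig_dpm_server
```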
The one benefit that comes to mind of putting multiple LUNs in a single volume is that you can reap the benefits of deduplicating data shared between the LUNs. I'm sure you'd get some duplicate data, but it may not be worth the added complexity.
Are you going to use thin provisioning? (That is to say, turn off space reservation for the LUNs, set the volume guarantee to none, and ensure that volume autogrow and snapshot autodelete are turned on?) Or just provision everything at once and "set and forget"?
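As a sketch, the thin-provisioning settings listed above map to 7-Mode commands like these (volume/LUN names and the autosize/autodelete limits are hypothetical; adjust to your sizing):

```shell
# Turn off space reservation on the LUN and the volume guarantee
lun set reservation /vol/vol_dpm01/lun_dpm01 disable
vol options vol_dpm01 guarantee none
# Let the volume grow automatically, and delete snapshots before going offline
vol autosize vol_dpm01 -m 15t -i 500g on
snap autodelete vol_dpm01 on
snap autodelete vol_dpm01 trigger volume
```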
So with volume-to-LUN 1:1 vs. many-to-one, there really is no correct answer. But in my opinion, in your case, a many-to-one setup is the way to go. I would also thin provision your volumes, and here's why:
With DPM you will have a lot of duplicate data, which is the nature of backups. Deduplication and compression will help you out tremendously here.
Having multiple volumes increases complexity, which is unnecessary for backups.
Make sure all of your LUNs are in qtrees; then you can move the data around just as if each were in its own volume. Qtree SnapMirror and SnapVault require qtrees.
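Putting each LUN in its own qtree is just an extra level in the path at create time; something like this (names hypothetical):

```shell
# One qtree per LUN inside the shared volume, so each LUN can be
# qtree-SnapMirrored or SnapVaulted independently later
qtree create /vol/vol_dpm/qt_exch01
lun create -s 10t -t windows_2008 /vol/vol_dpm/qt_exch01/lun_exch01
```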
If you are offloading to tape, you can back up via NDMP at the qtree level just as you would at the volume level.
With dedup and compression enabled you might be able to fit all 150TB (15 x 10TB LUNs) into one 53TB aggr (maybe; no guarantee or promise of any sort). If not, you can grow the aggr to 162TB (the maximum under 8.1).
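Enabling dedup and compression on the volume is a few commands; the `-C`/`-I` flags below are the 8.1 7-Mode syntax (on 8.0 you'd need the PVR first, as noted below), and the volume name is hypothetical:

```shell
# Enable deduplication on the volume
sis on /vol/vol_dpm
# Enable background (-C) and inline (-I) compression (8.1 syntax)
sis config -C true -I true /vol/vol_dpm
# Scan and process data already in the volume
sis start -s /vol/vol_dpm
```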
I would use a couple of separate volumes split by data type and recovery point. For example, all Exchange data goes in vol1 and SQL/SharePoint in vol2, because of the different backup schedules.
Compression is available for free with ONTAP 8.0, so start your PVR request as soon as you decide which way to go. I would just upgrade to 8.1.1 (soon 8.1.2): I have been running it in a 50K-user environment and it has been running flawlessly, and there is no request you have to put in to use compression with 8.1, since it is already licensed. I have also seen a significant performance increase with 8.1.1.
So as you can see, there are plenty of reasons to go one way or the other; you just have to choose what is important to you. Also make sure you install SnapDrive on your host. It will make life easier for setting up the LUNs, and it gives you the ability to run a script after your DPM backups to take a NetApp snapshot. This is better than taking an automatic snap from the controller side.
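The post-backup script can be as small as one SnapDrive CLI call; a minimal sketch, assuming SnapDrive for Windows is installed and the LUN is mounted as drive M (the drive letter and snapshot name are hypothetical):

```shell
REM Run by DPM after the backup job completes
REM Takes a host-consistent NetApp snapshot of the LUN mounted at M:
sdcli snap create -s dpm_post_backup -D M
```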
You may want to engage NetApp support in designing this solution. We just spent 3 weeks of absolute chaos with NetApp support after attempting to protect data with DPM: snapshot volumes weren't auto-clearing, our entire SAN filled up, and volumes went offline. Just today we were told that the cause is that we have multiple LUNs on a single volume, and that NetApp considers this an "invalid" config. Based on our support experience I'm not 100% convinced he is right, but so far he sounds like he knows more about the possible cause of our issues than others have.
All in all, though, our support from NetApp around this DPM issue has been the worst support we've ever experienced. We were told at least once that NetApp has never done testing with DPM and does not claim that DPM will work to protect data stored on a NetApp. So you may want to reconsider that solution unless you have rock-star MS support to help take care of you, or have some other options you can entertain.
I'm sorry to hear about your experience, Micah. We actually have a lot of support around DPM. Here is a TR that goes into detail about designing a DPM storage solution using NetApp storage as the back end: http://www.netapp.com/us/media/tr-3900.pdf