We have a customer with a dual-controller FAS 2040 and a disk shelf with 24 SAS disks of 450 GB each. When we tried to set up that storage, we got confused about aggregate creation. When we opened up the storage, each controller had an aggr0 with a root volume. Our customer's requirement is this: they want as much usable disk space as they can get, without any loss of disk protection.
So we thought it would be better to have a single aggregate (aggr0) holding the root volume plus the other volumes, built as RAID-DP with 1 or 2 spare disks.
But since we have 2 controllers, is it even possible to have a single aggregate, and if we create it like that, won't we end up in an active/passive storage situation?
Another solution was to have one aggregate on each controller (aggr0 and aggr1) with RAID-DP and 1 spare disk each, but this solution costs us more disks than we thought.
So we are a little confused by all this. Another question relates to the root volume: as far as I know, it is considered better practice to have a dedicated root aggregate and root volume. In our scenarios above, the root volume sits in the production aggregate. What do you advise us to do about these questions?
I also want to mention that the FAS2040 will serve 4 or 5 TB over CIFS in an Active Directory domain, and the rest will be configured as LUNs for VMware clustered servers.
As far as I understand, you need the maximum from your storage in terms of both reliability and space.
What you have is a FAS2040 with a DS4243 (24 drives @ 450 GB). First, you should ideally divide the disk drives evenly between the two controllers, i.e. each controller gets 12 of the 450 GB drives.
Then you can have one aggregate on each controller (name it aggr0). This will consume 11 drives (9 data disks + 2 parity drives). The remaining drive (out of the 12) will act as a spare for the system.
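On 7-mode, this layout can be sketched from the CLI. Since each controller typically already ships with an aggr0 holding the root volume, you grow it rather than create a new one; the disk count added below is illustrative, so check the current layout first:

```shell
# Illustrative 7-mode commands -- run on each controller.
aggr status -r aggr0   # current RAID groups, raid type (raid_dp) and disk count
# If aggr0 currently holds, say, 3 disks, add 8 more to reach the 11-disk layout:
aggr add aggr0 8
vol status -s          # the one remaining disk should be listed as a spare
```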
Now for the usable capacity: in each aggregate you will have 390*9 = 2700GB (usable capacity). The next step is to calculate volume-level space. You need a root volume inside the same aggregate (aggr0); name it vol0. For a FAS2040 the minimum root volume size is 40 GB on 7G. I would keep it at 80-100 GB (this covers the Data ONTAP 8 minimum space requirement for the root volume).
Hence, from both systems together the usable space you will get is 2700*2 = 5400GB. (This fits your requirement of 4-5 TB for CIFS, and the remainder can be used for VMware.)
If your customer expects the data to change and wants to use the Snapshot feature, you should include space for snapshots in your total usable capacity. For NAS volumes this is typically kept at 20%, but it may be less depending on your customer's requirements.
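If you do budget for snapshots, the reserve is set per volume. A 7-mode sketch (the volume name `vol1` is just a placeholder):

```shell
snap reserve vol1 20   # reserve 20% of vol1 for snapshots
snap reserve vol1      # display the current reserve
df /vol/vol1           # the .snapshot line shows the reserved space
```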
Now it is clear on our side: one aggregate on each controller, with the root volume inside that aggregate. But more questions come up as we continue.
In your answer you calculated the remaining space as 390*9 = 2700GB, after the disk math of sparing 1 disk per controller and building the aggregate as RAID-DP... but when I calculate it, 390 x 9 = 3510. How did you arrive at 2700 GB? Or am I missing something?
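For what it's worth, the per-controller arithmetic with the figures quoted above (12 drives, RAID-DP, one spare, ~390 GB right-sized capacity per 450 GB drive) works out like this:

```shell
# 12 drives per controller: 2 go to RAID-DP parity, 1 is held as a spare
data_disks=$((12 - 2 - 1))          # = 9 data disks
per_disk_gb=390                     # right-sized capacity of a 450 GB drive
echo $((data_disks * per_disk_gb))  # per-controller usable space in GB -> 3510
```

So 3510 GB per controller before the root volume and any snapshot reserve; one guess is that the 2700 GB figure had a snapshot reserve and root-volume space already deducted, but that is only a guess.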
Another problem on our minds is how to back up the CIFS environment to tape cartridges. Our customer is a factory running 24 hours a day in shifts, so the CIFS environment is always in use, which means users will always have open files. We thought we could use CIFS for the network shares, then map those CIFS volumes over iSCSI to a physical server and take the backup there with a backup agent. But when I looked in System Manager I could not do that (share a volume over CIFS and also have iSCSI access to that volume). What can you advise me about this?
We will not be using the Maintenance Center benefits, since we would be short of disks if we used that feature, but it is good to know about all this anyway.
The best possible way to back up is to get another FAS and use SnapVault or SnapMirror.
The next option down from there is to use NDMP. You'll need an NDMP-compatible backup program (most of the major enterprise backup tools have this). If you have a tape drive you probably already have one of those. If you don't yet have a tape drive, consider buying a low-end FAS instead and using it as your backup destination.
Both of the above types of backup can handle open files. A snapshot is created at the point where the backup is taken, which avoids problems with files changing while the backup is running.
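For the NDMP route, enabling the service on the filer side is quick in 7-mode; the backup application then connects to the filer as an NDMP client:

```shell
options ndmpd.enable on   # enable the NDMP daemon on the filer
ndmpd status              # confirm it is running and list active sessions
ndmpd version             # NDMP protocol version offered to clients
```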
It's not possible to expose a CIFS or NFS volume over iSCSI like that. Windows machines can only understand volumes formatted with FAT or NTFS, which isn't the case for a NetApp volume. An alternative would be to export an iSCSI LUN to a Windows server and use that server as your CIFS server; since a LUN is just a file inside a volume, the file backing it can even be reached over NFS/CIFS if the volume is shared. But that still wouldn't solve your problem, as the LUN will be mounted and active while you are making the backup.
By the way, remember that you have deduplication and compression available. You'll probably find that these will save you at least 25% on your disk utilization. For people doing regular MS Office type stuff over CIFS, they're not likely to notice the overhead of compression.
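Enabling deduplication on a 7-mode volume is straightforward (again, `vol1` is a placeholder name; on ONTAP 8 compression is enabled separately on top of this):

```shell
sis on /vol/vol1        # enable deduplication on the volume
sis start -s /vol/vol1  # scan and deduplicate the data already in the volume
sis status /vol/vol1    # check progress
df -s /vol/vol1         # report space saved by deduplication
```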
Check the docs for the disk.maint_center.spares_check option. By default it's on, which means there must be at least two spares. You could, of course, turn this off to economize on disks.
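Checking or changing that option from the 7-mode CLI, assuming the on/off values described above:

```shell
options disk.maint_center.spares_check       # display the current value
options disk.maint_center.spares_check off   # drop the two-spare requirement
```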
However I suspect the reason the default has been set this way is to do with what would happen if you had a drive failure. If the only spare is in maintenance center, it probably has a lot of test patterns written on it rather than being properly zeroed as a spare should be.
I know that some of the discussion around Maintenance Center has been to do with it being part of the arsenal deployed by NetApp to make SATA workable in the enterprise. But it makes sense that the algorithms would work for SAS and FC drives too.
Yeah, a dual-controller FAS2xxx in active-active mode without an external shelf isn't going to be efficient, especially with Maintenance Center. I guess the price of the scalability here is lower value on smaller deployments. That said, it still competes well with the alternatives, given the advantages of deduplication (and, on the 2040 with ONTAP 8, compression).
I am assuming that a disk simply gets failed out if it can't be processed through Maintenance Center. Most customers will surely be covered by their 4-hour/NBD replacement scheme anyway?
Either NetApp has a very peculiar definition of Next Business Day, or it is Best Effort in disguise. It is not uncommon here to receive an NBD replacement on the 5th day, even in a region covered by NBD.
Regarding the 2-spare rule: when a disk is taken out of an aggregate for probation, it has to be replaced by a spare. If you have only one, that means you won't have any left. And this is not an error that should trigger replacement; you do not know whether you will need a replacement until all the tests are finished. So you open a window in which you are (more) vulnerable to disk failure. Not something any vendor would do by default.
Worth highlighting, though, that according to the Storage Management Guide for 7.3.4:
"If the disk.maint_center.spares_check option is set to on and fewer than two spares are available, Data ONTAP does not assign the disk to the maintenance center. It simply fails the disk and designates the disk as a broken disk."
I would imagine that this would trigger an ASUP.
This implies maintenance center is chiefly helping to avoid spurious disk replacements.