2009-03-12 02:40 PM
We have a FAS250 (Single Controller) which is currently serving CIFS shares and iSCSI LUNs for our XenServer cluster. I've now replaced this box with a better filer and so would love to repurpose it for our testing\dev environment.
The current setup is: 14x300GB FC 10k disks. 2 aggregates - agg0 (7 disks RAID4) and agg1 (6 disks RAID4) and a hot spare.
I'd like to reset the whole filer and create one aggregate with 13 disks - RAID-DP and a hot spare.
1. How do I reset the filer to run setup again?
2. Would there be any problems with the new setup I've specified?
3. The FAS250 has 2 NICs, does anyone know if it supports NIC teaming or iSCSI multipathing?
2009-03-12 07:09 PM
Sounds ideal -- down the road you could even convert the 250 into a regular disk shelf.
For your questions.....
2009-03-12 07:28 PM
The answers to your questions follow.
>1. How do I reset the filer to run setup again?
There are two ways to do this.
1. Initialize all disks using the 4a option (there's a rough console sketch of both methods below)
To get the 4a option, do the following:
a) Connect to the console of the filer and halt the system using the halt command
b) Type bye at the CFE prompt to reboot
c) During boot you will see a message telling you to press Ctrl-C for the special boot menu
d) Press Ctrl-C at that point (maybe once or twice)
e) Select option 4a; this will initialize (zero fill) all the disks and create an aggregate and root flexible volume for you
f) Once the initialization is done you will be asked to run setup
2. Reset the configuration only
a) Destroy all the aggregates and add the disks to the existing aggr0 (root)
b) Then, from priv set advanced, reset the configuration (halt -c factory)
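Roughly, from the console it looks something like this (a sketch from memory; the prompts and menu wording vary a bit between Data ONTAP releases, so treat it as illustrative only):

    Method 1 (wipe everything):
    filer> halt
    CFE> bye
    ...watch the boot messages...
    Press Ctrl-C for special boot menu        # hit Ctrl-C here
    Selection (1-5)? 4a                       # zeroes every disk, builds a root aggr + flexvol
    ...zeroing 14 disks takes a while, then setup starts automatically...

    Method 2 (configuration only):
    filer> aggr offline aggr1
    filer> aggr destroy aggr1
    filer> aggr add aggr0 6                   # fold the freed disks into the root aggregate
    filer> priv set advanced
    filer*> halt -c factory

For method 2, remember you have to offline/destroy the volumes (and LUNs) sitting in aggr1 before aggr destroy will let you remove it.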
>2. Would there be any problems with the new setup I've specified?
Method 1 will wipe everything on the filer. You'll need to configure/set up everything from scratch, and you'll also need to re-install the Data ONTAP release files before FilerView will work again.
Method 2 will clean the configuration and leave the filer in factory-default mode. You can then configure the filer using the setup command. Less painful compared with method 1.
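And on the layout itself: 13 disks in a single RAID-DP aggregate plus one hot spare should be fine for a test/dev box. Purely as an illustration (how many disks option 4a puts in aggr0, and whether it comes up as RAID4 or RAID-DP, depends on the ONTAP release, so check with sysconfig -r first), growing it afterwards would look roughly like this:

    filer> sysconfig -r                             # see what aggr0 owns and what is spare
    filer> aggr options aggr0 raidtype raid_dp      # only if the root aggregate came up as RAID4
    filer> aggr add aggr0 10                        # adjust the count so aggr0 ends up with 13 disks
    filer> aggr status -r aggr0                     # confirm the layout; one disk should be left as a spare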
>3. The FAS250 has 2 NICs, does anyone know if it supports NIC teaming or iSCSI multipathing?
Please refer to the following:
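For what it's worth, on 7-mode the usual answer is a multi-mode vif for NIC teaming (the switch side has to support static trunking or LACP), and for iSCSI multipathing you normally just give each NIC its own IP and let the host-side initiator/MPIO handle the paths. A minimal sketch, assuming the two onboard ports are e0a and e0b (check ifconfig -a on your box) and a made-up address:

    filer> vif create multi vif0 -b ip e0a e0b
    filer> ifconfig vif0 192.168.10.50 netmask 255.255.255.0 up

Remember to put the vif create and ifconfig lines in /etc/rc as well, otherwise the team won't survive a reboot.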
2009-03-13 12:24 AM
Quick word of experience. If you are going to "4a" your filer, take note of the licenses first. I know you can grab them from the NOW site, but it's a lot easier to jot them down in notepad or somewhere safe first, then you can just copy them back in. But yeah, "4a" is the best way to start afresh!
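Listing and re-adding them is quick; something like this (the license code below is obviously a placeholder):

    filer> license                        # copy the whole output somewhere safe before the wipe
    filer> license add ABCDEFG            # paste each code back in once setup has finished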
2009-03-13 03:12 AM
Thank you everyone!
"I might start with 2 spare disks to get Disk Maintenance Center"
What is disk maintenance center?
I will upgrade to 7.3.1 before I do it, am on 7.2.5 at the moment.
And will I be ok just having one aggregate on the filer?
Thanks again! It's so hard to find these articles on the netapp site!
2009-03-13 03:27 AM
I think he means "Disk maintenance mode" in the special boot menu options. Hope you are aware of the "software update" command (web update).
That's a pretty easy way to do the upgrade.
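Something along these lines, assuming you stage the 7.3.1 image on a web server the filer can reach (the URL and file name here are placeholders; check the upgrade guide for the exact steps on your platform):

    filer> software update http://yourwebserver/ontap/731_image.zip
    ...fetches the image, installs it and should kick off download for you...
    filer> version -b                     # sanity-check the new image is on the boot device
    filer> reboot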
2009-03-13 12:41 PM
Great -- if you've gone to 7.2.x, going to 7.3.x will be relatively simple (knowing the "software install/update" commands is good, as that will be the only supported method at some point in the future).
And...I do indeed mean Maintenance Center (googling it made me realize that Disk isn't in the official name). It's basically a process where a filer head can take a disk offline and do the same checks that the manufacturer does when they get back a failed disk (those failed disks are usually rehabilitated and marked "refurbished" since they're actually fine).
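From memory it's driven by the disk.maint_center options, and it wants at least two spares available before it will pull a disk in for testing (hence the earlier comment about starting with 2 spares). Double-check the option names on your release as I may have them slightly off:

    filer> options disk.maint_center               # list the related options and current values
    filer> options disk.maint_center.enable on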
It's mentioned in this article.... (good overall read)
The best explanation I can find right now is actually Val Bercovici's reply on a non-NetApp blog (his whole reply is well worth reading.....this is just the part that hits on Maintenance Center).
If there’s one thing we’ve learned as a result of the massive real-world drive behavior data warehouse we’ve accumulated – it’s that there’s no simple pattern to predict when a drive will fail. But by far our most significant discovery is that drive failures are actually no longer the simple atomic and persistent occurrences they used to be a few short years ago. There are in fact many circumstances not restricted to age, environmentals (NVH), power & cooling, or even electro-mechanical behavior of drive peers within the same array, which can render a drive unusable – and eventually failed. One of the most fascinating Oscar-worthy plot-twists that we’ve uncovered as a result of our vast experience is that drives can also come back from the dead to lead very normal and productive lives! Industry-leading innovation we’ve been shipping with NetApp Maintenance Center allows a NetApp array to use algorithms derived from our aforementioned data warehouse to take intelligent proactive actions such as:
Although we’ve only been collecting statistics on the advanced Maintenance Center functionality for about a year now, our assumptions have been validated in that the vast majority of “failed” drives only exhibit isolated incidents of transitory errors and can safely remain in the array while rejoining the spares pool. It should be noted that these drives don’t get a second chance at a second life :-). Should those same drives fail again in any manner, they are marked for physical removal from the array and considered permanently failed.
2009-06-09 06:04 AM
This can be annoying on the smaller boxes, but from experience the FAS250 doesn't scale too well beyond 2TB, so you'd probably suffer across the entire system even if you could grow beyond that. It can still be a painful limitation sometimes.
<edit that bit as I realise you said you're installing 7.3.x already!>