We have a FAS250 (single controller) which is currently serving CIFS shares and iSCSI LUNs for our XenServer cluster. I've now replaced this box with a better filer, so I'd love to repurpose it for our testing/dev environment.
The current setup is: 14x300GB FC 10k disks. 2 aggregates - agg0 (7 disks RAID4) and agg1 (6 disks RAID4) and a hot spare.
I'd like to reset the whole filer and create one aggregate with 13 disks - RAID-DP and a hot spare.
1. How do I reset the filer to run setup again?
2. Would there be any problems with the new setup I've specified?
3. The FAS250 has 2 NICs, does anyone know if it supports NIC teaming or iSCSI multipathing?
Sounds ideal -- down the road you could even convert the 250 into a regular disk shelf.
For your questions.....
Console into the filer, reboot, and interrupt the boot with Ctrl-C at the right time (watch the prompts) to reach the special boot menu. From there, use the option that makes a new aggregate (destroying everything existing) with a flexible root volume. On first boot you can walk through the setup script and then expand the volume. These KBs cover it pretty well.
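From memory, the flow on a 7-Mode filer looks roughly like this (prompts and option numbers vary between ONTAP releases, so treat this as a sketch rather than an exact transcript):

```
FAS250> reboot
...
Press Ctrl-C for special boot menu
...
Selection (1-5)? 4a
# 4a zeroes all disks and builds a new root aggregate with a
# flexible root volume, then runs the setup script on first boot.
```

After setup completes, the remaining disks can be added to the root aggregate to get your 13-disk RAID-DP layout (disk count here assumes the root aggregate came up with 3 disks, leaving one spare):

```
FAS250> aggr options aggr0 raidtype raid_dp   # convert to RAID-DP if needed
FAS250> aggr add aggr0 10                     # grow to 13 disks total
```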
A quick word from experience: if you are going to "4a" your filer, take note of the licenses first. I know you can grab them from the NOW site, but it's a lot easier to jot them down in Notepad or somewhere safe first; then you can just copy them back in. But yeah, "4a" is the best way to start afresh!
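Something like this will dump the codes before the wipe and put them back afterwards (7-Mode syntax; the code shown is obviously a placeholder):

```
FAS250> license                # lists installed licenses and their codes
# ...copy the output somewhere safe before running 4a...

FAS250> license add ABCDEFG    # after the rebuild, re-enter each saved code
```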
Great -- if you've gone to 7.2.x, going to 7.3.x will be relatively simple (knowing the "software install/update" commands is good, as that will be the only supported upgrade method at some point in the future).
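For what it's worth, the software commands look roughly like this in 7-Mode (the URL and image filename here are hypothetical; check the release notes for your exact image name):

```
FAS250> software get http://webserver/735_setup_e.exe   # pull the image onto the filer
FAS250> software install 735_setup_e.exe                # unpack it into /etc/software
FAS250> download                                        # write the new boot image
FAS250> reboot
```

Newer 7-Mode releases also have `software update`, which rolls the install/download/reboot steps into one command.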
And...I do indeed mean Maintenance Center (googling it made me realize that Disk isn't in the official name). It's basically a process where a filer head can take a disk offline and do the same checks that the manufacturer does when they get back a failed disk (those failed disks are usually rehabilitated and marked "refurbished" since they're actually fine).
It's mentioned in this article.... (good overall read)
The best explanation I can find right now is actually Val Bercovici's reply on a non-NetApp blog (his whole reply is well worth reading.....this is just the part that hits on Maintenance Center).
[Drive resurrection] If there’s one thing we’ve learned as a result of the massive real-world drive behavior data warehouse we’ve accumulated – it’s that there’s no simple pattern to predict when a drive will fail. But by far our most significant discovery is that drive failures are actually no longer the simple atomic and persistent occurrences they used to be a few short years ago. There are in fact many circumstances not restricted to age, environmentals (NVH), power & cooling, or even electro-mechanical behavior of drive peers within the same array, which can render a drive unusable – and eventually failed. One of the most fascinating Oscar-worthy plot-twists that we’ve uncovered as a result of our vast experience is that drives can also come back from the dead to lead very normal and productive lives! Industry-leading innovation we’ve been shipping with NetApp Maintenance Center allows a NetApp array to use algorithms derived from our aforementioned data warehouse to take intelligent proactive actions such as:
Predict which drive(s) are likely to fail (using advanced algorithms based on our vast data warehouse).
Copy readable data directly from the failing spindle onto a global hot spare without parity reconstruct overhead.
Use RAID-DP parity to calculate the remaining subset of unreadable data (usually a very small percentage of the overall drive).
Take the suspected “failed” drive offline (while physically maintaining it in the array) and probe said drive with low-level diagnostics to determine whether the failure was transitory or truly and permanently fatal.
Return fixed drives which exhibited only single-instances of transitory errors back to the global hot spare pool.
Although we’ve only been collecting statistics on the advanced Maintenance Center functionality for about a year now, our assumptions have been validated in that the vast majority of “failed” drives only exhibit isolated incidents of transitory errors and can safely remain in the array while rejoining the spares pool. It should be noted that these drives don’t get a second chance at a second life :-). Should those same drives fail again in any manner, they are marked for physical removal from the array and considered permanently failed.
This can be annoying on the smaller boxes, but from experience the FAS250 doesn't scale too well beyond 2TB, so you'd probably suffer across the entire system even if you could grow beyond that. But it can still be a painful limitation sometimes.
<edit that bit as I realise you said you're installing 7.3.x already!>
I feel your pain....the 2020 has some similar limitations (although they're mostly resolved in later releases of ONTAP).
As Chris mentioned, the 250 just didn't have much horsepower. When it came out, 144GB drives were considered big, but it stayed around long enough to pick up the 300s (which does become an issue for aggregate size 😕 ).