
Resetting and repurposing a FAS250

adgistics

We have a FAS250 (single controller) which is currently serving CIFS shares and iSCSI LUNs for our XenServer cluster. I've now replaced this box with a better filer, so I'd love to repurpose it for our testing/dev environment.

The current setup is: 14x 300GB 10k FC disks, two aggregates - agg0 (7 disks, RAID4) and agg1 (6 disks, RAID4) - and a hot spare.

I'd like to reset the whole filer and create one aggregate with 13 disks - RAID-DP and a hot spare.
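
For reference, the layout I'm after would look something like this once the filer is reset (syntax is from memory, so treat it as a sketch; the aggregate name is illustrative, and if the reset leaves a root aggregate already owning some disks you'd grow it with "aggr add" instead):

  fas250> aggr create aggr0 -t raid_dp 13   # one 13-disk RAID-DP aggregate
  fas250> aggr status -s                    # the 14th disk stays as a hot spare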

1. How do I reset the filer to run setup again?

2. Would there be any problems with the new setup I've specified?

3. The FAS250 has 2 NICs; does anyone know if it supports NIC teaming or iSCSI multipathing?

Thanks!


amiller_1

Sounds ideal -- down the road you could even convert the 250 into a regular disk shelf.

For your questions.....

  1. Console into the filer, reboot, break into the special boot menu via Ctrl-C at the right time (watch the prompts), and then use the option to make a new aggregate (destroying everything existing) with a flexible root volume. On first boot you can walk through the setup script and then expand the root volume (see the sketch below). These KBs cover it pretty well.
  2. The setup sounds ideal -- I might start with 2 spare disks to get Disk Maintenance Center and then add the 13th disk as/if needed.
  3. Should be simple enough via the initial "setup" script given a current version of Data ONTAP.
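
A rough sketch of that post-setup root volume expansion (vol0 is the default root volume name; the size increment is just an example):

  fas250> df -h vol0          # check the root volume's current size
  fas250> vol size vol0 +20g  # grow the root volume by 20 GB
  fas250> df -h vol0          # confirm the new size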

danielpr

Hi James,

The answers to your questions follow:

>1. How do I reset the filer to run setup again?

There are two ways to do this.

1. Initialize all the disks using the 4a option (see the console sketch below)

  To get to the 4a option, do the following:

  a) Connect to the console of the filer and halt the system using the halt command

  b) Now type bye at the CFE prompt

  c) During boot you will see a prompt offering Ctrl-C for the special boot menu

  d) Now press Ctrl-C (maybe once or twice)

  e) Select the 4a option; this will initialize all the disks (zero fill) and create an aggregate and a root flexible volume for you

  f) Once the initialization is done you will be asked to run setup
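
A rough sketch of that console session (prompts are paraphrased from memory, and the fas250 hostname is just a placeholder):

  fas250> halt                          # drop to the CFE firmware prompt
  CFE> bye                              # reboot the controller
  ...
  Press Ctrl-C for special boot menu
  ^C                                    # press Ctrl-C when you see the prompt
  ...
  Selection? 4a                         # zero all disks, create an aggregate + root flexvol
  ...
  (once zeroing completes, the filer boots into the setup script)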

2. Destroy the aggregates and reset the configuration (see the sketch below)

    a) Destroy all the aggregates except the root and add the freed disks to the existing aggr0 (root)

    b) Now if you want to reset the configuration, enter priv set advanced and run halt -c factory
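
A sketch of method 2 (the volume name is hypothetical and the aggregate names match the original setup - check vol status and aggr status before destroying anything):

    fas250> vol offline somevol     # offline and destroy any volumes living in aggr1 first
    fas250> vol destroy somevol     # ("somevol" is a placeholder)
    fas250> aggr offline aggr1      # an aggregate must be offline before it can be destroyed
    fas250> aggr destroy aggr1      # frees its disks back to the spares pool
    fas250> aggr add aggr0 6        # grow the root aggregate with the freed disks
    fas250> priv set advanced       # advanced privilege is needed for the factory reset
    fas250*> halt -c factory        # wipe the configuration back to factory defaults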

>2. Would there be any problems with the new setup I've specified?

    Method 1 will clean everything on the filer. You need to configure/set up everything from scratch. Re-installation of the Data ONTAP release files will also be required to get FilerView working.

    Method 2 will clean the configuration and leave the filer in factory default mode. You can then configure the filer using the setup command. Less painful compared with method 1.

>3. The FAS250 has 2 NICs; does anyone know if it supports NIC teaming or iSCSI multipathing?

    Please refer to the following:

    http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/

    http://media.netapp.com/documents/tr-3441.pdf

    http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/NetAppInteroperability_Jan2009_SANiSAN.xls
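
On the NIC teaming side, Data ONTAP 7.x does this with virtual interfaces (vifs). A minimal sketch, assuming the two NICs are named e0a and e0b (check ifconfig -a for the real names) and, for multi mode, that the switch ports are set up for link aggregation:

    fas250> vif create multi vif1 -b ip e0a e0b    # multi-mode vif with IP-based load balancing
    fas250> ifconfig vif1 192.168.1.50 netmask 255.255.255.0 up
    fas250> vif status vif1                        # verify the vif came up

If the switch can't do aggregation, vif create single vif1 e0a e0b gives active/standby failover instead. For iSCSI multipathing the usual approach is the opposite: leave the two NICs un-teamed on separate subnets and let the host's MPIO/initiator handle the pathing.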

Thanks

Daniel

chriskranz

Quick word from experience: if you are going to "4a" your filer, take note of the licenses first. I know you can grab them from the NOW site, but it's a lot easier to jot them down in Notepad or somewhere safe first, then you can just copy them back in. But yeah, "4a" is the best way to start afresh!
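
A sketch of what that looks like (the codes below are placeholders, not real license keys):

  fas250> license              # before the wipe: lists each service and its installed code
  ...
  fas250> license add ABCDEFG  # after setup: re-add each code you noted down
  fas250> license add HIJKLMN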

adgistics

Thank you everyone!

"I might start with 2 spare disks to get Disk Maintenance Center"

What is disk maintenance center?

I will upgrade to 7.3.1 before I do it; I'm on 7.2.5 at the moment.

And will I be ok just having one aggregate on the filer?

Thanks again! It's so hard to find these articles on the NetApp site!

danielpr

>Maintenance center?

I think he means "Disk maintenance mode" in the special boot menu options. Hope you are aware of the "software update" command (web update).

That's a pretty easy way to do the upgrade - see the sketch below.
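
A rough sketch, assuming the filer can reach an HTTP server hosting the 7.3.1 image (the URL and filename are placeholders):

  fas250> software get http://webserver/731_image.zip 731_image.zip  # fetch the image into /etc/software
  fas250> software list                                              # confirm the file landed
  fas250> software update 731_image.zip                              # extract, install, and run download
                                                                     # (add -r to skip the automatic reboot)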

~Daniel

adgistics

Yeah, I've done an upgrade from 7.0.4 to 7.2.5.1 before, so I should be OK. I'll just follow the instructions on the NetApp site.

amiller_1

Great -- if you've gone to 7.2.x, going to 7.3.x will be relatively simple (knowing the "software install/update" commands is good, as that will be the only supported method at some point in the future).

And...I do indeed mean Maintenance Center (googling it made me realize that Disk isn't in the official name). It's basically a process where a filer head can take a disk offline and do the same checks that the manufacturer does when they get back a failed disk (those failed disks are usually rehabilitated and marked "refurbished" since they're actually fine).

It's mentioned in this article.... (good overall read)

http://partners.netapp.com/go/techontap/matl/storage_resiliency.html

The best explanation I can find right now is actually Val Bercovici's reply on a non-NetApp blog (his whole reply is well worth reading.....this is just the part that hits on Maintenance Center).

[Drive resurrection]
If there’s one thing we’ve learned as a result of the massive real-world drive behavior data warehouse we’ve accumulated – it’s that there’s no simple pattern to predict when a drive will fail. But by far our most significant discovery is that drive failures are actually no longer the simple atomic and persistent occurrences they used to be a few short years ago. There are in fact many circumstances not restricted to age, environmentals (NVH), power & cooling, or even electro-mechanical behavior of drive peers within the same array, which can render a drive unusable – and eventually failed. One of the most fascinating Oscar-worthy plot-twists that we’ve uncovered as a result of our vast experience is that drives can also come back from the dead to lead very normal and productive lives! Industry-leading innovation we’ve been shipping with NetApp Maintenance Center allows a NetApp array to use algorithms derived from our aforementioned data warehouse to take intelligent proactive actions such as:

  1. Predict which drive(s) are likely to fail (using advanced algorithms based on our vast data warehouse).
  2. Copy readable data directly from the failing spindle onto a global hot spare without parity reconstruct overhead.
  3. Use RAID-DP parity to calculate the remaining subset of unreadable data (usually a very small percentage of the overall drive).
  4. Take the suspected “failed” drive offline (while physically maintaining it in the array) and probe said drive with low-level diagnostics to determine whether the failure was transitory or truly and permanently fatal.
  5. Return fixed drives which exhibited only single-instances of transitory errors back to the global hot spare pool.

Although we’ve only been collecting statistics on the advanced Maintenance Center functionality for about a year now, our assumptions have been validated in that the vast majority of “failed” drives only exhibit isolated incidents of transitory errors and can safely remain in the array while rejoining the spares pool. It should be noted that these drives don’t get a second chance at a second life :-). Should those same drives fail again in any manner, they are marked for physical removal from the array and considered permanently failed.



http://storagemojo.com/2007/02/26/netapp-weighs-in-on-disks/
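
If you want to poke at this on a filer, here's a sketch - the option names are from my memory of the ONTAP 7.x options tree, so verify them with "options disk" on your release, and note Maintenance Center wants at least 2 hot spares available:

  fas250> aggr status -s                       # confirm at least 2 spares are available
  fas250> options disk.maint_center.enable     # check whether Maintenance Center is on
  fas250> options disk.maint_center.enable on  # turn it on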

adgistics

Halfway through the process......

grrrrr....another stupid NetApp limitation. Max 2TB per aggregate!

chriskranz

This can be annoying on the smaller boxes, but from experience the FAS250 doesn't scale too well beyond 2TB, so you'd probably suffer across the entire system even if you could grow beyond that. But it can still be a painful limitation sometimes.

(Edit: I realise you said you're installing 7.3.x already!)
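
If you do need all 13 data disks in play, one way to live within the limit is to go back to a two-aggregate layout, just with RAID-DP this time (the disk counts below mirror your original 7+6 split; the names are illustrative):

  fas250> aggr create aggrA -t raid_dp 7  # 5 data + 2 parity disks
  fas250> aggr create aggrB -t raid_dp 6  # 4 data + 2 parity disks
  fas250> aggr status -s                  # the 14th disk remains the hot spare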

amiller_1

I feel your pain....the 2020 has some similar limitations (although they're mostly resolved in later releases of ONTAP).

As Chris mentioned, the 250 just didn't have much horsepower. When it came out, 144GB drives were considered big, but it stayed around long enough to pick up the 300s (which do become an issue for aggregate size 😕).

Public