
FAS2020 Setup

it_7

Hiya,

We've just powered up our FAS2020 box and run through the initial CLI config. We've now moved to System Manager and we're having trouble configuring the storage.

We have a dual-controller model with 12x 450GB disks on RAID-DP with hot spares. If my understanding is correct, we lose 2 disks per controller for RAID-DP and 1 disk per controller for the spares. That should leave us with ~2.7TB of storage (6x 450GB). However, the default aggregate is over 250GB in size and it's telling us we can't create more as we don't have any disks available.

So what has happened to the other 2.5TB or so of storage?

Also, what is the recommended controller IP config for an active-active implementation? All on one subnet or all on separate subnets? We are putting each controller port on dedicated switches.

Thanks


radek_kubka

Hi and welcome to the Communities!

Quite likely some of your disks are still unowned. You can check this and do the assignment via the CLI disk command. The next step will then be to add disks to your aggregates.
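
For example, something along these lines in the CLI (just a sketch; the disk names are illustrative and software disk ownership, as used on the FAS2020, is assumed):

    disk show -n
    disk assign 0c.00.8 0c.00.9

The first command lists any unowned disks; the second assigns two of them to the controller you are logged in to.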

Bear in mind though that these days GUI tools enforce two hot spares per controller (e.g. to enable 'silent' replacement of drives before they fail), so if you want to have just one hot spare per head, you need to use CLI again.
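
For instance, growing the default aggregate from the CLI could look like this (a sketch; the disk name is illustrative):

    aggr status -s
    aggr add aggr0 -d 0c.00.6

"aggr status -s" shows the current spare pool, and "aggr add ... -d" moves a named spare into the aggregate, which lets you keep just one hot spare per head.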

Regards,

Radek

tirtha

Hi,

It should not happen that way. It will be helpful if you can paste the actual error message thrown by System Manager.

One thing you can validate is whether the disks have been assigned correctly to the controllers. Go to the Disks page for the controller on which you are trying to create another aggregate and check that at least 3 disks (the minimum for RAID-DP) show up in the "spare" state for that controller; the rest may be in the "partner" state.

As for the controller IP config: in order to have successful failover, each network interface on a filer must have an equivalent interface on its partner, which means it must be physically connected to the same subnet and be of the same technology. So each primary network address on a filer must have a matching partner address on an equivalent interface of the other filer.
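
As an illustration only (made-up addresses, and assuming the e0a interface name), the matching /etc/rc entries on the two heads would look something like:

    # on controller A
    ifconfig e0a 192.168.10.11 netmask 255.255.255.0 partner e0a
    # on controller B
    ifconfig e0a 192.168.10.12 netmask 255.255.255.0 partner e0a

The "partner" keyword is what allows an interface to take over its equivalent on the other head during failover.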

In the case of MultiStore (vFiler) setups only, I guess there may be an exception and special IP addresses may be necessary; I'm doubtful about the exact provision in that case.

Thanks

-Tirtha

tirtha

Saw Radek's update later.

But surely that can be the case, as Radek said: if some disks are still unowned, not all 12 disks will be listed on the Disks page in System Manager, and you have to go to the CLI to assign them.

it_7

I'm looking at System Manager now and this is what it's reporting about the disks:

Disk        State     RPM     Size        Aggregate
0c.00.0     present   15000   410.73GB    aggr0
0c.00.1     partner   15000   410.73GB
0c.00.2     present   15000   410.73GB    aggr0
0c.00.3     partner   15000   410.73GB
0c.00.4     present   15000   410.73GB    aggr0
0c.00.5     partner   15000   410.73GB
0c.00.6     spare     15000   410.73GB
0c.00.7     partner   15000   410.73GB

I cannot do anything with these disks, i.e. choose ownership etc.

Thanks again

radek_kubka

Cool. It means that only 8 of your disks are assigned, 4 per controller.

If you want to have an even number of disks per controller, then you should assign a couple of the four remaining disks to each controller (type "disk assign ?" in the CLI for more detailed instructions).

And in case it is still confusing: whenever you access either controller, you can manipulate only the disks owned by it, so you have to repeat all the tasks on both controllers.
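
As a sketch, assuming the four unowned disks turn out to be 0c.00.8 through 0c.00.11 (hypothetical names), run on each controller in turn:

    controller1> disk assign 0c.00.8 0c.00.9
    controller2> disk assign 0c.00.10 0c.00.11

"disk assign" without an owner option assigns the disks to the head you are logged in to.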

Regards,

Radek

it_7

OK, just playing with the CLI: I've assigned all unowned disks to one controller to see how it works.

I now want to assign 2 of those disks to the other controller, but it says it can't as they're owned by the partner.

tirtha

This is because you cannot swap the ownership of disks between the partners on the fly. I think you now have to reboot the controllers, go to maintenance mode and do the disk assign there.
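
Roughly like this, with illustrative disk names (the *> prompt indicates maintenance mode):

    *> disk show
    *> disk remove_ownership 0c.00.6
    *> disk remove_ownership 0c.00.7
    *> halt

After that, the freed disks can be picked up from the other controller with "disk assign".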

it_7

Oh, I have also put all disks into aggr0 on controller 1 to see how the system works in general. I cannot see a way of taking these disks out so they can then be put into aggr0 on the 2nd controller.

Is there a way to reset the unit back to factory defaults? We have no data on there as this is entirely a getting-to-grips exercise at the moment!

Thanks

tirtha

Is there a way to reset the unit back to factory defaults? We have no data on there as this is entirely a getting-to-grips exercise at the moment!

You can do that by running "priv set advanced" in the CLI and then issuing "halt -c factory". The filer will be reset to factory default settings. Be cautious: all configuration will be gone as well.
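
For reference, the sequence described above would look like this on the console (the prompt gains a * in advanced privilege):

    filer> priv set advanced
    filer*> halt -c factory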

it_7

I've run the factory reset command and now the controller is in an infinite loop... it says the partner has taken ownership and then goes back to the autoboot command.

This keeps happening? If I reset it to factory, how does it know about its partner?

tirtha

I've run the factory reset command and now the controller is in an infinite loop... it says the partner has taken ownership and then goes back to the autoboot command.

This keeps happening? If I reset it to factory, how does it know about its partner?

The filer has an HA connection to its partner, so even though it is reverted to factory defaults, it will automatically detect the partner. No worries.
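
If a node keeps looping because its partner has taken over the disks, one thing worth trying (assuming standard active-active cf behaviour; this is a guess at the cause) is a giveback from the surviving head:

    partner> cf status
    partner> cf giveback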

it_7

OK, all back up and running. One problem though: the disks are still all owned by controller 1. So despite doing a factory reset, it hasn't actually reset it as if it was in an unconfigured state?

I'll give the maint. mode a go.

tirtha

In maintenance mode you do a "disk remove_ownership" for all the disks, and then you can do a fresh disk assign if you have no data to worry about.
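
A sketch of that, repeated per disk in maintenance mode (illustrative names):

    *> disk remove_ownership 0c.00.0
    *> disk remove_ownership 0c.00.1
    (...and so on for the remaining disks, then halt)

After rebooting, run "disk assign" on each controller to split the disks the way you want.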

it_7

Yeah figured that one out.

How do I assign one as a spare for an aggregate? I've left one disk unowned ready to make it a spare.

I've looked through the (long) list of commands in the CLI but nothing is obvious!

tirtha

How do I assign one as a spare for an aggregate?

This is not clear to me. Can you say exactly what you want to do?

When you assign an unowned disk to a controller, it will automatically come up in the spare pool.
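
You can confirm this from the CLI with:

    aggr status -s

which lists the disks currently sitting in the spare pool on that controller.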

it_7

After running the remove_ownership option and re-assigning the disks to the relevant controllers, it has placed them all as data disks (with the exception of parity and dparity).

So now the aggregate is complaining that no spares are available.

aborzenkov

remove_ownership does not destroy the disk content (the aggregate). You have to explicitly destroy the aggregate once the disks are assigned to a filer. If you have a single aggregate and have assigned all available disks to it, the only option is to reformat the disks (option 4a in the special boot menu). Beware: it will wipe out all disk content and you will need to reinstall Data ONTAP again later.
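
For completeness: if it were a non-root aggregate, you could destroy it from the CLI instead (illustrative name; any volumes on it have to be destroyed first, and its contents are lost):

    aggr offline aggr1
    aggr destroy aggr1

With everything in the single root aggregate, though, the option 4a reinitialisation described above is the only way out.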

eric_barlier

I'm pretty sure you can assign disks between cluster partners on the fly. I've done it not too long ago.

Eric

it_7

OK, how do I get into maintenance mode?

emollonb2s

You have to reboot the filer and press Ctrl-C when prompted during startup; just read the prompts as it boots, and the filer will show a boot menu from which you can choose to enter maintenance mode. It's easy.
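
What you should see during startup is something like this (the exact wording varies between Data ONTAP releases):

    Press Ctrl-C for Boot Menu
    ...
    (4a) Same as option 4, but create a flexible root volume.
    (5)  Maintenance mode boot.

Choosing option 5 drops you into maintenance mode, where the prompt changes to *>.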
