It’s not possible to shrink an aggregate; you would need to evacuate the data and recreate the aggregate in the new configuration. It may be possible to create a temporary aggregate from available spares, move the current root to it, reconfigure the root partitions and move root back. This is a disruptive procedure and requires a fair bit of experience; you may consider asking your supplier whether they can implement this expansion. You will still be left with an active/passive configuration, with the new controller acting as a pure hot spare. You would need additional drives to make it actually serve data.
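For the data evacuation part, the usual tool is non-disruptive volume move. A minimal sketch, assuming clustered ONTAP and hypothetical names (svm1, vol1, aggr_new):

    cluster1::> volume move start -vserver svm1 -volume vol1 -destination-aggregate aggr_new
    cluster1::> volume move show

Repeat per volume until the old aggregate is empty, then destroy and recreate it in the new layout.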
@akiendl wrote:
Yes, this is supported. It should be in the documentation here:
http://docs.netapp.com/platstor/topic/com.netapp.doc.hw-upgrade-controller/home.html?cp=8_4
Thanks for the link, but I just wanted to make sure we are talking about the same thing.
After a head swap the new controller most likely has a different NVRAM size, so NVRAM mirroring is disabled. In fact, for a 4-node cluster ARL head swap the MetroCluster FC FAQ explicitly warns that "NVRAM mirroring between the sites must be disabled for the entire process", which logically also applies to a 2-node cluster. And it goes on to add: "Planned switchover is not possible while the upgrade is in process".
But for a non-disruptive 2-node head swap we must do a planned switchover (and switchback). It is clear that the initial switchover with the old heads is possible. The main question is: will the switchback, and the subsequent switchover in the opposite direction with one old and one new head, be possible too? If yes, is it done with standard commands, or is some diagnostic-mode magic necessary?
The documentation rather sidesteps this, unfortunately ...
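For reference, the standard negotiated switchover/switchback sequence in MetroCluster FC (cluster names hypothetical) looks like the following; whether it remains usable with one old and one new head is exactly the open question:

    cluster_A::> metrocluster switchover
    cluster_A::> metrocluster heal -phase aggregates
    cluster_A::> metrocluster heal -phase root-aggregates
    cluster_A::> metrocluster switchback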
Are there any concerns? My understanding is that it should basically be the same as a motherboard replacement, which is supported. Of course it is necessary to adjust ports, licenses etc. for the new head, but that is no different from the standard ARL head swap procedure.
I could not find any definitive answer on whether this is "officially" supported or not.
TIA
P.S. I was too fast. The obvious problem will be NVRAM mirroring. Is there no chance to do it non-disruptively?
According to HWU this is supported (up to 24 drives), but factory shipment rules may differ, or the shelf was ordered separately from the controller. Also I'm not sure what happens when internal and external drives are different. If all drives are the same, you may consider reinstalling to get full 24-drive ADP; check HWU whether your configuration is supported.
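To see how the drives are currently partitioned and whether the internal and external drives actually match, something along these lines helps (the container-type column will show shared vs. spare):

    cluster1::> storage disk show -fields container-type,type,owner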
Which is exactly why having a thick LUN on a thin volume is not recommended: it looks too confusing. Logical volume space is indeed nearly full, due to *logical* space reservation at the volume level, but only actually consumed space is ever allocated at the physical (aggregate) level. Besides, as soon as you start using deduplication and compression it becomes impossible to tell exactly “how much space is left”, even theoretically.
You do not say which ONTAP version you have; in current ONTAP, LUN space reservation is honored only on a thick-provisioned volume (i.e. when space reservation is enabled for the volume itself), which matches what you see.
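You can verify both settings directly; a quick check, with hypothetical names svm1, vol1 and lun1:

    cluster1::> volume show -vserver svm1 -volume vol1 -fields space-guarantee
    cluster1::> lun show -vserver svm1 -path /vol/vol1/lun1 -fields space-reserve

A LUN with space-reserve enabled on a volume with space-guarantee none is exactly the confusing combination discussed above.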
I failed to parse your message, sorry. With ADP, physical disk ownership is irrelevant: you are using partitions, not disks. Show your disk configuration and the exact commands you used to “reclaim space”.
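As a starting point, the output of something like the following would show what is actually going on, partition by partition and RAID group by RAID group:

    cluster1::> storage disk show -partition-ownership
    cluster1::> storage aggregate show-status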
@Tas wrote:
Of course, you need a minimum of three disks per node for aggr0.
FAS26xx uses ADP by default; there are no dedicated root disks.
Normally the SP is configured from within ONTAP; this ensures that if the motherboard is replaced, it will automatically receive the correct settings again. If you need LAN access to the SP before ONTAP is deployed, you can do it from the loader prompt with “sp config”, but please understand that it is *not* a replacement for proper configuration at the ONTAP level.
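For completeness, configuration from the ONTAP side is a one-liner; a sketch with placeholder node name and addresses:

    cluster1::> system service-processor network modify -node node1 -address-family IPv4 -enable true -dhcp none -ip-address 192.168.0.50 -netmask 255.255.255.0 -gateway 192.168.0.1
    cluster1::> system service-processor network show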
@D_BEREZENKO wrote: I do not see a reason why not to just pull disks out one by one and put them back one by one, one drive at a time? I’ve done it a few times with two disks, and it worked for me without RAID reconstruction or any other problem.
I do not see how that can work unless a) you do not have spares available and b) there is no write activity on the aggregate. Yes, NetApp can assimilate a "lost" drive under some conditions, but in this case a spare would likely kick in before you can put the drive back.
In any case, this is not how it is intended and documented to work, so it is definitely not something that anyone should recommend.
@D_BEREZENKO wrote: WAFL can perfectly work without a drive for a short period of time and without RAID reconstruction after you put it back.
Define "short". Yes, WAFL can work for some time even under write load as disk background firmware update demonstrates, but if you look carefully it still performs reconstruction, just partial, not full. And it knows it was going to remove drive temporary. Here it is surprise removal which is equivalent to drive failure.
Anyway, returning to the original post:
@Alfs29 wrote: Just offline lun, vol, aggr
You cannot offline an aggregate containing volumes in normal mode, so the usual disclaimer about diag mode applies. And in any case, offlining an aggregate means loss of access to data, at which point you can just as well simply halt the filer. Otherwise yes, physical disk location does not matter; you can rearrange disks to your heart's content while the filer is down.
It is always better to organize half an hour of planned downtime than to risk half a day of unplanned downtime. And in your case I do not even understand the purpose of this exercise.
@andris wrote:
Each cluster would monitor the bridges and switches.
Is this configured automatically during MetroCluster setup, or is it necessary to add them manually?
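In other words, do I need to run something like the following by hand (names and addresses hypothetical), or does MetroCluster setup take care of it:

    cluster_A::> storage bridge add -address 10.10.1.10 -name ATTO_1
    cluster_A::> storage switch add -address 10.10.1.20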
1. It is possible to have a “cold spare”; in the end it depends on how valuable the data is and how quickly you can notice the problem and replace the failed disk. Your RAID group will be unprotected (or less protected) until the rebuild completes; any error during this time means potential data loss.
2. The problem is not disk size (a large disk can act as a spare for a smaller one) but disk type. By default, FC-AL and ATA disks cannot be mixed in one aggregate.
3. Your unused shelf is ATA, which means it cannot be used as a spare for the small FC-AL disks. I think there was an option to allow it; you need to check the documentation. Think twice before doing it, though, if there is any serious load on these disks.
The failed aggregate is foreign, so I suspect it is a ghost resulting from using a second-hand disk, probably as a replacement. Such disks are often sold “as is”, without being zeroed first, so NetApp detects that the disk was once part of an aggregate. Anyway, if your VMware admins are not screaming loudly yet, there was probably nothing important there even if I’m wrong 🙂
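If it is indeed such a ghost and the disk is otherwise unneeded, a possible cleanup, assuming the disk shows as unowned (disk name hypothetical), is to take ownership and zero the spares:

    cluster1::> storage disk assign -disk 1.0.5 -owner node1
    cluster1::> storage disk zerospares

But do verify first that nothing actually references that aggregate.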
@jamcguire wrote:
I ultimately need to remove ownership on the disks.
According to the output you provided, you have a single unowned disk, so I'm not sure I understand your question.
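If the goal is really just to remove ownership from one disk, the command is straightforward (disk name hypothetical):

    cluster1::> storage disk removeowner -disk 1.0.23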
@G30rg3 wrote:
I have my pub, priv and ca chain in pem format and just need some documentation to point me in the right direction.
Have you tried the "security certificate install" command?
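A rough sketch of how it goes, with a hypothetical SVM name; the command then prompts you to paste the certificate, the private key and the intermediate/CA chain in PEM format:

    cluster1::> security certificate install -type server -vserver svm1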
You can check supported configurations in HWU (both cluster interconnect and ADP). Regarding the cluster interconnect, there are no technical restrictions (you can use 1GbE ports), but this may not be supported.