@TMACMD wrote:
You may not be looking at a complete picture
So where should I look? What configuration enables portmapper globally, on any interface? Arguably this is a security issue. How do I stop it?
On each node there are cluster and node management interfaces, and one SVM on each with one LIF with the "mgmt" policy and one LIF with the "data" policy. Port 111 is explicitly opened for the LIF with the "data" policy, as it should be.
Migrating the cluster management interface between nodes does not change anything.
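In case it helps, this is roughly how the per-LIF policies and the portmap service can be checked from the clustershell (ONTAP 9.4 and later; just a sketch, output trimmed):
List the firewall policy assigned to every LIF:
::> network interface show -fields firewall-policy
Show which firewall policies allow the portmap service:
::> system services firewall policy show -service portmap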
Does it work for anyone? Linux tftpd sends a reply which is rejected by ONTAP with "port unreachable", which causes tftpd to error out.
17:53:04.562521 IP cdot01.16653 > linux.tftp: 72 WRQ "cdot.8hour.2019-10-10.18_15_05.7z" octet tsize 48466041 rollover 0
17:53:04.565341 IP linux.49124 > cdot01.16653: UDP, length 28
17:53:04.565514 IP cdot01 > linux: ICMP ff-cdot01-co udp port 16653 unreachable, length 36
cdot01 is the node management interface.
vserver lif role firewall-policy
------- --------------- --------- ---------------
cdot cdot01_mgmt1 node-mgmt mgmt
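In case anyone wants to reproduce this, a rough sketch of how to capture the exchange; the interface name is a placeholder and the node name should match yours:
On the Linux tftp server:
# tcpdump -n -i eth0 'host cdot01 and (udp or icmp)'
On ONTAP, the effective firewall rules on the node can be inspected from the systemshell (diag privilege):
::> set -privilege diag
::*> systemshell -node <node> -command "sudo ipfw list"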
Two-node cluster recently installed; it came with 9.5P1 and was later updated to 9.5P6. Starting with 9.4, portmapper (port 111) is normally blocked by the mgmt firewall policy. To my surprise I found that on one node port 111 is globally allowed, while on the other node it is only allowed on LIFs with the data firewall policy:
ff-cdot01% sudo ipfw list | grep 111
00001 allow log ip from any to any dst-port 111 in
00001 allow log ip from any 111 to any out
00105 allow log ip4 from any to 10.197.2.2 dst-port 111 in
00105 allow log ip4 from any 111 10.197.2.2 to any out
ff-cdot01%
ff-cdot02% sudo ipfw list | grep 111
00102 allow log ip4 from any to 10.197.2.5 dst-port 111 in
00102 allow log ip4 from any 111 10.197.2.5 to any out
ff-cdot02%
Could somebody explain how this could happen? How can I "fix" it to match the normal default 9.5 behavior?
And more importantly, at this point I am unsure what else may differ between the two nodes. Is there any way to verify configuration consistency?
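For reference, the comparison can be done with the policies and LIF assignments from the clustershell, plus the effective ipfw rules from the systemshell (diag privilege); the node names below are the ones from the output above:
::> system services firewall policy show
::> network interface show -fields address,curr-node,firewall-policy
::> set -privilege diag
::*> systemshell -node ff-cdot01 -command "sudo ipfw list"
::*> systemshell -node ff-cdot02 -command "sudo ipfw list"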
In 8.2 you need to designate one of the data SVMs as an authentication tunnel.
https://library.netapp.com/ecmdocs/ECMP1610202/html/security/login/domain-tunnel/create.html
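A minimal sketch of the commands (clustered Data ONTAP 8.2; the SVM name is just an example):
Designate an existing data SVM as the tunnel for domain authentication of cluster administrators:
::> security login domain-tunnel create -vserver svm_cifs1
Verify:
::> security login domain-tunnel show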
It should normally be non-disruptive. But as a general recommendation, any work on production equipment should happen during a scheduled maintenance window where accidental downtime can be tolerated.
@eliebeskint wrote:
how can we fix it ?
If the disks really failed, you cannot (without data loss). It is impossible to recover a RAID group with 4 failed disks, so the only option is to recreate the aggregate and restore the data from backup.
To determine whether the disks really failed, you should open a support case to have your system analyzed.
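To get a first picture before opening the case, something like this (clustered ONTAP; the aggregate name is a placeholder) shows the broken disks and the RAID layout:
::> storage disk show -broken
::> storage aggregate show-status -aggregate <aggr_name>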
@FMS wrote:
So, in a 6 node cluster, an aggregte created on HA1 will still be available if HA1 goes down
Is this a question (there are no question marks in your post)? If so, the answer is “no”. If it is a statement, it is incorrect, sorry.
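An aggregate lives on disks attached to one HA pair; if both nodes of that pair are down, no other node in the cluster can serve it. To illustrate, you can see which node hosts each aggregate and what the failover partners are with (clustershell):
::> storage aggregate show -fields node
::> storage failover show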
One should never perform a giveback without having console access to the controller that was taken over, especially in this case. If “cf status” does not indicate the partner is “ready for giveback”, it means either the partner did not boot or there is some communication issue. Blindly performing a giveback in this state can easily result in an outage and data loss.
A console connection (either directly or via RLM/SP/BMC) is really a must when doing any maintenance on NetApp systems.
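Roughly the sequence (7-Mode commands; the SP/RLM and filer prompts are just placeholders):
Open the console of the taken-over controller through its SP/RLM and watch it boot:
SP> system console
On the surviving controller, check the takeover state, and only give back once it reports ready:
partner> cf status
partner> cf giveback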
What do you see on the console of the controller in question? What does “cf status” say on the good controller?
The only hardware-specific documentation is related to parts replacement; everything else is the same, so just use the Data ONTAP manuals for your version.
If you have spare disks on which Data ONTAP can be installed (or a separate root aggregate to overwrite), then yes, you can access your data on the other existing aggregate with all its snapshots. This is one of the great 7-Mode features that C-Mode lacks.
This is a high-risk activity and it is easy to accidentally wipe out data, so a professional service does not sound like a bad idea.
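Very rough outline of the last step only, assuming 7-Mode, that the new root is already installed on the spare disks, and that the aggregate name is a placeholder (everything before this point is where the risk is):
filer> aggr status -v
filer> aggr online <old_aggr_name>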
@alessice wrote:
"storage disk fail" and "storage disk replace" that almost doing the same things
No, they do not (at least they did not in the past). Fail actually fails the disk, and its content must be rebuilt from the other disks in the RAID group. Replace copies the content of the disk to another disk; it means less load, it goes faster, and you do not lose redundancy during the process.
Spare or failed disks can simply be pulled out (for rotational disks, leave them in the enclosure for half a minute to spin down before finally removing them). You may need to assign the replacement disk manually; that depends on your current settings.
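For reference, a sketch with the clustered ONTAP commands (disk names are just examples):
Copy a suspect disk to a spare without losing redundancy:
::> storage disk replace -disk 1.0.5 -replacement 1.0.23 -action start
Fail a disk outright and let RAID rebuild it from the other disks:
::> storage disk fail -disk 1.0.5
Check whether replacement disks will be assigned automatically:
::> storage disk option show -fields autoassign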
@paul_stejskal wrote:
It almost looks like a partial cluster shell.
Funny that nobody seems to pay any attention to the “node is not fully operational” message shown in one of the first posts.
As I understand it, this system was not in production anyway, so the simplest solution is to reinitialize it from scratch.
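Before reinitializing, it is still worth confirming what exactly is unhealthy; a quick look from the clustershell (the last command needs advanced privilege):
::> cluster show
::> set -privilege advanced
::*> cluster ring show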
@LSOTECHUSER wrote:
Is this acceptable?
No. Even if it may appear to work on the same node (which I doubt), the NFS LIF may migrate to another node at any time, which will result in a duplicated IP. Also, to avoid disruption during a node failure (or planned reboot), for iSCSI you need at least a second LIF on the partner node, which must have a different IP anyway.
So at the minimum you need 3 IPs.
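A sketch of what that minimum looks like (clustershell; all names, ports and addresses are placeholders):
One NFS LIF, which can migrate between nodes:
::> network interface create -vserver svm1 -lif svm1_nfs1 -role data -data-protocol nfs -home-node node-01 -home-port e0c -address 192.168.1.10 -netmask 255.255.255.0
One iSCSI LIF per node, because iSCSI LIFs do not migrate and hosts rely on MPIO instead:
::> network interface create -vserver svm1 -lif svm1_iscsi1 -role data -data-protocol iscsi -home-node node-01 -home-port e0d -address 192.168.1.11 -netmask 255.255.255.0
::> network interface create -vserver svm1 -lif svm1_iscsi2 -role data -data-protocol iscsi -home-node node-02 -home-port e0d -address 192.168.1.12 -netmask 255.255.255.0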
You can add the partner to the interface online; there is no need to restart the filer. If you want to verify that /etc/rc is correct, you need to schedule a maintenance window anyway.
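Something like this, assuming 7-Mode and example interface names:
Add the partner interface on the fly:
filer> ifconfig e0a partner e0b
Make sure /etc/rc has the matching "partner e0b" on the ifconfig line for e0a so it survives a reboot:
filer> rdfile /etc/rc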
@SpindleNinja wrote:
hwu needs to be corrected then. It looks like it’s showing adp for 24 drives on the 2720.
Which is correct as long as there are no internal drives. Moreover, HWU seems to be smart enough not to even show the 12-drive version for drives that do not fit into the controller enclosure.
What would be useful is a note on whether the shown configuration applies to internal or external drives.
@andris wrote:
To recap:
Entry-level FAS systems (with HDDs) will only perform ADP on the internal/embedded drives
What about entry-level systems without internal disks? Will they use whole disks for root during initialization?
Oops, sorry, missed 3.
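For what it is worth, a quick way to check whether root-data partitioning ended up being used on a given system (the aggregate name is just an example):
Partitioned (ADP) drives show up with the "shared" container type:
::> storage disk show -container-type shared
The root aggregate layout also shows whether it sits on partitions or whole disks:
::> storage aggregate show-status -aggregate aggr0_node01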