secd.conn.auth.failure:

Hi there,

We have been getting these event logs. We've been looking at the other posts but not getting anywhere.

secd.conn.auth.failure: Vserver (xxx_svm1) could not make a connection over the network to server (ip xxx.xxx.180.80, port 445) via interface xxx.x.180.249. Error: Operation timed out.

This message occurs when the Vserver cannot establish a TCP/UDP connection to or be authenticated by an outside server such as NIS, LSA, LDAP and KDC. Subsequently, some features of the storage system relying on this connection might not function correctly.

Ensure that the server being accessed is up and responding to requests. Ensure that there are no networking issues stopping the Vserver from communicating with this server. If the error reported is related to an authentication attempt, ensure that any related configurable user credentials are set correctly.

Pinging from the command line using the LIF that is reported to be the issue returned OK (is alive). Please advise before we open a case with support.

ping -vserver xxx_svm1 -lif xxx_svm1_lif06 -destination xxx.xxx.180.80
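
Note that ping only tests ICMP, while the failure above is a TCP connection to port 445 (SMB), so the target is presumably being contacted as a domain controller. The SVM's view of its domain controllers can be listed with (assuming the same vserver; output format varies by release):

nas::> vserver cifs domain discovered-servers show -vserver xxx_svm1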

Also tried this command:

nas::> vserver services name-service dns check -vserver xxx_svm1
                              Name Server
Vserver       Name Server     Status       Status Details
------------- --------------- ------------ --------------------------
xxx_svm1      xxx.xxx.180.80  up           Response time (msec): 4
xxx_svm1      xxx.xxx.180.81  up           Response time (msec): 1
2 entries were displayed.
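
Note the dns check only exercises the name servers on port 53, while the failing connection in the EMS message is to port 445 (SMB). As a quick sanity check of which external name services the SVM is configured to use at all:

nas::> vserver services name-service ns-switch show -vserver xxx_svm1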

ONTAP 9.2P1

Re: secd.conn.auth.failure:

Hi

To tell you the truth, I suspect they didn't properly fix:

https://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=1041972

Is it happening only when the system is under high load? At a specific interval? Any actual impact?

I'm on 9.1P6 on mine, and I still get similar ones when I'm hammering the system (in my case, a CIFS workload in the high 10K IOPS range generated by robocopy).
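
To check the interval, you can list the recent hits and compare their timestamps against the robocopy windows:

nas::> event log show -message-name secd.conn.auth.failure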

G

Re: secd.conn.auth.failure:

Thank you so much for the post. Exactly what you described. It's often at the same time that we have robocopy jobs running (copying data to the cloud).

No impact on work.

TT

Re: secd.conn.auth.failure:

Hello, we have the same EMS log for some vservers in our cluster. Those vservers have LIFs homed on different nodes of the cluster.

We can't see any correlation with load, but it's curious that they started when we upgraded to 9.1P10.
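
For reference, the home node vs. current node of each LIF can be compared with (vserver name is a placeholder):

nas::> network interface show -vserver <vserver_name> -fields home-node,curr-node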

Re: secd.conn.auth.failure:

Hi

This warning message can sometimes be valid.

If you want to troubleshoot further, I think it's better to open a new thread and provide a bit more info/output about your configuration.

Thx

Re: secd.conn.auth.failure:

Hello @GidonMarcus

I've replied here because I thought, "hey, it seems I have the same issue".

If further investigation is required I'll open a support case.

Thank you

Lorenzo

Re: secd.conn.auth.failure:

Hello, I had a quick chat with support; they pointed me to this bug:

https://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=1079727

Should be fixed in 9.1P11 (9.2P3 for @SVHO)
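
For anyone checking their cluster against the fixed-in list, the exact patch level can be confirmed with:

nas::> version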

Re: secd.conn.auth.failure:

FYI, we are on 9.1P13 and we still see these alerts come through, even though LIF connections are successful and we are not experiencing any problems with CIFS.
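
If the events are confirmed to be harmless noise, one option is to keep them out of notifications with an EMS event filter. A rough sketch, assuming the ONTAP 9 event filter syntax (the filter name is made up; check the event filter man pages for your release before relying on this):

nas::> event filter create -filter-name exclude-secd-noise
nas::> event filter rule add -filter-name exclude-secd-noise -type exclude -message-name secd.conn.auth.failure

Note this only hides the notifications; it doesn't address whatever is causing the timeouts.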

Re: secd.conn.auth.failure:

Hi there,

We encountered the same issue on version 9.1P8. There was no connection issue or any issue related to our CIFS.

After the update to version 9.1P3, the EMS events stopped and everything was fine.

Two days ago I changed the DNS servers (added two new ones while removing the old ones).
We still do not have any issues with the connection to the DNS servers or CIFS, but the messages are showing up again.
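
After a DNS server change it can be worth confirming what the SVM is actually configured to use (vserver name is a placeholder):

nas::> vserver services name-service dns show -vserver <vserver_name>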

I've planned an update to version 9.3P5 in a few days.

I can keep you updated on whether the EMS events keep showing up after the update or disappear.

If they disappear, the bug may still be present but only show up when there are changes in the DNS settings.

Best Regards

Re: secd.conn.auth.failure:

Hi,

Quick update, like