Name mapping failure - but from where?


Since enabling auditing on our SVMs, we have received a slew of name mapping errors (secd.nfsAuth.noNameMap).  We have intentionally not configured a default Windows user, because we want and need to track down who is actually connecting to the resource and correct the NFS/CIFS client to either

  • use the appropriate account,
  • adjust the UID on the client system,
  • create a new appropriate name mapping, or
  • remove the mount from mnttab.

In any event, configuring a default Windows user is contrary to the purpose of auditing access, and masks existing misconfigurations in our environment.


Now, we do receive email alerts whenever there is a name mapping failure.  Here is a KB detailing the alert messages we receive, the causes, and the corrective actions:
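For context, the email alerts described above can be wired up through ONTAP's EMS notification framework. The following is a minimal sketch assuming ONTAP 9; the filter name and destination address are placeholders, and exact flags may vary by release, so verify against the command reference on your system:

```
::> event filter create -filter-name namemap-fail
::> event filter rule add -filter-name namemap-fail -type include -message-name secd.nfsAuth.noNameMap
::> event notification destination create -name storage-admins -email storage-admins@example.com
::> event notification create -filter-name namemap-fail -destinations storage-admins
```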


These are quaint, in the sense that we are made aware of a problem, and in some cases we are able to resolve it.  The alerts vary in length and content depending upon where the failure actually breaks down, but essentially we've worked the problem back to a few cases where we have only a UID number presented in the alert.


This is the [redacted] text of the messages that we're working on remedying:

Message: secd.nfsAuth.noNameMap: vserver (<svm>) Cannot map UNIX name to CIFS name. Error: Get user credentials procedure failed

  [     5] Mapping an unknown UID to default windows user

  [     5] Unable to map '487'. No default Windows user defined.

  [     5] FAILURE: Name mapping for UNIX user '487' failed. No mapping found


As our *nix environment (prior to this) was rolled out in pieces and not at all centralized, the same user account on different machines can have different UID numbers: UID 487 can exist on multiple machines, and it can be a different user on each machine.  So, what we need is a means of identifying where this connection originated.  In retrospect, it seems plainly obvious that this is crucial information that has been omitted from the alert; moreover, as this information must already be known (we can configure per-host name mappings), it is presumably available somewhere, if not directly in these alerts.
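To illustrate the collision described above, here is a small self-contained sketch using two hypothetical /etc/passwd lines (the account names and GECOS fields are invented for the example; on a real client you would simply run `getent passwd 487`):

```python
# Two hypothetical /etc/passwd entries from different, un-centralized hosts.
# The same UID (487) belongs to a different account on each machine.
host_a_passwd = "svcbackup:x:487:487:Backup service:/home/svcbackup:/bin/bash"
host_b_passwd = "jdoe:x:487:100:Jane Doe:/home/jdoe:/bin/bash"

def owner_of_uid(passwd_line: str, uid: int):
    """Return the account name if this passwd line owns the given UID, else None."""
    name, _pw, line_uid, *_rest = passwd_line.split(":")
    return name if int(line_uid) == uid else None

print(owner_of_uid(host_a_passwd, 487))  # svcbackup
print(owner_of_uid(host_b_passwd, 487))  # jdoe
```

Without knowing which client the request came from, UID 487 alone cannot tell us which of these users actually touched the share.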


My question:

How can we track down the source IP of the failed name mappings, short of running a tcpdump?  Is there a level of verbosity that I can turn on?  Is there another event that I should be looking for?  Any assistance is appreciated.





You can create a security trace filter for the UNIX user ID.
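A minimal sketch of what that looks like, assuming ONTAP 9; the parameter names are from the security trace command family and may differ slightly by release, so check the command reference on your cluster before running:

```
::> vserver security trace filter create -vserver <svm> -index 1 -protocols nfs -unix-name 487 -enabled enabled -time-enabled 60

# After the next failed access, the trace results should include the client IP:
::> vserver security trace trace-result show -vserver <svm>

# Clean up when done:
::> vserver security trace filter delete -vserver <svm> -index 1
```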



Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK








That is perfect.  A ticket we opened with NetApp support yielded only the following recommendations:

  • Submit a feature enhancement request to our Account Representative to include this information in the event/alert, and/or
  • Review our secd autosupport logs in ActiveIQ and track the Client IP addresses provided there (I knew they were kept somewhere).

This is a great option: while we can't use this method proactively, we can at least create the filter after the first incident and track from there.  Thank you!