Since enabling auditing on our SVMs, we have received a slew of name-mapping errors (secd.nfsAuth.noNameMap). We have intentionally not configured a default Windows user, because we want (and need) to track down who is actually connecting to the resource and correct the offending NFS/CIFS client, whether that means we:
- use the appropriate account,
- adjust the UID on the client system,
- create a new appropriate name mapping,
- or remove the mount from mnttab.
In any event, configuring a default Windows user is contrary to the purpose of auditing access, and would mask existing misconfigurations in our environment.
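For the "adjust the UID" case, the first step on each suspect client is just a passwd lookup to see which local account (if any) owns the UID. A minimal sketch (the script is ours, not from the KB; 487 is the UID from the alerts quoted below):

```shell
#!/bin/sh
# Report which local account (if any) owns a given UID on this client.
# Default to 487, the UID from our alerts; pass another UID as $1.
uid="${1:-487}"
if entry=$(getent passwd "$uid"); then
    printf '%s owns UID %s on %s\n' "${entry%%:*}" "$uid" "$(hostname)"
else
    printf 'UID %s has no local account on %s\n' "$uid" "$(hostname)"
fi
```

Run on each client in turn; where the UID resolves to the wrong account (or to none), that client is a remediation candidate.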
We do get email alerts whenever there is a name-mapping failure. Here is a KB article detailing the alert messages we receive, the causes, and the corrective actions: https://kb.netapp.com/app/answers/answer_view/a_id/1005658/~/event-message%3A-secd.nfsauth.nonamemap-
These are quaint, in the sense that we are made aware of a problem, and in some cases we are able to resolve it. The messages vary in length and content depending upon where the lookup actually breaks down, but we have worked the backlog down to a handful of cases where all we have to go on is the UID number presented in the alert.
This is the [redacted] text of the messages that we're working on remedying:
Message: secd.nfsAuth.noNameMap: vserver (<svm>) Cannot map UNIX name to CIFS name. Error: Get user credentials procedure failed
[ 5] Mapping an unknown UID to default windows user
[ 5] Unable to map '487'. No default Windows user defined.
[ 5] FAILURE: Name mapping for UNIX user '487' failed. No mapping found
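For context, extracting the UID from these alerts in our mail handler is trivial; the problem is that the UID is all there is to extract. A sketch (the regex matches the "Unable to map '<uid>'" line quoted above; the function name is ours):

```python
import re

# Pull the unmapped UID out of a secd.nfsAuth.noNameMap alert body.
# Matches the "Unable to map '487'" line shown above; note that nothing
# else about the connection (in particular the client IP) is present
# anywhere in the message.
ALERT_UID = re.compile(r"Unable to map '(\d+)'")

def uid_from_alert(body):
    """Return the unmapped UID as a string, or None if not found."""
    m = ALERT_UID.search(body)
    return m.group(1) if m else None
```

So `uid_from_alert("[ 5] Unable to map '487'. No default Windows user defined.")` yields `'487'`, and that is the dead end described below.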
As our *nix environment (prior to this) was rolled out in pieces and was not at all centralized, the same user account on different machines can have different UID numbers: UID 487 can exist on multiple machines, and it can belong to a different user on each one. So what we need is a means of identifying where this connection originated. In retrospect, it seems plainly obvious that this is crucial information that has been omitted from the alert; moreover, since this information must already be known (we can configure per-host name mappings), it is presumably available somewhere, if not directly in these alerts.
How can we track down the source IP of the failed name mappings - short of running a tcpdump? Is there a level of verbosity that I can turn on? Is there another event that I should be looking for? Any assistance is appreciated.
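In case it helps frame an answer, the two avenues we have been eyeing are below (command names as we read them in the ONTAP 9 docs; the vserver name and filter index are placeholders, and we have not yet confirmed that either surfaces the client IP for these specific failures):

```
::> network connections active show -vserver svm1 -service nfs*
    (lists remote IPs with open NFS connections - a coarse way to
     shortlist candidate clients while the alerts are firing)

::> vserver security trace filter create -vserver svm1 -index 1
      -protocols nfs -enabled true
::> vserver security trace trace-result show -vserver svm1
    (per-access trace results reportedly include the client IP)
```

Neither is exactly "the source IP in the alert," which is what we are really after.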