2013-02-14 10:42 PM
We've got a customer who complains that one of his users can no longer reach a series of vFilers.
While searching for a possible cause, we stripped all of the user's Active Directory group memberships for a particular server. We then added him back to one AD group; in that situation it is possible to reach the vFilers.
We suspect that the Kerberos Token Size of this particular user is rather big due to extensive group nesting.
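That suspicion can be sanity-checked by estimating the token size from the user's effective group counts using the formula Microsoft publishes in KB 327825 (TokenSize = 1200 + 40d + 8s). The counts below are hypothetical and the formula is only a rough estimate:

```shell
# Rough Kerberos token size estimate per Microsoft KB 327825:
#   TokenSize = 1200 + 40d + 8s   (bytes)
#   d = domain-local groups + universal groups outside the account domain
#       (SID-history entries also count here)
#   s = global groups + universal groups inside the account domain
# Hypothetical counts for a heavily nested user:
d=120
s=400
token=$((1200 + 40 * d + 8 * s))
echo "estimated token size: ${token} bytes"
```

With deep nesting the number of effective groups (and therefore d and s) grows quickly, which is how a token can blow past a 12K limit even if the user is only a direct member of a handful of groups.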
Is there a command I can issue to see the max token size setting for a vFiler?
I read in
NetApp Knowledgebase - What is maximum Kerberos token size that Data ONTAP 7G can process?
that the maximum token size is 12K but can be raised to 64K; the problem is I can't find where to set this.
The filers in question are still running Data ONTAP 7 versions, so we are in the process of starting an upgrade project, but that will take a while.
2013-02-18 04:38 AM
Got a response from IBM support; thought I'd share:
"I'm not aware of any settings which can be done on the filer.
I assume you refer to the kb-id3012217 for the limits.
This confirms the filer can handle up to 64K in 7-mode, so any changes
from default 12k on client needs to be done on client and not on the filer"
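For anyone landing on this thread: the client-side change support refers to is usually the Windows MaxTokenSize registry value (documented by Microsoft in KB 938118). A sketch of the commands, run on the affected client from an elevated PowerShell prompt; the 65535 value is the commonly cited maximum and should be validated for your environment:

```shell
# Inspect the current Kerberos token buffer size on the client (if the value
# is absent, the OS default applies, e.g. 12000 bytes on older Windows).
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters" /v MaxTokenSize

# Raise it to 64K (65535 bytes); a reboot is required for it to take effect.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters" /v MaxTokenSize /t REG_DWORD /d 65535 /f
```

Since this is a per-client setting, rolling it out via GPO registry preferences is the usual approach rather than touching each workstation by hand.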
2013-04-25 08:13 AM
We are encountering this problem in our Cluster-Mode NetApp environment. The issue doesn't seem to be limited to tokens that are very big or very small, but rather to tokens of specific sizes. Adding or removing groups can reset the token, resolving the issue or recreating it for an individual client.
So far the only way to detect the problem is the user complaining about lack of connectivity. There are no logs that can be monitored to identify which users are encountering the issue; in fact, we had to run a Wireshark trace on the workstation to actually capture the Kerberos failure and identify the problem. This is a user-specific problem, so it follows the user around from machine to machine. No registry or GPO fix can be applied at the AD group level.
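For others chasing the same symptom, a tshark filter along these lines may surface the failure without a full manual Wireshark session. The interface name and the exact error code are assumptions (KRB_ERR_RESPONSE_TOO_BIG is Kerberos error code 52, but verify against what your own trace shows):

```shell
# Show only Kerberos KRB-ERROR responses with error code 52
# (KRB_ERR_RESPONSE_TOO_BIG) seen on the workstation's interface.
tshark -i eth0 -f "port 88" -Y "kerberos.error_code == 52"

# Or save all Kerberos traffic for offline analysis in Wireshark:
tshark -i eth0 -f "port 88" -w kerb-trace.pcapng
```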
Considering we have thousands of clients attempting to attach to this NAS, with domain membership changing multiple times a day and no way to permanently fix the problem, this bug is preventing us from moving forward with additional migrations.