Does anyone have suggestions on how best to configure multi-protocol file access for a new NetApp environment that is replacing an EMC Celerra? My client is moving about 24 TB from a Celerra to a NetApp V-Series. There is some dedicated NFS data and some dedicated CIFS data, but the majority of data is accessed by both CIFS and NFS clients.
With the Celerra, we used the "NATIVE" access-checking policy for multi-protocol access. "NATIVE" mode means that the Celerra maintains two separate sets of permissions for the same files and directories--one set for NTFS and one for Unix. NFS permissions have no effect on CIFS clients and vice versa.
We're now trying to determine the best way to configure the NetApp for this multiprotocol environment. Since ONTAP can't mimic the Celerra NATIVE behavior, our only options seem to be 1) using NTFS security style and mapping users, or 2) going with "mixed" security style. We'd like to avoid mixed security on vols/qtrees since NetApp has recommended against it in recent years and it's an administrative headache.
However, mapping users is a tremendous undertaking as well. As I see it, we're talking about looking at every Unix client and application, understanding which users need to access which NFS data, configuring Unix-to-AD mapping of some sort, and setting appropriate NTFS ACLs to accommodate that access.
We do have LDAP set up for authenticating Unix clients against AD using Vintela Authentication Services, as described in NetApp KB9264. With LDAP, we should be able to simply create the Unix users (e.g. "oracle") in AD and use that rather than relying on a complex usermap.cfg file. However, we'll still need to set file-level ACLs on all the necessary files for every user.
The only thing I see that may simplify this is if we can somehow harness the Celerra's user and group mapping files for a scripted deployment. We can download those from the Celerra management interface. Each file has entries similar to this:
S-1-5-15-28123bcf-3f2354db-6e564c54-1041:*:73215:32768:user abc7578 from domain _history_sid_range_:/usr/S-1-5-15-28123bcf-3f2354db-6e564c54-1041:/bin/sh
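For a scripted deployment, entries like the one above could be parsed into a SID-to-UID/GID table. Here's a minimal sketch; the field layout (SID, placeholder, UID, GID, comment, home, shell) is assumed from the passwd-like sample above, so verify it against your own exported files before relying on it:

```python
# Hypothetical parser for a Celerra usermapper entry. The colon-separated
# field layout is inferred from the sample export, not from documentation.
def parse_usermapper_entry(line):
    sid, _, uid, gid, comment, home, shell = line.strip().split(":")
    return {"sid": sid, "uid": int(uid), "gid": int(gid), "comment": comment}

entry = parse_usermapper_entry(
    "S-1-5-15-28123bcf-3f2354db-6e564c54-1041:*:73215:32768:"
    "user abc7578 from domain _history_sid_range_:"
    "/usr/S-1-5-15-28123bcf-3f2354db-6e564c54-1041:/bin/sh"
)
print(entry["sid"], entry["uid"], entry["gid"])
```

Running this over the whole downloaded file would give a lookup table keyed by SID, which could then feed whatever mapping mechanism ends up being used.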
If the user names are the same, mapping will work without a usermap.cfg entry. Are they the same? Mixed mode is quite a headache, and if both Windows and Unix clients update permissions, it really is not fun to manage.
"As I see it, we're talking about looking at every unix client and application, understanding what user needs to access what NFS data..."
I am not sure I quite understand. Every file on an NFS share has well-defined permissions that do not depend on who is accessing the file. These permissions can be fetched and converted alongside the files themselves. It is relatively straightforward to generate Windows ACLs from Unix mode bits once the user-mapping issue is resolved.
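To illustrate the idea, here's a rough sketch of deriving Windows-style ACEs from Unix mode bits. The three principals (owner, group, "Everyone") and the rights labels are simplifying assumptions of mine, not NetApp's or anyone's exact translation rules:

```python
# Sketch only: map each rwx triad of a Unix mode to an approximate
# NTFS-style rights label. Deny ACEs are omitted for simplicity.
def rights_for(triad):
    r, w, x = triad
    if r and w and x:
        return "FullControl"
    if r and w:
        return "Modify"
    if r and x:
        return "ReadAndExecute"
    if r:
        return "Read"
    if w:
        return "Write"
    return None  # no access -> emit no ACE

def aces_from_mode(mode, owner, group):
    # Split the mode into owner/group/other rwx triads.
    triads = []
    for shift in (6, 3, 0):
        bits = (mode >> shift) & 0o7
        triads.append((bool(bits & 4), bool(bits & 2), bool(bits & 1)))
    principals = [owner, group, "Everyone"]
    return [(p, rights_for(t)) for p, t in zip(principals, triads)
            if rights_for(t) is not None]

print(aces_from_mode(0o750, "oracle", "dba"))
# [('oracle', 'FullControl'), ('dba', 'ReadAndExecute')]
```

A real conversion would also have to decide how to resolve the owner and group names to AD accounts, which is exactly the user-mapping problem discussed above.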
The files you downloaded are from the internal usermapper service. Every user on the Celerra must have a Unix UID and GID. For CIFS users, the mapping can either be configured manually or, as a last resort, usermapper generates one automatically. So this database likely contains a random UID for every Windows user that has ever accessed your Celerra; it does not contain anything directly related to file permissions. It could be useful for finding out which Windows user owns a file in cases where the owner comes from the CIFS side.
The core problem we're trying to address is how to map Unix permission mode bits to NTFS ACLs. The Celerra's NATIVE access-checking policy keeps separate Unix and NTFS permissions, so even if there is a Unix->Windows account mapping in place and NTFS ACLs on a file structure, Unix users accessing NFS exports use Unix security instead.
On a NetApp qtree with NTFS security, on the other hand, there are no Unix permissions--only NTFS ACLs. Therefore, we need to replicate the old Unix permissions as equivalent NTFS ACLs.
For example, we have a user oracle@unix with data on Celerra accessed via NFS. The same data is also accessed by AD users via CIFS. The Celerra's NATIVE access-checking policy maintains 1) Unix mode bits for oracle's UID/GID and 2) NTFS ACLs for AD users. As I understand it, even if there is an "oracle" user in AD, the AD account is not used for file access-checking via NFS when oracle@unix accesses the data.
Here are a few additional points of clarification on our environment:
We want to avoid ONTAP's mixed security style if at all possible.
We are using NTFS security style on all Volumes/Qtrees with mixed NFS/CIFS data.
We have LDAP configured so Unix users either are, or can be, mapped to AD users.
This is the migration process we've settled on:
1. Set volume/qtree security style to Unix.
2. Rsync all data from the Celerra export to the corresponding NetApp export via NFS (rsync -a to preserve all permissions, times, etc.).
3. Set volume/qtree security style to NTFS.
4. Perform an incremental update, including ACL updates on ALL files, using a Windows migration tool over CIFS.
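Since the final CIFS pass replaces the Unix permissions with NTFS ACLs, it can help to record the original mode bits and ownership before the cutover, so they can later drive ACL generation or at least be consulted when troubleshooting. A hypothetical helper (the function name and CSV layout are my own, not part of any migration tool):

```python
# Hypothetical pre-cutover snapshot: walk the rsynced tree and record the
# Unix permissions of every entry to a CSV before flipping to NTFS security.
import csv
import os

def snapshot_permissions(root, out_csv):
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "mode", "uid", "gid"])
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                st = os.lstat(path)  # lstat: don't follow symlinks
                writer.writerow([path, oct(st.st_mode & 0o7777),
                                 st.st_uid, st.st_gid])
```

Run against the NFS mount point right after the rsync completes, e.g. `snapshot_permissions("/mnt/netapp_vol", "perms_before_ntfs.csv")`.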
Following step 2 above, Unix hosts see normal Unix permissions on the destination (NetApp) NFS export. These permissions mirror the original copy on the source as you would expect (e.g. "rwxr-x---").
Following step 3 above, there is no change in the Unix permissions.
Following step 4 above, Unix hosts see either NO permissions ("---------") or ALL permissions ("rwxrwxrwx"), depending on whether the Unix user is unmapped or mapped to an AD account. Neither of these is desirable, as we'd like the Unix user to see something more closely approximating the original Unix permissions.
So, if it's relatively straightforward to generate Windows ACLs from Unix mode bits, then that's what we need.
To wrap up this discussion, here's what we ended up doing.
1. Set up administrator-to-root mapping in usermap.cfg.
2. Set the qtree security style to 'mixed'.
3. Rsync all data from the Celerra to the NetApp (rsync -a). This created Unix file-level permissions on all files.
4. For files we knew required NTFS ACLs, we created them with a program called Secure Copy by ScriptLogic (robocopy, richcopy, etc. would also work). This worked especially well for files such as home directories, which had a clear delineation between Windows users and Unix users.
5. Created a security trace for the default Unix user (e.g. sectrace -add -unixuser pcuser -a). For each file with Unix permissions (most of them), this revealed every user who did not have a Windows-to-Unix user mapping through LDAP or usermap.cfg. Note that the user traced must be the same user set in the option wafl.default_unix_user (default is pcuser) for this to work.
6. Addressed each unmapped Windows user on a case-by-case basis. In some cases, we mapped the user to root; in others, to an existing service account such as oracle or to one created for the purpose.
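For anyone following along, the administrator-to-root mapping and the per-user mappings above live in /etc/usermap.cfg on a 7-mode filer. A hedged illustration of what our entries looked like (the "CORP" domain and the account names are placeholders, not our real environment):

```
# /etc/usermap.cfg (7-mode syntax)
# "==" maps in both directions; "=>" maps Windows to Unix only.
CORP\administrator == root
CORP\oracle == oracle
```

Entries only matter for users not already mapped via LDAP, which in our case was mostly service accounts.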
A few additional notes:
We reversed our original plan, which was to go with NTFS permissions by default and create ACLs for mapped Unix users. This was due to input from our Unix administrators, who are the most likely to respond to permissions-related service issues on the filesystems in question; there was no technical reason for the deviation.
Our LDAP environment already provided mappings between Windows and Unix users for most actual user accounts. The accounts that weren't mapped were mostly service accounts for programs to execute as. In a few cases, we turned up programs that were running under general user accounts.
The filesystems that were most complicated, in terms of Windows and Unix users accessing the same files, were mostly those used by these service accounts.