You are confusing share-level and filesystem-level permissions. To be able to access \\server\share\folder, a user must first have permission to connect to \\server\share. If (s)he is not allowed to do so, there will be an error right away and no way to access any file or folder inside this share at all. Your initial question was about share-level access, and this is controlled by the share ACL. Once the user connects to the share (assuming the necessary permissions are granted), access to individual files and folders in this share is controlled by file ACLs. For a UNIX qtree these ACLs are reduced to the standard UNIX file owner, group and mode bits. For the access check the Windows user is mapped to a UNIX user and access is verified using standard UNIX rules. If all your users are mapped to root, then every user has access to every file (on a UNIX qtree). Please read TR-3490 about multiprotocol access to NetApp.
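A minimal sketch of the two layers with 7-Mode commands (the share name projects, the volume and the group eng are hypothetical):

filer> cifs access projects "DOMAIN\eng" "Full Control"    # share-level ACL, checked once when the user connects
filer> qtree status vol1                                   # confirm the qtree's security style is "unix"
client$ ls -l /mnt/projects                                # file-level access then follows the plain owner/group/mode bits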
"Okay, I've created /etc/sshd/root/.ssh/authorized_keys and /etc/sshd/root/.ssh/authorized_keys2 directory"

It is not a directory, it is a file. How to set up public key authentication is described in detail in the System Administration Guide as well as in a couple of KBs.
I cannot reproduce it, at least in a simple test. Running the simulator in WORKGROUP mode and creating a share limited to a specific group correctly denies access to a user not in this group.

simsim> qtree status
Volume   Tree   Style   Oplocks   Status
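For reference, such a group restriction can be set up roughly like this (share, path and group names are made up for illustration):

simsim> cifs shares -add testshare /vol/vol1/qtest
simsim> cifs access testshare -g testgroup "Full Control"
simsim> cifs access -delete testshare everyone             # remove the default everyone / Full Control entry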
"For right now though when I create a cifs share of a unix qtree it isn't enforcing the share permissions at all."

Could you show the output of "cifs shares share_name", where share_name is the share for which permissions are not enforced? Is your system part of a domain?
/etc/sshd/root/.ssh is not a file - it is a directory, just like on any Unix system. You put authorized keys into /etc/sshd/root/.ssh/authorized_keys (or maybe authorized_keys2 for SSH-2 as pointed out above, I do not remember right now). Read the OpenSSH manual for how authorized_keys is used.
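A rough sketch of one way to get a key into place (assuming the filer's root volume is mounted on an admin host at /mnt/filer_root; adjust paths per the guide):

admin$ mkdir -p /mnt/filer_root/etc/sshd/root/.ssh
admin$ cat ~/.ssh/id_rsa.pub >> /mnt/filer_root/etc/sshd/root/.ssh/authorized_keys
admin$ ssh root@filer version          # should now authenticate with the key instead of a password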
There are no inherent differences between volumes; it all boils down to how you use them. Because the SAN and NAS worlds differ in space and snapshot management, it is usually not recommended to mix SAN and NAS data on the same volume. But there is nothing that technically prevents it. Indeed, it is even possible to allow access to a LUN as a normal file via NFS for purposes of backup or data mining.
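As a rough illustration (volume and LUN names are hypothetical), a LUN is just a file inside its volume, so it becomes visible once that volume is exported and mounted over NFS:

filer> lun show -v /vol/mixedvol/lun0
client$ mount filer:/vol/mixedvol /mnt/mixedvol
client$ ls -l /mnt/mixedvol/lun0       # the LUN shows up as one large file that backup or data-mining tools can read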
It is not possible to shrink an existing aggregate. You will have to back up the data, destroy the aggregate, recreate a smaller one and restore the data. There is a trick if the aggregate is RAID-DP - convert it to RAID4 - this will remove one parity disk from each RAID group and make it available. Sometimes this helps in the short term, but I would not consider it a long-term solution. Do you really have the aggregate filled with data, or is it just space allocated for flexible volumes? You can easily reduce the size of flexible volumes at any time.
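Roughly, the two operations mentioned above look like this (aggregate and volume names are made up; check current sizes first):

filer> aggr options aggr1 raidtype raid4      # frees one parity disk per RAID group
filer> vol size flexvol1                      # show the current size
filer> vol size flexvol1 -100g                # shrink the flexible volume by 100 GB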
"unless you are going for thin provisioning"

The problem here is exactly that the OP does use thin provisioning, taken to the extreme (likely without realizing it). With traditional thick provisioning NetApp would have blocked snapshot creation long before, thus preventing the out-of-space condition.
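To see which reservation settings are actually in effect (volume name is a placeholder):

filer> vol status -v myvol       # look for guarantee= and fractional_reserve= in the options list
filer> lun show -v               # shows per LUN whether Space Reservation is enabled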
If you use SnapDrive, recent versions support space reclamation on Windows – i.e. unused space on the NTFS file system is returned to NetApp to free up space on the volume. Another possibility is deduplication, which could reduce physical space consumption.
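Hedged sketches of both options (volume name and drive letter are hypothetical; syntax may differ between versions):

filer> sis on /vol/myvol
filer> sis start -s /vol/myvol              # deduplicate existing data, not just new writes
filer> df -s myvol                          # shows space saved by deduplication

C:\> sdcli spacereclaimer start -d E        # run SnapDrive space reclamation on drive E: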
You do not type commands after rsh'ing into the filer. rsh is used to execute a command which is part of the command line, not to make an interactive login: rsh filer "put your command here", e.g. "rsh filer date".
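A couple more illustrative invocations (the filer name and volume are placeholders):

host$ rsh filer version
host$ rsh filer "df -h vol0"       # quoting keeps a multi-word command together on the local shell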
I became curious and took a look at what Centrify does. The behaviour you describe appears consistent and by design. Reading the description of zones:

- A Zone can consist of any mixture of DirectControl-managed UNIX, Linux or Mac computers ...
- A single user or group ... cannot log in to computers in any Zone to which they are not a member

So a zone looks like a privilege-separation boundary, and a server (which the NetApp is in this case) can belong to one zone only. So only users in the same zone can access it.
Well … you turned off space reservation everywhere and filled the volume to the limits, which means there is nothing Data ONTAP can do to protect you from running out of space. You are solely responsible for monitoring available space and taking steps when it becomes low. Please read TR-3483, which explains in detail how space for a LUN is managed on NetApp. In short, you must ensure that the sum of the LUN size and the possible snapshot size during the retention period does not exceed the volume size. It does in your case. You have to decide what is more important for you – squeezing the last byte out of NetApp or ensuring continuous data availability. Personally I prefer the latter ☺. How full is the file system in Windows? If there is much free space you could try to run space reclamation on Windows, but there were some bugs resulting in data corruption, so I'd open a support case to verify that you do not run into them. For now the only way is to remove more snapshots, but you probably need to increase the volume size anyway.
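A made-up example of that rule (numbers purely for illustration):

  LUN size                            500 GB
  data changed during retention       ~20%, i.e. ~100 GB held in snapshots
  minimum safe volume size            500 GB + 100 GB = 600 GB

With a smaller volume and all reservations turned off, running out of space is only a matter of time.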
A volume can't be shared via iSCSI; only a LUN (which is effectively a file on a volume) can. Please show the output of these commands on the filer:

df -h vol_nfs_backup_01
df -r vol_nfs_backup_01
vol options vol_nfs_backup_01
lun show -v
I am not sure how exactly PM performs restore, but this KB could be relevant: https://kb.netapp.com/support/index?page=content&id=2012731 Or even better this one: https://kb.netapp.com/support/index?page=content&id=2012496
Clone split is a background process, so you need downtime only until the clone is created:

1. Shut down applications
2. Remove shares
3. Clone the volume
4. Recreate shares pointing to the new volumes
5. Start applications

Now you have time to clean up the new cloned volume and initiate the clone split; data continues to be available while the split is in progress. A rough command sketch follows.
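A rough 7-Mode command sketch of the clone and split steps (volume names are hypothetical):

filer> vol clone create newvol -b srcvol         # step 3: create the clone (a base snapshot is taken automatically)
filer> vol clone split start newvol              # later, once shares point at newvol
filer> vol clone split status newvol             # the split runs in the background while data stays available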
Sorry for the delay. No, it is not about multiple failures. Please understand - as soon as you lose access to one site you have no way to know what's going on there: whether it was a storage, cluster node or communication line failure; whether applications are still running; how much data had been processed. Even if the Microsoft cluster will hopefully stop the nodes in the minority, split-brain detection is not instantaneous; and today even several seconds is quite a long time, enough to process a lot of requests. So you have to assume that the described situation happens every time split brain occurs. There is no "multiple" and "single" failure. There is just failure.

"what would you like to see in the scenario where you experience multiple sequential failures?"

Honestly? Nothing. There is no way to automatically respond to a site failure without risking data corruption. So what I'd expect in this case:

- an option to enable or disable the automatic behaviour
- this option should default to disabled
- documentation should explain the possible consequences of enabling it

Look as an example at EMC Cluster Enabler, which does a similar thing for SRDF or MV. This is exactly what they do.
Oh, I did not know you can do that. It is not really documented. Thank you! (Of course it depends on whether other shares are allowed to go offline, even briefly.)
FilerView is included in all NetApp controllers. Rumors are it will disappear in 8.1 (technically, it is not a license), so it is probably the right time to start with System Manager if a GUI is required.