It is not possible using a cluster and SyncMirror. A cluster consists of exactly two nodes, and in any case a mirrored aggregate consists of exactly two plexes. What may be possible is to use SnapMirror Sync to replicate data to the third filer. In that case failover/failback will not be automatic (at least not from the NetApp side) and requires careful planning.
This really depends. If possible, it will perform an incremental resync; the base is a common aggregate snapshot present on both plexes. If that is not possible, it will complain and you will have to recreate the mirror from scratch. This is described in the Active/Active Configuration Guide.
"if the permissions on the fstab are set to "users,rw", doesn't that mean that whoever mounts the media has ownership of the files mounted?"

No. See the Linux "mount" manual page for what "users" actually means: it only allows any user to mount and unmount the filesystem; it does not change ownership of the files.

"even root cannot create files within the NFS mount point"

By default root is not privileged on an NFS filesystem; you have to explicitly grant root permissions by using the "anon" or "root=..." option when exporting.
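For illustration, something along these lines (host, volume, and mount point names are made up): the client side controls who may mount, the filer side controls whether root on a client keeps its privileges.

    # /etc/fstab on the Linux client; "users" lets any user mount/unmount, nothing more
    filer:/vol/vol1   /mnt/share   nfs   users,rw   0 0

    # /etc/exports on the filer; grant root privileges only to a designated admin host
    /vol/vol1   -sec=sys,rw=client1:client2,root=adminhost

    filer> exportfs -a     (re-export after editing /etc/exports)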
"The permission is always set to root instead of the user I use to mount the NFS share."

Linux does not fake permissions for NFS; you always see the actual file ownership and permissions stored on the filer. If the files on the filer are owned by root, they will show as owned by root no matter which user mounted the share.
Well, I would rather consider it a bug if I wanted a fractional reserve of 100% and it were silently ignored. Volume guarantee "none" means I do not want to reserve the full volume size up front. It does not mean I do not want to ensure my LUN stays writable; it is just that this space is now reserved in the parent aggregate and is therefore shared among LUNs from multiple volumes. If I truly wanted to ignore a possible aggregate space overflow, I would simply set fractional_reserve=0 in the first place. That this was not possible was a bug (or rather a misfeature) that has since been fixed.
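As a rough sketch of what I mean (the volume name is made up), assuming the fixed behaviour described above:

    filer> vol options myvol guarantee none         (do not preallocate the volume in the aggregate)
    filer> vol options myvol fractional_reserve 100
    filer> df -r myvol                              (the "reserved" column shows the overwrite reserve held for the LUN)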
Well, NetApp has many features, and sometimes combining them has unforeseen consequences. I just tested this in the simulator: as soon as A-SIS is enabled for a volume, space is reserved for a LUN in that volume according to the fractional_reserve setting. As I believe was already mentioned, to be able to set fractional_reserve=0 together with guarantee=none you need 7.3.4. I can understand why it behaves this way; if you want an official answer, open a case with support and ask them to explain this behavior.
OK, so deduplication is enabled for the volume; I think that explains it. First, dedup works by creating a temporary snapshot during the run; second, deduplication by definition cannot predict how much free space will remain, so it may be treated like a snapshot with respect to LUN reservation (anyone care to comment here?). You could set the fractional reserve to 0 to remove the reservation; it is up to you to decide whether you actually want that.
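If you decide to drop the reservation, it would be something like this (the names are made up):

    filer> sis status /vol/myvol                    (confirm dedup is enabled and when it last ran)
    filer> vol options myvol fractional_reserve 0
    filer> df -r myvol                              (the reserved space should drop accordingly)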
Yes, I was about to mention it ☺ We ran into that situation at least once. In that case the destination filer fails to establish guarantees and your volume is left with guarantee none (or, more precisely, volume(disabled)). This by itself does not cause any harm, but of course one has to be aware of it. For this reason I now prefer to set the guarantee to volume, since 7.3 made this possible.
"We continue to see 50% (yes, 50%) or more gains after some reallocate runs. Mainly heavy sequential reads, and mainly on large database files that get updated frequently..."

I have seen 200% on a backup (dump) of a volume holding a database (50 MB/s => 200 MB/s). That was a test run (dump to null); real life is limited by other factors, but it still cut the backup time in half.

"We have seen reallocate measure results over 20, so 10 must not be the max.."

The max is 32. I have seen over 20 on the system I mentioned as well.

"reallocate measure is reporting a 6 with a hotspot 19"

Could you show the exact message and where you get it? I do not remember having seen that.
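For reference, this is roughly how I run it on my side (the path is made up); the measured optimization value is between 1 and 32:

    filer> reallocate measure /vol/dbvol        (schedule a measurement of the volume)
    filer> reallocate status -v /vol/dbvol      (show the last measured optimization value)
    filer> reallocate start -f /vol/dbvol       (one-time full reallocation; see "reallocate help" for -p and the other options on your release)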
User capabilities are defined by roles. By default the group Administrators is given the "admin" role, which has the following privileges:

    Name: Administrators
    Info: Members can fully administer the filer
    Rid: 544
    Roles: admin

    Name: admin
    Allowed Capabilities: login-*,cli-*,api-*,security-*

So you could create another role and assign it to the group Administrators instead of the default role "admin"; if you do not include login-* in this role, users in the group will not be able to log into the filer using any available protocol. Actually, it is quite possible that for your purpose you can remove all capabilities from the role, because it sounds like you only need the group as a placeholder for file ACLs. Please check TR-3358 for a more in-depth description of role-based access control. This is not tested; the one thing I am unsure about is how RBAC plays together with the Windows management API. A sketch follows below.
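Untested sketch of what I mean (the role name and the single capability are placeholders; TR-3358 lists the exact capability names):

    filer> useradmin role list admin                              (see what the default role grants)
    filer> useradmin role add winacl -a api-system-get-version    (near-empty role, deliberately no login-* capability)
    filer> useradmin group modify Administrators -r winacl
    filer> useradmin group list Administrators                    (verify the new role assignment)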
You have to be careful regarding the number of snapshots. 255 is actually the number of times a block can be shared, so when you use snapshots and deduplication together this effectively limits either the deduplication ratio or the number of snapshots. I wonder what happens if the block-sharing limit is reached and you then try to create a snapshot. Hopefully I will find time to test it.
If you have an active/active configuration and do a Halt, the partner will automatically take over. If you intend to shut down both partners, use "Halt but don't get taken over" (or whatever it is called in the GUI). The takeover state is persistent, meaning that when you switch the filer(s) back on, they come up in the same takeover state. Usually shutdown is pretty fast; to be on the safe side, it is better to open the console and wait for the CFE prompt before physically powering off. And if you open the console anyway, you do not need the GUI at all: just do "halt -f".
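In other words, for a planned shutdown from the console it is simply (run on each node if you are shutting down both):

    filer> cf status      (check the failover state first)
    filer> halt -f        (halt this node without the partner taking over)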
"You can link AD groups to the filer for administration once it is joined to the domain, however for these users to be able to authenticate, they need to exist in the filer's local user database (although all authentication is done against AD)."

No, that is not correct. You can add a domain group to a local group, and any user belonging to the domain group will have whatever rights the local group has. There is no need to create local accounts for this.
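Something along these lines (the domain and group names are made up; check "useradmin help domainuser" for the exact syntax on your release):

    filer> useradmin domainuser add MYDOMAIN\storage-admins -g Administrators
    filer> useradmin domainuser list -g Administrators     (show the domain accounts mapped into the local group)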
"I don't want to give rights to fully administer shares on the filers, just be able to view and close open files on the existing shares."

Could you explain, or give a reference for, how to do this on Windows? It may give hints on how to implement the same on the filer. At this point I am not aware of any way to do it on Windows either.
OK, I see; I did not think about the additional load. As for the server-NetApp dependency: I am not thinking about switching on, but you must ensure the NetApp is not switched off as long as the servers are not yet completely shut down. I am just curious: how do you ensure this?
I never understood why anyone would want to shut down a NetApp on power failure in the first place. It raises all sorts of issues with synchronizing the NetApp shutdown with the server shutdown, and it becomes really unmanageable as the number of servers grows. Back to your question: if power was actually cut, the NetApp would simply boot automatically when power returns. The only problem is if the NetApp was shut down and is sitting at the CFE prompt while power was never switched off. But that is exactly the same problem you face with any other server connected to a UPS: the server was shut down but power returned before the UPS had actually switched off. How do you manage this with a normal server?
The limit for DS4243 is 10 shelves in a stack (240 drives). So two quad-port SAS HBAs will max out a FAS3170 (840 drives max), leaving you one slot for 10 GbE and another one for PAM II. Unless you plan to build a MetroCluster, I'd use SAS shelves exclusively.
"some people are recommending that ALUA should only be enabled if you are using the Fibre Channel protocol (but that is not our case)"

It is not a matter of recommendation. ALUA applies only to FCP, so if you do not use FCP you do not need ALUA (nor should you even be able to set it on an iSCSI igroup).
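You can check this on the filer (the igroup names are made up):

    filer> igroup show -v iscsi_hosts      (for an iSCSI igroup the ALUA field should read "No")
    filer> igroup set fc_hosts alua yes    (only meaningful for an FCP igroup)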
No, that's incorrect. If one switch fails, LUN access is retained via the partner connection. This is the same as it was on the FAS270C. You have to set the cfmode to single_image, but that is effectively the only cfmode supported today. And, BTW, FAS270C dual_fabric was never active/active ☺
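To check and, if needed, change it (the FCP service typically has to be stopped before changing the cfmode):

    filer> fcp show cfmode
    filer> fcp stop
    filer> fcp set cfmode single_image
    filer> fcp start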
"I think it is the nodev mount options."

No. The filesystem that works is mounted using NFSv3; the filesystem that does not work is mounted using NFSv4. NFSv4 mandates UTF-8 for all protocol strings, which means both server and client have to translate file names from the local character set to UTF-8. Unfortunately I could not find any clear (actually, any at all) explanation of how the vol lang option on NetApp interacts with NFSv4 (anyone from NetApp care to chime in here? Please ...), but I believe this is your problem: in your case vol lang is set to plain "en", while it should indicate that your clients are actually using UTF-8. Beware: changing the language after files have been created may result in loss of access to those files.
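To illustrate (volume and mount point names are made up), something along these lines; forcing NFSv3 on the client is a quick way to confirm the diagnosis before touching the volume language:

    filer> vol lang datavol                (show the current language, e.g. "en")
    filer> vol lang datavol en.UTF-8       (switch to the UTF-8 variant; note the warning above about existing files)

    # on the Linux client, to confirm the NFS version is the culprit:
    mount -t nfs -o vers=3 filer:/vol/datavol /mnt/data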