Hi, This has always been one of my major problems with PM. The policies are not described in terms of their ONTAP technologies. "Backup and Mirror" ... is that a local tape backup plus some sort of SnapMirror, or what? I know now that it means a local snapshot schedule plus Volume SnapMirror, but having to experiment with all of these to actually understand what they mean in terms of NetApp technology has always been a huge impediment to my adopting PM in existing operations. I know it sounds good in data management books, but it would be nice if there were clearer ties to the real world. I made the same mistake as "asundstrom" here because it is not clear how my practical snapshot/SnapMirror world fits into these "IT management" definitions. The link to the article was nice, but even there, he jumps around pretty fast in pointing out where things are and how they are set up.
Hi, TR-3864 pretty much answers the necessary questions on the NetApp side. There is really no way to get the current state of the storage system without the ability to list, for example, LUNs, licenses, volumes, host mappings, etc. It seems the concern here is what the local Windows administrator has access to. That is a personnel issue on one side and, of course, a configuration issue on the other. Limiting who can start SnapDrive is a matter of creating the correct Windows administrator groups and limiting execution of SnapDrive to those groups. The NetApp side does its job by limiting command execution to users in groups with the correct roles and capabilities. The snapdrive user on the NetApp needs a password known only to authorized admins (a personnel issue). There's only so much one can do with roles, and for that matter, technology. Limiting access from "random admins" is really beyond what NetApp can do for any organization. S.
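For reference, a restricted role on the 7-mode side is built with the useradmin commands. This is just a minimal sketch; the role, group, and user names here are made up, and the exact capability list SnapDrive needs should be taken from TR-3864 rather than from me:

    useradmin role add sd_role -a login-http-admin,api-lun-*,api-volume-*,api-snapshot-*
    useradmin group add sd_admins -r sd_role
    useradmin user add sd_user -g sd_admins

SnapDrive then gets configured to connect as sd_user, and only that password needs protecting.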
Hi, It looks like the "c" output is not yet documented in the manpage for sysstat. You might want to open up a documentation bug report with NetApp.
Hi, You are going to need to include the relevant parts of /etc/quotas. This looks like an error in your quotas file. You can find more information in the manpage or the system documentation.
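For comparison, a well-formed /etc/quotas entry looks something like this (the volume, qtree, and user names are placeholders; the columns are target, type, disk limit, files limit, threshold, soft disk, soft files):

    #target            type             disk  files  thold  sdisk  sfile
    /vol/vol1/qtree1   tree             10G   -      -      -      -
    jdoe               user@/vol/vol1   500M  -      -      -      -

After editing the file, run 'quota resize vol1' to pick up changed limits, or 'quota off vol1' followed by 'quota on vol1' for new entries.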
Hi, Unfortunately the original post got munged here and the error message is no longer visible, nor do I remember it at the moment. The setup looks OK, but I don't think you really need the "dns.update" options on. You do have a relatively high rate of errors vs. calls listed in the 'dns info' output. Since you masked all of the IP output, it is a little difficult to see where the DNS servers are in relation to your filers. You may have bad connections, firewalls, etc., disrupting your DNS lookups. You might also want to check the logs on your DNS/DC machines for errors. If the secondary server is always "DOWN", you may have connection problems there as well. I can't say much more at the moment since the original error message seems to be gone.
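As a rough checklist from the filer CLI (the DNS server IP here is a placeholder):

    dns info                  (error/call counters and server state)
    options dns
    rdfile /etc/resolv.conf
    ping 10.0.0.53            (basic reachability of each listed DNS server)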
Hi, Just a couple of things: 1. Your "-q" argument should actually be a qtree path, e.g. "-q /vol/repl_OSSV_test/host1_e_ossv". This qtree will be created when you run the snapvault start command on the destination filer with the same destination qtree argument. 2. The OSSV service should not be running during LREP. Hope this helps
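For illustration, the destination-side command would look something like this, assuming a Windows OSSV host named host1 backing up its E: drive (names taken from your example, not verified):

    snapvault start -S host1:E:\ /vol/repl_OSSV_test/host1_e_ossv

The destination qtree host1_e_ossv is created by this command and has to match the qtree you hand to the -q argument.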
Hi, I can't really tell you much about the Windows NFS implementation; I've never used it. Does "-sec=sys" not work for Windows mounts? Checking mount problems on filers is basically just a matter of running 'options nfs.mountd.trace on' and/or using 'exportfs -c client_IP /vol/your_vol [rw|root|sys|none]'. Again, you may have more success using a qtree below the volume level: mount the volume itself as root from somewhere, then set the owner to nobody and the sticky bit (chown nobody:nobody qtree_name; chmod +t qtree_name), rather than trying to hack this via mount options.
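A minimal sketch of both checks, with a placeholder client IP, filer name, and volume/qtree names:

On the filer:

    options nfs.mountd.trace on
    exportfs -c 10.0.0.5 /vol/your_vol rw

From a host that has root export rights:

    mount filer:/vol/your_vol /mnt/v
    chown nobody:nobody /mnt/v/qtree_name
    chmod +t /mnt/v/qtree_name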
Hi, You basically just need to use the first export with "-sec=sys"; as long as the NetApp can't map the user ID to local information, the files belong to nobody. You don't really supply much information. You can also just set the sticky bits on the top-level directories and chown them to nobody from a server to which you have exported "root" mount access rights. From the exportfs(5) manpage:

anon=uid|name
Specifies the effective user ID (or name) of all anonymous or root NFS client users that access the file system path. An anonymous NFS client user is an NFS client user that does not provide valid NFS credentials; a root NFS client user is an NFS client user with a user ID of 0. Data ONTAP determines a user's file access permissions by checking the user's effective user ID against the NFS server's /etc/passwd file. By default, the effective user ID of all anonymous and root NFS client users is 65534. To disable root access by anonymous and root NFS client users, set the anon option to 65535. To grant root user access to all anonymous and root NFS client users, set the anon option to 0.
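As a sketch, the corresponding /etc/exports line would look something like this (volume name and admin host are placeholders):

    /vol/vol1 -sec=sys,rw,anon=65534,root=adminhost

Run 'exportfs -a' afterwards to re-export everything in /etc/exports.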
I guess it would help if you posted some info (anonymized as necessary). Output from the filer(s) CLI: 'dns info', 'options dns', and the contents of /etc/resolv.conf from the root volume (normally vol0) of your filer(s). Which NAS protocols are you running?
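All of that can be gathered straight from the filer CLI (rdfile just prints a file from the root volume):

    dns info
    options dns
    rdfile /etc/resolv.conf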
Virus scanning is a problem for basically all virtualized Windows environments. Whatever you decide to do on the storage system probably isn't really going to fix the problem; it is just going to postpone it. Virus scanning vendors do offer central management of schedules, so you normally have a chance to group and randomize scanning as a somewhat longer-term solution. VMware is also working on integrated AV solutions, but I couldn't tell you what works best there. That being said, setting 'no_atime_update' is technically unproblematic on the filer side. You just need to know whether you have systems that rely on access time being updated. You probably should consult the best practices for VMware, but we use it on NFS-based datastores for VMware with no apparent side effects. I couldn't tell you off-hand how much of a load reduction you should expect, but I don't expect it to be that significant. CIFS access probably isn't a terribly large part of your load, but you basically need to know whether the file systems need to update access time for some reason before changing the settings. Hope this helps. S.
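For reference, turning it on is a one-liner per volume (the volume name is a placeholder), and the plain 'vol options' listing confirms the setting:

    vol options your_vol no_atime_update on
    vol options your_vol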
You can easily simulate the effects of PAM cards. Do a search on NOW for PCS (Predictive Cache Statistics). Which tool you use is, of course, going to depend on your needs and budget. SSD solutions are going to be very expensive. Some CAD programs do really stupid things like keeping temp files on networked storage instead of locally or in memory.
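If I remember correctly, PCS is switched on via the flexscale options and watched with the stats presets; treat this as a sketch and check the PCS docs on NOW for the exact knobs and a sensible emulated cache size:

    options flexscale.enable pcs
    options flexscale.pcs_size 256GB
    stats show -p flexscale-access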
It looks like you are pressing Ctrl-C too early. Do it after you see "Press CTRL-C for special boot menu" on the console, then pick option "5" for Maintenance Boot. You don't need to see the disks to get to this menu (or it would be pretty useless). You also need to make sure that you are using the correct controller FC port and the correct speed on your shelves (older models didn't support 2 Gbit/s speeds, etc.). Most of this should be in the documentation. The controller itself should be in module slot B of the controlling disk shelf.
I'm guessing that since this thread has been dead since June, that he either figured it out or is currently working somewhere else, hehe... Probably nothing to see here...
I feel your pain, Sal... RBAC is a real PITA... What do you consider "CIFS administration"? What do you want the user to NOT be able to do?
You are also running about a two-year-old release (albeit with some patches). You might want to check the fixed-bugs list against a release of a bit newer vintage... You can find bug-fix comparisons between releases on the NOW site... there should be 500 or so fixes since 7.3.1.1 ...
IPs added to interfaces in ipspaces assigned to vfilers first have to exist in the vfiler. Default routes in vfilers are not mandatory, but you almost used the correct syntax (modulo gateway and hop count) if you wanted one. Local subnets are just a matter for ARP. This would need to be added to your /etc/rc file. "vifs" on 8.x are called "interface groups" or ifgrps ... just to avoid confusion... When you configure partner interfaces, just pair up the VLAN interfaces. Remember, corresponding ipspaces with the correctly assigned VLANs have to exist on both filers or things will go fubar when you fail over.
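A rough sketch of the pieces for one side (all names, VLAN IDs, and addresses are placeholders, and the matching ipspace/VLAN setup has to exist on the partner as well):

    vlan create ifgrp1 100
    ipspace create ips-prod
    ipspace assign ips-prod ifgrp1-100
    vfiler run vf1 route add default 10.1.100.1 1    (optional default route, run inside the vfiler)
    ifconfig ifgrp1-100 partner ifgrp1-100           (partner statement for your /etc/rc)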
Well, you can always check what ONTAP would do: 'aggr add aggrX -n 7'. The -n flag lists the disks that would be added without actually adding them, so you can see for yourself...
What exactly is not working? I have over 50 filers (and probably 30+ vfilers) that I can reach with OpenSSH using ssh keys. The public key of whoever is going to log in as root has to be in <root volume>/etc/sshd/root/.ssh/authorized_keys2. The key has to be created without a password/passphrase, or you have to use ssh-agent on your host to deal with the passphrase. Try to log in via ssh first... perhaps with 'ssh -vvv' to see what is going on. You might want to make sure that you have no strange default local configuration that is breaking things for you.
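In case it helps, the basic flow from the admin host looks like this (the filer name is a placeholder):

    ssh-keygen -t rsa
    (copy the contents of id_rsa.pub into <root volume>/etc/sshd/root/.ssh/authorized_keys2,
     e.g. via NFS/CIFS access to the root volume or with 'wrfile -a' on the filer)
    ssh -vvv root@filer version       (test with full debugging; 'version' is a harmless remote command)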
You might also want to try upgrading to Brocade FOS 6.3.2b ... 6.2.2 is a bit buggy... Keeping things updated on the ONTAP 8.0.x side is probably going to be a regular activity as well, since it seems to be as buggy as its twin with the 7.3.x tag...
Hi, Try enabling SMB2 via 'options cifs.smb2.enable on' on the filer CLI. This should help with the Win7 clients. How far away are your clients? The window sizes might still have to be increased a bit, depending on distance and expected bandwidth. You can only get a theoretical maximum of about 125 MB/s over a 1 Gbit/s link. If your clients don't get distributed evenly among the 4 interfaces in your ifgrp, they will share the bandwidth of a single interface. As usual (and as recommended probably hundreds of times in the archives), run 'sysstat -x 1' on the command line to see whether you are pushing your disks to the maximum during operations. Did you create the aggregate with all of the disks before you added data, or did you add data before you added the rest of the disks? If the latter, you might need to reallocate your data across all the disks.
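To summarize the CLI side (the volume name is a placeholder, and read up on reallocate before running it against production data):

    options cifs.smb2.enable on
    sysstat -x 1
    reallocate start -f /vol/your_vol     (only if the data predates the added disks)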
Fundamentally, you don't need to worry about MSSQL's chatter about stripe sizes and block sizes, because these are irrelevant when your storage is on NetApp. The most important thing is to get the NTFS filesystem aligned within the VMDK file for NFS; for LUN-based storage, additionally make sure that the LUN is set up as a VMware-type LUN and that the VMFS filesystem is formatted with the correct offset (see the article included above). Most of the filesystem recommendations in MSSQL's docs are meant for simple disk arrays. Always follow the storage array's recommendations for filesystem alignment and you will be fine. The rest is just a matter of having enough disks for your I/O demands (operations per second are the primary concern, since capacity is rarely a problem).
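For a fresh Windows guest disk, a hedged example of creating an aligned partition with diskpart (the disk number is a placeholder; align is in KB, and 1024 KB is a safe multiple of WAFL's 4K blocks):

    diskpart
    DISKPART> select disk 1
    DISKPART> create partition primary align=1024

Windows Server 2008 and later already align new partitions at 1 MB by default, so this mainly matters for older guests.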
I guess it is important to remember the differences between share-level and filesystem-level security and to see where you actually need to add any restrictions. Basically, if the filesystem rights are already correct, you have limited guest access to CIFS (see the cifs options), and you implement ABE, you have a pretty decent start already. Share-level rights are really only necessary in special cases once the filesystem rights are in place along with ABE.
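For example, on the 7-mode CLI (the share name is a placeholder):

    cifs shares -change your_share -accessbasedenum
    options cifs.guest_account ""         (an empty guest account disables guest access)

That enables ABE per share and closes the guest door in one go.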
The first thing that comes to mind is to install PuTTY or some similar SSH client for Windows and see how that goes. I would have thought that Windows would finally have gotten a secure remote command-line shell included by now...
How about posting the commands that you are using? Have you tried using the IP of the source filer to eliminate potential DNS problems? Check the content of the SnapMirror log file in your root volume as well: /etc/log/snapmirror ...
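Both checks in one place, in case it helps ('-l' gives the verbose view):

    snapmirror status -l
    rdfile /etc/log/snapmirror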