Hi,

What you want to do is possible, but impossibly documented. There is a TR on RBAC (http://media.netapp.com/documents/tr-3358.pdf) and info in the System Administration Guide, but drilling down to the subcategories is no easy task. The best overview I've found is to use DFM/OM, where you get an expandable list of role capabilities.

To add your domain user/group, simply use: useradmin domainuser add DOMAIN\administrators_group -g administrators (or some other group that you create with the desired roles). There is a capability (or a number of volume capabilities) that you can assign to a role, then the role to your new filer group, then the group to your AD administrators group via the above command (or with 'modify' if the group already exists). You will need something like 'useradmin role create vol_admin -c "Role for volume admin" -a login-*,cli-df,cli-vol*,cli-qtree*', which still goes a bit farther than just resizing volumes. If you use FilerView, then you need a bunch of the 'api-*' capabilities as well. Finding a place where all of these are defined is probably the biggest problem if you don't have DFM. Assign the role to a group with 'useradmin group add <group> -r <role>'.

Good luck.
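Putting it together, a minimal sketch (the role and group names here are made up, adjust to taste):

filer> useradmin role create vol_admin -c "Role for volume admin" -a login-*,cli-df,cli-vol*,cli-qtree*
filer> useradmin group add vol_admins -r vol_admin
filer> useradmin domainuser add DOMAIN\administrators_group -g vol_admins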
Hi,

There are a couple of bigger switches, so you can avoid having to mangle the entire snapmirror.conf file:

options replication.throttle.enable=off
options replication.throttle.incoming.max_kbs=unlimited
options replication.throttle.outgoing.max_kbs=unlimited

... are the defaults. Just write a cronjob that changes the throttles when your "after office" hours begin and again before "office hours" start in the morning. Enable throttling first, of course.

The other option is the pricey "Protection Manager" software from NetApp. It does a lot more than throttling, of course.
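A minimal cron sketch from an admin host (the times and the 5000 KB/s daytime cap are made-up values, and I'm assuming passwordless ssh to the filer):

# loosen the throttle after office hours
0 18 * * 1-5 ssh filer1 options replication.throttle.outgoing.max_kbs unlimited
# clamp it again before the office fills up
0 7 * * 1-5 ssh filer1 options replication.throttle.outgoing.max_kbs 5000

Run 'options replication.throttle.enable on' once beforehand so the limits actually apply.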
Hi,

I assume you are looking for your snapmirror.conf file on the destination filer. Relationships that you set up via Protection Manager will not be written to snapmirror.conf, and I unfortunately can't help with how to add compression for such a setup. I don't think you are going to get any great savings with compression anyway. If you really have problems over WAN links, you might look into getting a couple of Riverbed devices: great WAN accelerators, and in my experience the most diverse and capable on the market.
Hi,

I'll just try to take a stab at this again and hope I'm not irritating the crap out of you. While I do use a Mac and have helped a little with a university installation, I haven't done anything in the last few years; I just use NFS at home.

Basically, as you probably know, the streams method of handling files is sort of broken in the unix/windows world. If you were using streams files before, then the migration probably should have been done with a Mac instead of robocopy. You might look at DAVE/ADmitMac and see if a test license is possible; that software does on the client side what your Z-IP software does/did, and DAVE is sort of supported, IIRC.

NFS (without the streams fork) is a pretty good option if you have an NFS license, and you might be able to obtain one for testing if not. There is at least one Mac related bug in ONTap 8.0.1, but it doesn't look like it directly applies to you. If you have the chance, running 8.0.1P1 or 8.0.1P3 might help (as a sort of unscientific wild guess).

The last option is, of course, to sniff the relevant network connections. I don't know how busy your filer is, but you have alternatives on both sides when using a Mac: tcpdump on the Mac side and pktt on the filer side. Wireshark should give you a chance to examine things even as a semi-layman. If not, the capture is a good basis for opening a case with IBM/NetApp.

So, now I'll shut up. I wish you luck.
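For the sniffing route, a rough sketch (the interface names and filer hostname are assumptions, substitute your own):

mac$ sudo tcpdump -i en0 -w /tmp/cifs-trace.pcap host filer.example.com
filer> pktt start e0a -d /etc/crash
filer> pktt stop e0a

Both produce trace files that Wireshark can open directly.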
Previous threads on such Mac OS X topics refer to this one: http://www.macwindows.com/snowleopard-filesharing.html#112210b Perhaps some client-side adjustments will help you as well.
Hi,

Like I previously wrote, the best case is to mix different access pattern types and prioritize according to SLA/customer expectation. As you have already burned up a few million on the 6280s, a little more money for some basic training might also be an idea.

Putting a 50TB SQL database anywhere would scare me, but I would highly suggest that you read all of the NetApp Best Practices papers you can find on SQL before doing anything that large. A traditional backup of a database that size would be prohibitive; setups using SMSQL and snapmirror/snapvault could get you a lot farther. You should understand a good deal about mountpoint disks on Windows and about allocating a number of LUNs for such a job. I often split log and database LUNs between the two heads to assure maximum IO from my investment.

You have the "ferrari" from NetApp, but you can easily make it as slow as a tractor if you don't know the system.
Hi,

Basically, if you need this kind of redundancy, you simply need to have a redundant switch infrastructure. A correctly configured and redundant core net will deal with failovers of such elements; assuming you are using STP in your network, new paths will be calculated.

The sort of failover you are trying to achieve via some NetApp functionality is fundamentally not a job for a host/server (a NAS unit is just an advanced server appliance) and would be far too complex for a host to resolve. That is why there are network protocols that solve these problems transparently for all hosts on the network at the same time. ONTap can deal with local link failures (STP can't help here unless the host is connected to multiple active links), and that is as much knowledge as it really needs to have about the network.

Most of this is also described in the Network Configuration Guide and the Best Practices TRs.
Hi,

You basically need to find the TR (Technical Report) on the WAFL filesystem (TR-3001, I think) to get a more detailed explanation. Fundamentally, a flexible volume is simply a storage abstraction; data from all volumes in an aggregate are simply "mixed" together on the disks.

When you add disks to an existing aggregate, WAFL tries to fill the new disks with "new" data until they are just as full as the existing data disks. This can often cause performance problems, as the new disks become "hot disks". A reallocation needs to be performed on all of the volumes in the aggregate to restore a balanced IO picture.

The rest is in the TR(s).
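For the rebalancing step, something like this per volume (a sketch; the volume name is a placeholder):

filer> reallocate start -f /vol/vol1   # one-time full reallocation of the volume
filer> reallocate status -v            # watch progress

Repeat for each volume in the aggregate that received the new disks.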
Hi,

You haven't really said what you are using the storage systems for, so it is a little difficult to make any specific recommendations. "Performance" from SATA disks is in most instances going to be modest at best.

You don't have many possibilities to expand your existing aggregates. IIRC, the 32-bit aggregate limit is 16TB plus parity disks. If you want larger aggregates, you need to upgrade to ONTap 8.0.x, create a new 64-bit aggregate, and move all of the existing volumes over to it, preferably with snapmirror. The root volume will need to be moved with ndmpcopy. You can then destroy the current aggr0, zero the disks, and add them to the 64-bit aggregate.

Most likely your next purchase won't be 1TB disks or even DS14 shelves, as they will already be EOL. Keeping up with the more rapidly changing SATA disk sizes and bus architectures is truly one of the more irritating aspects of using the "cheap disks".
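The move itself would look roughly like this (a sketch; the names, disk counts, and sizes are invented, and I'm assuming ONTap 8.0.x is already running):

filer> aggr create aggr1 -B 64 24@1000      # new 64-bit aggregate from 24 x 1TB disks
filer> vol create newvol1 aggr1 2t
filer> snapmirror initialize -S filer:vol1 filer:newvol1
filer> ndmpcopy /vol/vol0 /vol/newvol0      # the root volume goes over with ndmpcopy

After cutover, 'aggr destroy aggr0' and 'disk zero spares' free the old disks for reuse.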
Hi,

How you set up your aggregates is largely a result of knowing your data. What sort of access patterns (user, application, database, vmware) will there be? How will the data be backed up? What sort of growth is expected? You will get the most from the system with a good balance of all of these across all of your aggregates.

Storage challenges are really starting to gravitate towards I/O rather than GB. Disk sizes are increasing, but the I/O each disk can produce remains largely the same. This might work pretty well for unstructured user data (except for direct backups from primary storage), but it can be problematic for I/O intensive applications. Disk sizes also make it harder to put together enough disks for sufficient I/O before the maximum aggregate size is reached, and 64-bit aggregates are only part of the answer, as they also require more system resources to manage. Using cache memory to short-cut disk access for more frequently used data is basically the reasoning behind the development of PAM modules. Grouping data with different access patterns on larger aggregates, with flexshare prioritization, is basically how it is supposed to work best (see the sketch below).

There are probably no perfect setups unless you have a crystal ball or an unlimited budget. In the real world, the ability to react when unexpected performance problems show up is essential. Monitor performance with your own tools or with NetApp's. Knowing how raidgroup sizes, disk types, snapmirror, deduplication, reallocation, backup, and rogue applications can affect performance are useful knowledge points to have, and a healthy background in Ethernet and TCP/IP, as well as Fibre Channel, will come in handy as well.

Best Practices can be very academic compared to the real world constraints of time and money. Making it all work is, more often than not, a matter of experience and personal motivation.
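The flexshare part, as a minimal sketch (the volume names are hypothetical):

filer> priority on
filer> priority set volume db_vol level=High
filer> priority set volume archive_vol level=Low
filer> priority show volume db_vol

Mixing a high-priority database volume with low-priority bulk data in the same aggregate lets the busy workload win when the disks get saturated.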
Hi,

You won't need the "nobody" or "anon=0" hacks if you export the filesystem with, for example:

/software -actual=/vol/vol3/data/software,sec=sys,rw=<admin.host.ip.addr>,root=<admin.host.ip.addr>

What you have basically done is allow anyone to mount your software share read/write as user nobody... and anon=0 is a blank check to read/write for everyone... not a good idea, usually.
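To apply it without hand-editing /etc/exports, something like this (the 10.0.0.5 admin host is a made-up address):

filer> exportfs -p actual=/vol/vol3/data/software,sec=sys,rw=10.0.0.5,root=10.0.0.5 /software
filer> exportfs -q /software    # check the resulting export options

-p persists the rule to /etc/exports as well.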
Hi, I think you'll find the necessary information in the subsection on "Managing UNIX credentials for CIFS clients": http://now.netapp.com/NOW/knowledge/docs/ontap/rel7351/html/ontap/filesag/frameset.html See also, of course, the usermap.cfg file.
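For reference, /etc/usermap.cfg entries are simple "windows => unix" mappings; these names are invented:

DOMAIN\jsmith => jsmith
DOMAIN\Administrator => root

The first matching entry wins, so put specific mappings before any wildcard rules.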
You will only need 2 vscan servers for redundancy, and these can easily be VMware instances. Virus scanning is much more efficient this way than the way it has to be done on Windows; read the TR. The 'vscan' functionality doesn't scan the entire filesystem like you have to do on Windows. It just makes sure that new writes are scanned, and that reads are scanned if the vscan server has newer virus definition files. You will be able to manage tens of TB with just 2 vscan servers. You'll never get close to that using Windows servers for file serving, not to mention the extra server/power/heat load of having to dig through everything once a week. The only real challenge is learning to tune your vscan software.
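On the filer side the setup is small; a sketch (the extension choice is just an example):

filer> vscan on
filer> vscan scanners                  # list the scanner servers that have registered
filer> vscan extensions include add mp3

The scanners register themselves from the Windows side, so most of the tuning really does live in the AV product.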
Hi,

Performance issues are often more complex than they seem at the outset, and you haven't really said much about what the filer is doing: what sort of data, access patterns, volume/aggr setups, deduplication, reallocation, etc...

Without banging the "upgrade" drum too hard, I'd jump up to 7.3.5.1. 7.3.2 had (as all ONTap software inevitably has) a few nasty bugs, and 7.3.5.1 seems to be running well for me on well-loaded machines with all protocols enabled, except NFSv4. I'd also suggest you try running without NFSv4 as a bigger general test; NFSv4 is still not as mature and widely used as it perhaps should be.

You might also want to enable logging of your domain controller connections (see 'options cifs') and some tracking of DNS response times, as well as performance on your kerberos server. Lags in completing authentication can cause these spikes.
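Concretely, something like this (hedged: check 'options cifs' on your version for the exact option names):

filer> options nfs.v4.enable off             # temporarily rule out NFSv4
filer> options cifs.trace_dc_connection on   # log DC connection activity

Watch /etc/messages for slow or failing DC traffic while the spikes occur.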
Hi,

I'm guessing you have exported the volumes/qtrees with ntfs security to your admin linux box with root rights. After that you need to make an options change:

options cifs.nfs_root_ignore_acl on

Then you should be able to mount the filesystems. You won't necessarily see much as far as filesystem rights go, and you won't be able to solve every problem via unix/nfs, but it will get you access.

If you want to make sure that things are "cleaner" between cifs and nfs, remember to set "group" permissions and a umask (see the 'cifs shares' and 'cifs access' documentation) on your cifs shares. It might also be an idea to check out 'options cifs.preserve_unix_security'.

I know of no one that has had success with 'mixed mode' qtrees. I'd avoid it like the plague.
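For the share-side settings, a hedged example (the share and group names are invented):

filer> cifs shares -change projects -forcegroup engineering -umask 002

That way files created over CIFS land with a sane unix group and group-writable permissions for the NFS side.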
The replication alternatives between NetApp devices are ONTap software dependent. Migrations from other platforms require using robocopy on Windows, or rsync on unix/linux (or products of equal capabilities), to move the data over to the NetApp unit, and you need an external server for such migrations. For Windows data, robocopy is pretty much the only choice of the two, as rsync would mangle the file rights.
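A typical robocopy invocation for this, as a sketch (the paths are placeholders):

C:\> robocopy \\oldserver\share \\filer\share /MIR /COPYALL /R:1 /W:1 /LOG:C:\mig.log

/COPYALL carries the NTFS ACLs across; run it once for the bulk copy and again for a short delta pass at cutover.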
Hi,

I'm pretty sure that the operation should be unproblematic. I regularly use 'mv' to rename qtrees just "above" the lun, and I do this while the lun is online. The same thing works with changing volume names as well. While I haven't actually mv'ed a lun into a qtree, as long as it stays within the same volume I wouldn't expect any problems; the lun doesn't lose any of its own characteristics through such moves within the file system. Otherwise all such copy operations using snapmirror would be very problematic.

You could always test it first with a volume clone, but I think you're pretty safe. As always, YMMV, hehe...

Good luck.
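For the lun itself there is also a dedicated command that does the same move within a volume; a sketch with made-up paths:

filer> lun move /vol/vol1/lun0 /vol/vol1/qtree1/lun0

Same caveat: source and destination have to be in the same volume.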
Since your question is still somewhat unspecific, I can only direct you to the system documentation for FC and iSCSI (for 7.3.5.1): http://now.netapp.com/NOW/knowledge/docs/ontap/rel7351/html/ontap/bsag/frameset.html Assuming you have a NetApp system and a NOW account, you should be able to find a lot of answers there. In a few weeks, when you are done reading and searching, you can ask questions about what you didn't find but are still wondering about.
Your question is a little difficult to understand, but depending on whether you are using NFS or a LUN via FC/iSCSI, it can be more or less of a problem. I've never had a problem with NFS based Oracle filesystems, but that doesn't mean it can't happen. Basically, these topics are covered in the system administration guides. For LUN based storage, "fragmentation" can be routinely measured and "de-fragmented" with the 'reallocate' command, which is a recommended practice. It can be used on other volumes as well.
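In its simplest form (the lun path is a placeholder):

filer> reallocate measure /vol/oradata/lun0   # report the current optimization level
filer> reallocate start /vol/oradata/lun0     # set up a regular reallocation scan

'reallocate status' shows how far along a running scan is.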
Ok, let's agree that the marketing idiots didn't give DataMotion (vs. Data Motion) for volumes the best name. When talking about FC, only the first is relevant; it doesn't require Multistore, just a new enough 8.x, and migrating LUNs between filers is probably more than Multistore was dimensioned for. "Data Motion" (the vfiler one) is, conversely, not supported in 8.x. It was a migration (DR) mechanism built around volume snapmirror and packaged as "super cool" by the marketing droids, and it required you to move entire vfilers... a bit of a big hammer for day-to-day operations... I'll stand by my definition and you can stand by yours. As long as both 7.x and 8.x are supported, they are, in a sense, both valid.
DataMotion and Multistore data migration are only remotely related through their use of snapmirror. My contention was 100% correct: "DataMotion" as a NetApp product/functionality does not require Multistore.
FC doesn't really need vfiler migrate, per se. I guess I object to overloading a concept that already does what it is supposed to: it was designed to virtualize disk resources over different authentication domains for NAS storage. I don't think NetApp gains any advantage in cutting up ONTap to run "logical domains" or "virtual filers" beyond what Multistore does now, in any case not just so that FC storage "seems like" it belongs to the same customer because it is in the same vfiler.

I would be infinitely more excited by something like what Spinnaker did before NetApp sort of killed them by buying them (GX has taken years to become more than a niche product). We need to be able to move LUNs (and, for that matter, all NAS resources) between controllers (and not just aggregates) without downtime (no, CIFS isn't there yet). This is "High End" FC storage. Data Motion is the first step. More FC abstraction is needed, or more "scale out" possibilities, to be able to migrate between multiple controllers... or from old to new... but I digress...
Data Motion does not require Multistore. It only requires that, if you do use Multistore, the volume (LUN) is managed by vfiler0. This makes it easily usable for FC consumers. Your post is either unclear or incorrect.
Well, before we get all bubbly and starry-eyed about the wonders of Cisco: Virtual SANs/Fabrics/Switches have existed for some time without the Cisco name on the equipment. The VSAN abstraction is most certainly one that Cisco had to make (encapsulated within vlans) to ensure the no-loss delivery mechanisms that FCoE requires. Really, the marketing guys have done their job, but crack open a book on FC, or just an administrator's guide from Brocade, and the reality looks a lot different...