Hi,

Moving the disks over should be relatively unproblematic. There are a few things to remember that might make it easier. Frankly, if the 3210 isn't running in production yet, I'd upgrade it to 8.0.1P3 first.

1. If you want things to come up right away, make sure you have the newest shelf and disk firmware in place. Often enough that is most easily done with an upgrade, and the point made about having your 2050 at 7.3.5.1 is probably a good idea, but you can also upgrade such things manually by downloading the new firmware and copying it into the respective directories under /etc on the root volume. Also make sure that your shelf types and modules are supported under 8.x.

2. You are going to have to assign the disks to the new system. This can be a bit of a pain if the 3210 is in production, but you can script it. I normally just script a list of "disk assign .... -f" commands and paste them on the command line (see the sketch at the end of this post). It's low-tech, but it allows you to watch how the system reacts.

3. The new aggregates will first show up as "foreign". You really want to make sure that you have no duplicate volume or aggregate names. If you happen to move vol0 with the disks, this isn't really a problem as long as the 3210 is running, because the system will just rename it to vol0(1) (if I remember correctly). If it isn't running, you will have more trouble booting the system, as there will be two volumes marked as "root". Just bring the aggregates online and the system will start converting the filesystems to 8.x filesystems.

4. You really need to be sure that you aren't going to have to go back to a 7.x system, as reverting the disks to 7.x is a much bigger bucket of worms. Ideally, you should have both systems at the same OS level so that a fallback to the previous hardware is possible without a lot of extra work.

5. Make sure that your new systems have had a "diag" run done on them (boot the diag kernel) and that the NVRAM battery is at full charge. Basically, you should be confident that the 3210 systems are in good working order before the migration.

Much of this might be in the procedures that you found. There should be no real difference in migrating external shelves from a 2050 or a 2040.

I hope at least some of this helps. Good luck.
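As a rough illustration of the scripted "disk assign" approach (the disk names and prompt here are made up; check the syntax against your ONTAP release before pasting anything):

    3210> disk show -n                # list the disks that are not yet owned
    3210> disk assign 0a.16 -f        # -f forces the assignment even though the disk carries old ownership
    3210> disk assign 0a.17 -f
    3210> disk assign 0a.18 -f

Pasting a handful of these at a time lets you watch the console for complaints before moving on to the next batch.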
Hi,

There are probably other TRs on capacity planning and such, but in general, having separate aggregates for different usage probably has no real value in most cases. Your basic "exportable" unit is going to be the volume or volume+qtree pair anyway. I/O is going to be a matter of spindles up until you saturate the available bus/CPU I/O on the controller "head", so you might as well go for larger aggregates. If you get the chance to look at the spec.org benchmarks for NetApp, you will see that raidgroup sizes probably don't exceed 18 disks (although I have used up to 28 where the access patterns and storage utilization requirements made it feasible) and the aggregates had large numbers of disks (50+). Multiple raidgroups (a parity grouping) in an aggregate are, of course, possible, up to the maximum aggregate size (which varies depending on whether you are using ONTAP 8.x, >= 7.3.3 (iirc) or < 7.3.3). Larger aggregates give you a bit more flexibility, and the only real downside is a large-scale problem with the disk subsystem (which I have seen happen, but is highly unlikely) that could perhaps have more impact on a large aggregate than on multiple smaller aggregates. Multipathing your disk shelves can help here, depending on the type of failure and what your possibilities (number of disk ports) and requirements are.

Deduplication up to this point is a volume operation. It can certainly be I/O intensive up to a point (starting it), but after that it just checks new data for existing duplicate blocks and also deduplicates new duplicate blocks after their fingerprint (and subsequent sanity checks) has been added for that volume.

As Baijulal hinted at, expanding aggregates on the fly can be a source of I/O bottlenecks because of how WAFL distributes data across disks. New disks will be "filled" until they are like the remaining disks. If too few disks are added, they will become "hot" disks because all new data will be written there, and it might even remain there if the data change rate is low. This means you will suffer under some strange I/O limits for certain operations involving this "new data". Thankfully, NetApp has the "reallocate" command to alleviate such problems (not something you find from a lot of storage vendors).

Setting up new systems is always a bit of a black art because you rarely know exactly how things will evolve over time, and making changes later on can be problematic, depending on your SLA requirements, host (consumer) systems, etc. You haven't really said much about the type of systems that you are using or what type or number of disks. I/O from disks is going to become more and more of a problem over time, as disks generally get larger in size while retaining roughly the same I/O-per-second performance. At some point most of us will have to start using PAM modules to get enough disk utilization where I/O requirements are more demanding.

Good luck.
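Just to make the raidgroup/aggregate sizing point concrete, here is roughly what it looks like on the command line (7-mode syntax; names and disk counts are made up, so treat this as a sketch rather than a recommendation for your config):

    filer> aggr create aggr1 -r 16 32      # 32 disks in raidgroups of 16, i.e. two raidgroups
    filer> aggr status -r aggr1            # shows how the disks ended up in the raidgroups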
Hi,

Try reading 'man showmount' on one of your unix systems. It won't give you a perfect, stateful picture of exactly which machines have mounted a volume, but it is a start.
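A minimal example, assuming "filer01" stands in for your filer's hostname:

    $ showmount -e filer01     # what the filer is exporting
    $ showmount -a filer01     # which client:path pairs the filer thinks are mounted

As the man page will tell you, the -a output is only as good as what the clients reported at mount/umount time, so treat it as a hint, not the truth.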
Hi,

To enable logging of CIFS "login" errors in the messages file, you are first going to have to run 'options cifs.trace_login on'. A little study of the file access documentation would probably be of use as well.

What sort of shares are you trying to access, and how are you trying to access them? What does 'cifs shares' say about the shares that you are trying to access? Does your filer have multiple hostnames or IP addresses? Multiple IP addresses in the same subnet? There isn't a lot of information to go on here.
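For reference, the commands I mean look roughly like this (the filer prompt is just a placeholder):

    filer> options cifs.trace_login on     # log CIFS login failures to the messages file
    filer> cifs shares                     # list the shares and their access restrictions
    filer> cifs sessions                   # see who is currently connected and how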
Hi, This is a known bug. You guys really need to learn how to use the bug tools on NOW. http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=344812
Hi,

If you have a system for testing and training, then you certainly have production systems as well. Use the serial number from one of them to set up a NOW account, where you will find all the documentation that you need. It isn't entirely relevant that you can do all of this on an E(xtra)M(oney)C(harged) Celerra, because a NetApp is an entirely different beast. Other than this, there is probably no official/legal way to give you access to the documentation. If someone had the foresight to order the paper/printed documentation with the system originally, then you have something to work with there.
Hi,

The Host Utilities are required for all operating systems as part of a supported configuration. You may very well get your migration completed without them, but things like upgrading the NetApp software (something that you will probably want to do at least once a year to get new features) are going to go terribly wrong if your disk time-outs are not set to the required values. The values set by the Host Utilities may very well invalidate your supported EVA configuration, so there is a bit of a dance to get done here. Installing the Host Utilities as soon as possible after your migration would probably be the best possible compromise.

You might also want to reorganize your storage to take advantage of the additional features you get by using NetApp. Copying directly from a "dumb disk" system like the EVA to a NetApp will probably tie your hands for some of the more advanced features that NetApp can offer. Make sure you read the chapter on Block Access Management before you proceed.

One of the most important things to get right is the LUN type. This is a pretty essential part of getting the correct performance out of your LUNs. The names of the LUN types are NOT as simple as they seem (unless you are an HP-UX administrator and only have one type to pick from), and you should probably read up on what is recommended. It can mean a 15 to 20% performance hit if you get it wrong. As long as you are setting up the storage manually before you start mirroring, you shouldn't have problems. An exact block copy of the EVA storage would result in significant performance problems.

Reallocation (a sort of LUN defragmentation) is something that you should also familiarize yourself with, or your performance will slowly degrade over time and you'll be the target of complaints and general abuse from your application people.

Like most things, the learning curve is a bit steep at the beginning, but it gets easier over time.
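Just so the LUN type point doesn't get lost: the type is set when the LUN is created, e.g. (sizes, paths and types here are only placeholders; pick the ostype that matches your host OS and partitioning scheme from the Block Access Management Guide):

    filer> lun create -s 200g -t hpux /vol/vol_oradata/lun0
    filer> lun create -s 100g -t windows /vol/vol_fileserv/lun0

Getting the ostype wrong misaligns the I/O, which is where that 15-20% penalty comes from, and it can't be changed afterwards without copying the data to a new LUN.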
Comparing HP-UX to Windows is a bit of an apples and oranges comparison. The fundamental reason for the reboot (even on OSes like Solaris) is to get the various disk time-out values that NetApp requires for a supported configuration enabled. Now it may very well be that the EVA settings are pretty similar, but if not, the first time you start doing upgrades on the NetApp and get the attendant short stops in I/O from the storage system, you are going to lose disks and crash applications. It may very well be that HP-UX can make such kernel settings without a reboot, but Windows relies on Registry settings that get imported at boot.
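If you want to see what is there today before the Host Utilities touch anything, the main disk time-out lives in the registry and can be checked like this (the Host Utilities documentation lists the exact values required for your configuration, so I won't guess at them here):

    C:\> reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeoutValue

That value (and the related HBA driver settings) is read at boot, which is why the reboot is unavoidable on Windows.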
Hi,

I have played with PM a little, and this was one of the main reasons that I didn't continue with it. I'm not sure how well this is documented (the DFM/OM/PM docs have never borne much fruit for me, unfortunately), but if PM says I have to "re-initialize" every relationship (I had 500+ at one point), then integration is going to be more than scary for most people. Wouldn't it be a good idea to change the error message here? ...or some part of the import procedure, so that an update is attempted to clear this "baseline error" state before scaring the crap out of the admin?
I guess that depends a bit on how you do your snapmirror setup. In a situation where you just want more or less identical copies in both locations, volume snapmirror is probably the better idea. Then you have block-identical copies on both source and destination in all snapshots. You can always break the mirror and roll back to the last snapshot to start up your systems in the state of that last backup (a rough sketch is below). How much data you can afford to lose and how fast you have to have things going again pretty much determine how often you take snapshots, besides the load on the systems.

If you really need very high availability, then a MetroCluster is probably the best idea (there are no 2040 MetroClusters), but that is rarely necessary for VMware setups. There are disaster recovery mechanisms (VMware Site Recovery Manager, or something along those lines) for making such migrations less painful, but these aren't free either.

The best thing is probably to test this on a test datastore and get to know what to expect and what you will need to script or document in procedures. The destination filesystem will be OK if you have a snapshot there that is based on a consistent snapshot on the source. That means volume snapmirror or snapvault, or a combination of qtree snapmirror and some scripting.
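A rough sketch of the "break and roll back" part, with made-up volume and snapshot names (run on the destination filer):

    dst> snapmirror quiesce vol_vmware_mirr
    dst> snapmirror break vol_vmware_mirr                    # destination becomes writable at the last transferred state
    dst> snap list vol_vmware_mirr
    dst> snap restore -t vol -s nightly.0 vol_vmware_mirr    # only if you need to go back further than the last transfer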
Hi,

From my experience, and things might have gotten a bit better since I last had to do any migrations, mixed mode is problematic and I would avoid it. Basically, the security style on a qtree tells the filer where to ask for user information. The file rights are admittedly different; NT ACLs can be more complex if everything is used and have more bits as far as file permissions are concerned than unix files. The easiest method is to use the ntfs security style wherever you mostly access the files from Windows and let your unix users be the same as your Windows users. Then a unix user will have access to all the files where that same Windows user would. Using the unix security style will reduce the complexity of the permissions, but then also a certain level of security. If you have, for example, Oracle databases via NFS, then the ntfs security style is probably a bad idea; it just complicates authentication (and will cause problems whenever AD authentication is problematic... like if the time sync isn't within 5 minutes, etc.).

Sharing out the "ntfs" (or, for that matter, "unix") qtrees is just a matter of exporting them to your unix servers rw, with authentication "sys" if you don't use kerberos or such, and then mounting them from the servers either permanently or with automount.
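The mechanics are pretty short; something like this, with made-up qtree and host names:

    filer> qtree security /vol/vol_data/projects ntfs      # or "unix", set per qtree
    filer> exportfs -p sec=sys,rw=unixhost1:unixhost2 /vol/vol_data/projects

The -p form writes the export into /etc/exports so it survives a reboot; after that it's just a normal NFS mount (or automount map entry) on the unix side.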
Hi,

Basically, if you already have a snapmirror relationship established (it's only minimally different if you don't) and an update fails, the next scheduled update (or a manual update) will simply continue from the last checkpoint and complete the mirroring of the last snapshot.

If you want multiple copies on the destination, you simply need to schedule normal snapshots on the destination (hourly, nightly, weekly). This isn't necessarily ideal, because the two sets of snapshots won't necessarily contain the same information. This works OK for normal user data (unstructured CIFS or NFS) but would be problematic for data that has to be "crash consistent". The tool for this, at the moment at least, until QSM and SnapVault are merged, is SnapVault, because SnapVault creates a local snapshot on the destination when the transfer is completed (and can start SIS jobs automatically after the transfer is finished as well). You could probably script something that would transfer and then create a snapshot, but you'd have to create your own delete/retention schedule as well.
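For the "normal snapshots on the destination" part, the schedule is just the usual snap sched, e.g. (volume name made up; this only applies where the destination volume is writable, i.e. a qtree snapmirror destination, not a read-only volume snapmirror destination):

    dst> snap sched vol_users_mirr 2 6 8@8,12,16,20    # keep 2 weekly, 6 nightly, 8 hourly taken at 08/12/16/20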
Hi,

Again, I don't think that this sort of blanket "dissing" of reallocation is helpful. With very high or very low deduplication rates (it would be nice for NetApp to actually publish some basic thresholds), reallocation will simply be moving contiguous blocks together that belong together. If I had, for example, 50 VDI images deduplicated with a 90% savings (and I do in places), then reallocation is basically just moving the data blocks together... the pointers wouldn't matter for the most part. If I have a "CIFS" volume with 10% deduplication and perhaps lots of pretty large files, then reallocation is probably going to help. There is, admittedly, a large area in between where reallocation probably won't help a lot, but it seems very plausible that for the low and high deduplication cases reallocation should give the same performance increases as for "un-"deduplicated volumes. I, like many of us, would really welcome more research from NetApp on the when, where and why for these two seemingly contradictory file system operations.
You are also going to need to install the Host Attach Kits (or whatever the name is this year) from NetApp to get a supported configuration, which means you are going to have to reboot to get the registry settings in place at some point as well... If you really have applications that can never go down, then you probably are going to need something in front of the NetApps... or you will have the same problem again in about 3 years... Almost hard to believe that anyone has 100% uptime apps on EVAs anyway... and the jump to the monster 6280s will be like going from a Prius to a Ferrari... 100% uptime systems probably should have been MetroClusters anyway... smaller systems, more redundancy...
Hi,

The blanket statements about reallocation here aren't really helpful. Now, I'm no big fan of having multiple LUNs per volume in the first place, unless they have largely the same content and there is some compelling reason to deduplicate heavily. They make a lot of other potential adjustments and operations a lot more difficult. If you do have a situation where the remaining 3 LUNs are largely identical and the deduplication rate is quite high (I'd guess over 70%), then you probably could benefit from reallocation (the same is probably basically true with very low deduplication rates), because you are largely just lining up blocks that would normally be allocated close together anyway. If these are database LUNs then you probably aren't going to see deduplication savings in the first place... and deduplication might just be wreaking havoc on your performance already... Things like VDI or VMware LUNs (system disks, static data) can give you better performance deduplicated with PAM cards... Anyway, as always, YMMV...
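If you want to find out whether reallocation would even buy you anything before committing to it, something like this is a low-risk way to check (the path is made up; also check how reallocate interacts with deduplicated volumes in the release notes for your ONTAP version first):

    filer> reallocate measure -o /vol/vol_luns/lun1    # one-off measurement, reports the current optimization level
    filer> reallocate start -f /vol/vol_luns/lun1      # full reallocation, only if the measurement says it's worth it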
Hi,

Besides what the other post said, what have you tried? Try reading up on the 'netdiag' and 'ifstat' commands on your NetApp. How long is the WAN connection... in milliseconds? The TCP window size for SMB 1.0 is typically very small, so high-latency links suffer a lot more. You can manipulate the filer side with the option cifs.tcp_window_size (17520 is the default), but I am not sure whether it is going to help or not. SMB 1.0 is just broken. Read up on calculating window sizes for TCP connections. YMMV. Check your switch ports if you are seeing collisions in the ifstat output. By the way, 7MB/s is still over 55Mbit/s, so you aren't doing too badly if the link is shared with other traffic.
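To look at what you have before changing anything (the 64240 below is just an illustrative value, not a recommendation, and the change may only affect new CIFS sessions):

    filer> ifstat -a                              # per-interface errors/collisions
    filer> options cifs.tcp_window_size           # show the current value (17520 is the default)
    filer> options cifs.tcp_window_size 64240     # example of raising it for a high-latency link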
Yeah... this is a good idea... cut-n-paste answers... How about you get everyone to do a PCS (Predictive Cache Statistics) analysis on their machines that don't yet have PAM cards to see if they could also see some benefits... Or actually just get the SunCorp guys to give a little less "marketing monkey" information and give some live numbers on how much faster their VMWare machines boot (they were like 95% virtualized)... Give my $100 to the poor SunCorp guy who actually digs up the better stats...
Hi,

You should be able to use ssh too if you put your public key in /vol/vol0/etc/sshd/root/.ssh/authorized_keys2. You'll need to have 'options ssh.pubkey_auth.enable on' set as well...
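Roughly like this, assuming ssh is already enabled on the filer (secureadmin setup ssh) and you have the root volume mounted on an admin host at /mnt/filer_root (that mount point is just an example):

    admin$ cat ~/.ssh/id_rsa.pub >> /mnt/filer_root/etc/sshd/root/.ssh/authorized_keys2
    filer> options ssh.pubkey_auth.enable on
    admin$ ssh root@filer version        # quick test; should not prompt for a password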
Hi,

You haven't really said what it is that you are mirroring, even if it is hinted at by the volume names. With normal unstructured data and Qtree SnapMirror, it is all fairly simple: just turn off the volume guarantees and run "sis" on the volumes (the space savings come once the "pre-sis" snapshots expire). With Volume SnapMirror, you basically need to have the same sizes on the source and destination, and many of the volume options are "replicated" to the destination, so adjusting space reservation or volume guarantees is a matter of setting them on the source.

If you are mirroring LUN (block) data, then deduplication on the source is probably not going to be ideal in many cases, and it is not possible on the destination, if I remember correctly. If you thin provision the LUN volumes on the source (turn off volume guarantees and space reservation) and follow the Best Practices (vol autosize, etc.), then you will again see the savings on the VSM (Volume SnapMirror) destination when the "pre-sis" snapshots "roll out" (expire). There's a TR on thin provisioning. It works as long as you adjust your volume sizes automatically (irregular snapshot growth) and keep your aggregates from going full... 90% is basically full if you expect any performance.
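The QSM case boils down to something like this (the volume name and sizes are placeholders; adjust the autosize limits to your aggregate):

    filer> vol options vol_data guarantee none        # thin provision the volume
    filer> vol autosize vol_data -m 1200g -i 50g on   # let it grow in 50g steps up to 1200g
    filer> sis on /vol/vol_data
    filer> sis start -s /vol/vol_data                 # -s scans the existing data, not just new writes
    filer> df -s vol_data                             # shows the dedup savings once the old snapshots expire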
Hi, What you really need to do is read the Block Access Management Guide... Windows 2008 has native MPIO... if that is even necessary for your setup... http://now.netapp.com/NOW/knowledge/docs/ontap/rel7351/html/ontap/bsag/frameset.html (should basically be the same for 7.3.2 ... ) Remember to install the Host Attach Kit... (or whatever they call it this week)... It's all in the docs.
Hi,

I can't really imagine how you would "do it" with a normal server either when it is already part of a domain, except for maybe adding all of the users as local users... You have to understand how authentication works. The file rights are going to be "looked up" by going to the domain controller, because that is how authentication is set up. You can make shares that map to IP addresses, but pretty much anyone will be able to do what they want there. You could do a bad hack of usermap.cfg and set share rights with SIDs, but generally this would basically be, as you say, a nightmare to administer... but, it's your life, hehe... The iSCSI suggestion would take 20 minutes to set up if you have a server with enough resources to be a file server and a bit of network. Trying to force it any other way is going to give you lots of gray hair, I would guess...
The only other compromise would be to set up a Windows server in the other domain and use some of the storage via iSCSI... Not quite the same thing with all of the NetApp advantages, but if you have extra disk capacity and no budget to build a fat Windows server, you could at least use the disk... If you had a cluster, you could put one partner in each domain as well, but for a single system... I don't see any other options...
Hi,

Unless the domains are trusted, you are pretty much out of luck: one authentication domain per filer (or per vfiler, if you have MultiStore).

Good luck.
Hi, The only option that I can see is to hack the /etc/asup_content.conf file. Admittedly this comes with a big fat warning not to do so, but if you have a test system and don't mind hanging it because you didn't quite decipher how the autosupport parser deals with the configuration in this file, then you should be able to trim things down. It seems to be relatively simple. Don't forget to make a backup of your final working copy as upgrades will probably overwrite this file. As always, YMMV. Good luck.
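Before touching it, at least capture the original, e.g. (the admin-host mount point is just an example; you could equally read it with rdfile and keep a copy somewhere safe):

    filer> rdfile /etc/asup_content.conf
    admin$ cp /mnt/filer_root/etc/asup_content.conf /mnt/filer_root/etc/asup_content.conf.orig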