Hi, You just need to remember that the rc file is just a batch file: all you need to do is add valid commands in the correct logical order. You might start out by initializing your interfaces first. The media option isn't really necessary. I'm not sure how the flow control "send" recommendation crept into best practices, but setting it to "full" never caused me any problems, because it is a matter of negotiation with the switch anyway. I don't have a lot of setups where flow control is actually used on a regular basis.

This part seems confused, and it seems you don't know what the commands are supposed to do:

ifconfig e0b `hostname`-e0a 10.254.114.7 netmask 255.255.0.0 mtusize 1500 trusted -wins up   <--- this interface is already in the vif, so this can't work
ifconfig e0c `hostname`-e0b mediatype auto flowcontrol send   <--- you can't, and don't need to, assign IPs to interfaces that are already in a vif
ifconfig e0d `hostname`-e0c mediatype auto flowcontrol send   <---
ifconfig e0e `hostname`-e0d mediatype auto flowcontrol send   <---

The "hostname" in backticks does a hostname lookup (it looks in /etc/hosts for isnldnn07-e0N) to get an IP to assign to the interface, which you don't need. You shouldn't be trying to assign IP addresses to interfaces which are already in a vif. Just run:

ifconfig e0b flowcontrol send mtusize 9000

for whichever interfaces you want in your vif with MTU 9000. After you set some basics on your interfaces, create the vif:

vif create vif0a e0b e0c e0d

It really seems like you can't decide which interfaces are supposed to be in this vif; it looks like you just edited "e0a" into "vif0a". You can call a vif interface whatever you want, up to a certain length.

This is broken too:

ifconfig vif0a `hostname`-vif0a 10.254.114.8 netmask 255.255.0.0 mtusize 9000 trusted up

Again, you don't need the `hostname` part if you already have the IP there (that is a pretty big netmask too, by the way: 65k hosts in your net?).
You really need to decide which interfaces are going to be in the vif and concentrate on getting the rc file exactly right. The "routed on" is probably not needed in 99% of cases, especially if you only have one gateway and no RIP services in your net; if you are running a cluster (HA) setup it gets disabled anyway. You can normally safely just leave it out. More often than not, it just causes strange problems in nets with crappy old RIP services running.

There is a lot of brokenness here. I'd like to hand out reading glasses to some of the others that have looked at this... If you don't understand something, feel free to ask. Just remember: set up the basic interfaces, create the vif, create any vlans you need, assign your IPs to the necessary "logical" interfaces, set up your default route, done. S.
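To put it all together, here is a minimal sketch of what a correct /etc/rc ordering might look like, assuming e0b/e0c/e0d are the vif members and keeping your IP and netmask. The vif mode ("multi"), the gateway address, and anything else not quoted from your file are illustrative guesses, not taken from your setup:

```
hostname isnldnn07
# per-interface basics first -- no IP addresses on vif member interfaces
ifconfig e0b flowcontrol send mtusize 9000
ifconfig e0c flowcontrol send mtusize 9000
ifconfig e0d flowcontrol send mtusize 9000
# create the vif from those interfaces ("multi" mode is an assumption here)
vif create multi vif0a e0b e0c e0d
# assign the IP to the logical (vif) interface only
ifconfig vif0a 10.254.114.8 netmask 255.255.0.0 mtusize 9000 up
# default route last (gateway address is illustrative)
route add default 10.254.0.1 1
```

Note the order: physical interfaces, then the vif, then the IP on the logical interface, then routing.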
Hi, I guess the easy answer is just to call your local NetApp rep. I'm sure they can get you a card. The reboot is necessary probably for software/firmware changes to the onboard "card" and perhaps to initiate some driver subsystems. You are sort of adding a new card and for most any OS, that means attaching drivers. This is all just a guesstimate from the outside. I guess this is something that could change over time with the ONTap 8.x kernel assuming one really felt the need to develop on-board hardware that could be powered down. It's not a very common task, but sort of a pain when it has to be done.
Hi, I guess there is a lot that I don't know about the setup, but is there any special reason why you think that segregating the FC links to your SAN storage will give you better performance than just allowing all traffic over all of the links? If you want to prioritize I/O on a more planned basis, you can take a look at the 'priority' command, aka "FlexShare" in marketing-speak. That would give you a bit more fine-grained control over what the NetApp prioritizes on the front side. I'm not yet very well versed in vSeries and I don't know whether using 4 links will give you a general boost or not; I guess that is something one can look up easily enough. Hope this helps.
Hi, I guess it would be easier to give advice if you could explain what it is you are attempting to achieve with the segregation of paths to your SAN storage. There might just be a different method to achieve the same results.
Hi, I guess all of this sort of has to start with what you intend to do with the "second" controller. Have you bought additional disk shelves for it? Are you planning to move CIFS/NFS shares over to it?

Anyway, attaching the existing shelves to one of the controllers isn't any different from any other upgrade. You basically need the same ONTap version and a fresh install on the root volume once things first come up. Disks need to be assigned to the new controller head during the first boot, and you probably should run a diag kernel before even getting that far. How you split up your shelves or use the second controller depends a lot on what you intend to do with it. Assigning a couple of disks for a quick raid4 root aggregate+volume for the new controller isn't really that big of an issue, but it is sort of non-standard. Even if software disk ownership is now the standard, the concept of keeping disk shelves physically segregated via cabling is still the recommended norm (although I have seen some interesting variations that work). So it all comes down to what you want to do first; then one can help you with the "how".
hehe... well, because you can't. The disks are assigned to a controller, and disk operations are done by that controller until a failover operation occurs. It would seem you need to crack open some documentation or take a class or two. Good luck.
And a new tip it is... considering that the TR was issued 16 months ago... I don't know how widely "FlexShare" is in use, but I have found it to be very useful even with less loaded systems. Without actually having done a thorough scientific analysis, I find that it sort of evens out the bumps and spikes on the I/O "road". It adds a nice complexity to I/O scheduling that works very well for multi-purpose (the marketing droids call it "unified storage") filers. I've advised its use here in the communities a few times to solve performance situations. Anyway, good to see that someone at NetApp is sacrificing their Sunday.
Well, I'm no huge fan of using FilerView. I have no idea what you bought or whether all of the disks are assigned or not. What the filer is telling you is what the filer knows; whatever else you know, you still haven't stated. You currently have one spare disk on that controller. "Maxed out" doesn't seem to be the correct term... apparently "used up"...
Yes, and the system was right. In your list of disks, there is only one spare. Now that we have established the facts, why is this fact so confusing for you?
How about you add a little more information... None of us has a crystal ball. The first idea that comes to mind is that the system really does just have one spare disk ... or at least of the type that the system wants to use.
I guess it might help if you just paste the whole (sanitized) command line that you are using to start the baseline initialization, along with the output of 'snapvault status' and perhaps 'options snapvault'. Any relevant entries from /etc/log/snapmirror might help too... I can't really see enough information to give very useful tips at the moment.
Hi, You can get a simple setup working by following the instructions in the documentation here: https://now.netapp.com/NOW/knowledge/docs/ontap/rel7351/html/ontap/tapebkup/frameset.html You don't really need backup software per se, if your setup is small and you want to script it yourself, but it won't be much fun to restore. It will, however, allow you to see if the filer sees the tape drive and if things work on a fundamental level. You haven't mentioned much about your physical setup, so it is a little difficult to comment on much more.
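As a rough sketch of the script-it-yourself route (the device name and volume are illustrative guesses; check what the filer actually detected first):

```
# list the tape devices the filer sees
sysconfig -t
# level-0 dump of a volume to the first no-rewind tape device
dump 0f nrst0a /vol/vol1
# list the contents of the tape to verify the dump landed
restore tf nrst0a
```

The docs linked above cover the device naming scheme (rewind/no-rewind, compression variants) and incremental dump levels.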
Hi, As others have tried to explain, snap reserve actually does "reserve" space for snapshots out of the total size of the volume. Snapshot usage can grow beyond the reserve limit, but the active filesystem can't grow into the space reserved for snapshots. You either have a lot of file turnover, or you have inadvertently added and deleted a lot of files during your migration or whatever you have been doing on that filesystem of late. 'snap list -V <volname>' (on the cli or however you prefer) will show you a bit more about which snapshot is using the space, and 'snap delete -V <volname> <snapshotname>' will remove a snapshot if you need to get rid of an "unneeded" one. As others have pointed out, you can set up some triggers to do some of this "automagically", but you may get the negative side-effect that the system chooses to delete or grow when you really didn't want it to. The system can't guess what you want; it just does what you tell it, of course. Good luck.
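In practice the sequence might look something like this (the volume and snapshot names are illustrative):

```
snap reserve vol1               # show the current reserve percentage
df -h vol1                      # active filesystem vs. snapshot usage
snap list -V vol1               # see which snapshots are holding the space
snap delete -V vol1 nightly.7   # remove a snapshot you no longer need
```

Running 'df -h' again afterwards should show the snapshot space freed up.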
Hi, Have you added the IP/hostname of the filer to the snapvault access list (or wherever it lives on the Windows side; it's in the docs) on the W2k3 server? If you get messages that the transfer is not initially successful, then you probably have a problem contacting the server at some relatively immediate level. Snapvault logs to /etc/log/snapmirror; you might find more info there. There are also a few knobs for increasing the log levels on the server side. Good luck.
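For reference, the filer-side knobs look like this (the server hostname is illustrative; the corresponding access list on the OSSV/Windows side is described in the OSSV docs):

```
# on the filer (secondary)
options snapvault.enable on
options snapvault.access host=w2k3server1
```

Both ends have to allow the other before the baseline transfer will start.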
Hi, The frameset URL is a little too unspecific, unfortunately, if you can't click down to the CIFS section on "Managing CIFS Services". The more exact URL is: http://now.netapp.com/NOW/knowledge/docs/ontap/rel7351/html/ontap/onlinebk/frameset.html

About this task: Data ONTAP automatically sends a message to connected users after you enter the cifs terminate command. However, if you want to send a message without stopping CIFS service, for example, to tell users to close all files, you can use Server Manager or the Data ONTAP command line to send a message. Some clients might not receive broadcast messages. The following limitations and prerequisites apply to this feature:

- Windows 95 and Windows for Workgroups clients must have the WinPopup program configured.
- Windows 2003 and Windows XP Service Pack 2 clients must have the messenger service enabled. (By default, it is disabled.)
- Messages to users can only be seen by Windows clients connected using NetBIOS over TCP.
Hi, Could you just post the output from 'df -h' from the filer cli? (or some equivalent operation). I am still not exactly sure what you are asking. You might also find the answer to your question (and many others) here: http://now.netapp.com/NOW/knowledge/docs/ontap/rel7351/html/ontap/onlinebk/frameset.html
Hi, You might want to take a look at this info about CIFS broadcasts. There are some limitations: http://now.netapp.com/NOW/knowledge/docs/ontap/rel7351/html/ontap/filesag/frameset.html
Hi, I guess if you are only interested in volume snapmirror, the schedule looks ok except for one caveat: you probably should add a "-" (dash/hyphen) between the destination volume and the time schedule (I'm not 100% sure it is required, but I seem to remember it is, and the manpage examples always include it), i.e.:

srcfiler:vol1 desfiler:vol1 - 0,15,30,45 * * *
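For reference, the fields in /etc/snapmirror.conf are source, destination, arguments, and a cron-style schedule (minute, hour, day-of-month, day-of-week), so with default arguments the entry reads:

```
# source        destination     args   minute     hour  dom  dow
srcfiler:vol1   desfiler:vol1   -      0,15,30,45 *     *    *
```

The "-" in the arguments field just means "no throttle or other options"; the four schedule fields after it give you an update every 15 minutes.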
Hi, You can set up your DFM server (or HPOV or whatever else you might have) to accept snmp traps from your filers, then set up DFM to fire off an email or pager message. The lag you are seeing is often because it tries a few times to reach the filer before it declares it dead (or some DFM bugs...). You may also have lags in your mail system that won't be helped even if you send your traps to the DFM server. All of this is documented in the DFM documentation. Hope this helps a little.
Hi, If you have a chance to generate an NMI-panic via the "RLM" (can't remember what the new name on the 32xx series is at the moment) then you can get a coredump to send along with your case and then this should get cleared up in a much more concrete way by engineering. It will "crash" the filer "on purpose" to get a stateful coredump of what is going on. If you have a cluster, it will failover and you could, in theory, do this in a lightly loaded production environment if the host timeouts are set correctly. YMMV (your mileage may vary 😉 ) Good Luck.
Hi, This might seem counter-intuitive, but the sizes of the destination volume for volume snapmirror will always "look" the same if the destination is of equal size (+ a few MB) or larger, no matter what the actual size is. You need to use vol size -b (if I remember correctly) to get the actual size of the destination volume. You can always resize the destination volume manually, but if you go below the source volume size (or adjust the source to be larger than the destination) the mirror won't update. So if you want to adjust the test volume down a bit, then just 'vol size testvol 112m' and you're done.
Hi, You might want to take a look at "FlexShare" (if I didn't get the marketing gobbledygook wrong), i.e. the "priority" command, as a good stopgap measure (the manpage is an ok place to start). It might even just solve the problem for you. There are a couple of TRs (3459 is one) on this as well. Basically, this lets you set I/O priorities per volume on a global scale. You don't need any extra licenses or anything. A quick run-through, depending on your setup, would be:

1. Get an overview of expected I/O priorities for all of your volumes on both cluster members. I used a quick Excel table with volume, description, level, system, and cache columns.
2. Fill in the Excel table. You can then parse that into some simple command lists/scripts to implement this.
3. Enable priority globally with 'priority on' on both filer heads (controllers).
4. Enable priority for all volumes simply with 'priority set volume <volname> service=on'. At this point you haven't really changed a lot, but the I/O will be a bit more "even". You might want to try some quieter point in the day to do this. All of the volume-level and system priorities are set to "Medium" at this point (the default).
5. If you really just want to address the NDMP backup problem, go through and set the "system" priority to "Low" on the volumes that use NDMP for backup. Then NDMP backup should get its I/O prioritized lower than normal user/volume access.

The rest is just a matter of knowing which volumes need higher/lower access. Remember, the system priority is still relative to the volume priority, so a volume with volume=High and system=Low will still get its I/O prioritized above a volume with volume=Medium and system=Low. Priority is still smart enough to avoid I/O starvation of volumes with lower priorities (as far as I've seen). You should be able to plan and implement this in a few hours. The support case will definitely take infinitely longer, unfortunately.
Your backups may well take a lot longer now. You will probably need to track how things are going and look for errors. I've implemented this on some pretty heavily loaded filers in production without noticeable problems. Read the TR a few times until you get the hang of it. The FlashCache TR (3832) will also tell you how to get better performance out of PAM cards (if you have any) by toggling the "cache" setting. Good luck.
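Condensed into commands, the enable-then-demote part of the steps above might look like this (the volume names are illustrative, and you'd repeat it on both heads):

```
priority on
# service on, default Medium volume/system priorities
priority set volume vol_cifs service=on
# lower the background/system (incl. NDMP) priority on backed-up volumes
priority set volume vol_ndmp service=on system=Low
# verify the settings took effect
priority show volume -v vol_ndmp
```

Check 'priority show' output against your Excel plan before and after so you know exactly what changed.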
Hi, I think you need to be careful not to confuse the maximum number of snapshots that SMVI will do simultaneously and the maximum that can be retained on a volume. Assuming your datastores are set up to one datastore per volume, then you can retain up to 255 snapshots (255 per volume). That number has been pretty constant for many years. I don't think it has changed in ONTap 8.x (yet).
Hi, I guess I am missing something. What is "Use Command Line" tool? If you want to just login to the filer, just use telnet or ssh with the "root" user (at least to start with). Please add some additional information so that we don't have to guess at what you mean.