Try 'exportfs -c 10.10.8.63 /vol/vol0/soft ro'. Also try reading the exportfs manpage. I assume you are mounting the share as root. Also try turning on the 'nfs.mountd.trace' option on the CLI and watching the messages file.
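A minimal sketch of those checks on the filer console, assuming 7-mode (the client IP and export path are just the ones from this thread):

    filer> exportfs -c 10.10.8.63 /vol/vol0/soft ro    # does this client currently have ro access?
    filer> options nfs.mountd.trace on                 # log mount requests to the messages file
    filer> rdfile /etc/messages                        # watch for the mountd trace entries
    filer> options nfs.mountd.trace off                # turn tracing back off when you are done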
It looks like you have some basic concepts confused somewhere. SnapDrive is basically trying to help you do a few things "automagically" that you can easily do by hand on the CLI. Read up on the "vol" and "lun" commands. There should be no reason to copy things when clones can easily be made. Why your snapshots are not "consistent" is also probably a matter of reading a bit more and finding out where the configuration mistakes are.
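For example, a writable copy of a volume can be made from a snapshot in seconds without copying any data. This is only a sketch with made-up volume and snapshot names, and it assumes a FlexClone license is installed:

    filer> snap create dbvol clone_base                              # snapshot to use as the clone base
    filer> vol clone create dbvol_clone -s none -b dbvol clone_base  # clone backed by that snapshot, no space guarantee
    filer> lun show                                                  # LUNs inside the clone appear under /vol/dbvol_clone/...

Iirc the LUNs in the clone come up unmapped, so map them to the igroup you want before the host can use them.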
It's a shame you didn't get the CIFS license with the filer when you bought it. Using Samba on Linux in VMware with an NFS mount is just a big can of worms that will probably come back to bite you eventually.... Have fun...
Hi, Perhaps I'm missing something obvious here, but you basically authenticate to a "domain". All of the domain controllers in the same domain have the same information. You can set up a "preferred" controller, but the results should largely be the same, modulo some 2008 functionality. The connection to the current domain controller won't be broken without a reason. The CLI gives you the option to reset this connection; give the manpages a read. Even with all of this, it is still just a "preferred DC", so there is no absolutely deterministic behaviour there. If you don't want to use the 2003 controller, turn it off.
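If this is 7-mode, the relevant commands look roughly like this (domain name and address are placeholders; the cifs manpages have the exact syntax for your release):

    filer> cifs prefdc add MYDOMAIN 192.168.10.5    # register a preferred DC for the domain
    filer> cifs domaininfo                          # which DC is the filer currently using?
    filer> cifs resetdc                             # drop the current DC connection and rediscover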
Hi, It is a bit of a shame that you already assigned all of the new disks to controller 1 and already built an aggregate on them. This would all have been much easier without that. Fundamentally, you just need to offline aggr1 and reassign the disks to controller 2 (unassign, assign -f, whatever)... Controller 2 will discover the disks as a "foreign" aggregate (there'll be some noise until you get all of them assigned), which you will need to manually online. The contained volumes will all still be there. CIFS shares and NFS exports and such won't be. You might want to move your "vol0" volume over to the 2TB disks as well to free up 4 of the 400GB disks. Moving the "root" volume is a simple task described in the docs (ndmpcopy is your friend). After that you can kill the old "vol0"/root volume and the 3-disk "aggr0" and re-assign those disks to controller 1. That's the job in a nutshell.
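A rough sketch of the reassignment and the root volume move, assuming 7-mode and made-up disk/volume names (double-check ownership with 'disk show -v' first; unassigning may also want 'priv set advanced' and disk auto-assignment turned off):

    ctrl1> aggr offline aggr1                # take the new aggregate offline on controller 1
    ctrl1> disk assign 0c.32 -s unowned -f   # release each of the new disks (repeat per disk)
    ctrl2> disk assign 0c.32                 # claim them on controller 2 (repeat per disk)
    ctrl2> aggr status                       # aggr1 shows up as a foreign, offline aggregate
    ctrl2> aggr online aggr1                 # volumes reappear; shares/exports must be recreated

    ctrl1> ndmpcopy /vol/vol0 /vol/newroot   # copy the root volume onto the 2TB aggregate
    ctrl1> vol options newroot root          # mark the copy as root, then reboot to switch over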
The answer would be "no". Be prepared to open your wallet. NetApp also sells encryption hardware that you can use in a number of different ways, including for SAN setups.
Hi, Another hackish way of doing this is just to "move" the volume. The "tries = 0" stuff has not always worked. My hack works like this. If your OSSV destination is, for example, /vol/vol1/my_windoze_server_c_drive, then:
1) 'vol rename vol1 vol1_temp'
2) 'vol create vol1 aggrN 2g'
3) 'snapvault stop /vol/vol1/my_windoze_server_c_drive' (you will get an error message about the qtree already being gone, but that doesn't matter; the configuration is effectively erased)
4) get rid of the dummy volume: 'vol offline vol1' and 'vol destroy vol1'
5) 'vol rename vol1_temp vol1'
6) 'snapvault snap unsched vol1' and confirm.
Now you have a qtree that snapvault no longer cares anything about. Whatever CIFS shares you had will have followed the volume name changes, so no worries there.
The internal "wiki" link isn't terribly helpful for those of us who aren't wearing the shirts with the embroidered "N" on them (i.e. non-NetApp employees)...
Hi, Basically, you really don't need any of the fancy configuration options everyone is so eager to get you to use here. I use all sorts of dedicated nets, etc., and have never needed any of this. The first error you received was simply because you didn't use the destination hostname that the filer itself uses. Basically, you just need to use the normal hostname on the destination and the "rep" hostname on the source. If your routing is correct (i.e. they are on the same subnet, or other "net" routes are set up correctly) it will use the correct interface to contact the source filer. This is just how routing works. This snapmirror.conf entry should work with the nets you want:

    source_filer-rep:vol1 destination_filer:vol1 - 15 * * 1,2,3,4,5

No other complicated magic is needed. Just set the correct source filer hostname and make sure that the routing causes the destination filer to use your "*_rep" interface to set up the connection. No complicated configuration necessary. Sorry you had to witness so much guessing.
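The only supporting piece is making sure the "-rep" name resolves to the source filer's replication interface on the destination side. For example (the address here is only a placeholder):

    dst> wrfile -a /etc/hosts "10.10.20.11 source_filer-rep"   # placeholder IP of the source's replication interface
    dst> ping source_filer-rep                                 # should answer over the replication net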
Hi, Well, I enabled this about a week ago and have noticed some higher peaks for outbound data, but nothing really miraculous. Like Michael says, there are (unnecessary) risks involved here if the remote pools get into trouble somehow. We actually had an incident the following day where a disk failure basically turned the filer into a brick for about a minute while things tried to sort themselves out. This is probably only tangential to some other problems with disk maintenance and such, i.e. we provoked it faster with the change, but we have seen this on disk failure before. Anyway, it's an ongoing case and we'll see what the great minds can discover...

I have running graphs for our ISLs (mrtg still thinks 8Gbit/s is 532MB/s though) and we do spike quite a bit on reconstruction events. I have always wondered if we should implement TI on the SAN switches and separate disk and FC-VI traffic, but no one at NetApp has ever managed to give me a straight answer, and implementing it on a running system (that has had far too many other problems already) isn't something my local NetApp people seem to want to help with either. We set this up mid-year 2009, when TI was still sort of a "cool thing to have" and the MetroCluster documentation was already such a mess that it was enough just to get things set up according to the revised settings I had to squeeze out of engineering.

We've had problems like this before as well, with remote plexes playing an inordinate role in the general health of the filers. I/O requests from secondary consumers (in our case NFS and iSCSI) will just queue as long as the situation with the remote plexes doesn't allow write completion, and you end up with a lot of disk time-outs for your SAN hosts and things really go south. The path selection algorithm for the backend fabric seems to be much more primitive than most anything on the market, except perhaps early VMware and Windows.

Thanks everyone for the advice, in any case. It is sadly disheartening to have to fight with the dark sides of MetroClusters... probably even more depressing to have to fight with NetApp support, but these are the conditions we are sort of forced to live with...
No chance you are just hitting problems with backup and virus scanning? If you actually look at 'sis status -l' you will see how long the actual sis processes run. I don't think sis is doing this to a 3240... 🙂
Hi, I had a little discussion with a colleague this morning who had started using this setting on a stretch MetroCluster at another location, but hadn't measured the performance difference. He sort of thought that it should improve things, because not all the reads come from the same plex and the plexes are very closely co-located. I have a fabric-attached MetroCluster with probably 800m of fibre distance and wondered if anyone has any documented experience or recommendations on the use of "alternate" vs. "local" as far as read performance goes for fabric-attached MetroClusters. There are times when I could use a bit better read performance...
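For reference, I assume the setting in question is the plex read preference option. From memory (so check the options manpage), it is toggled like this:

    filer> options raid.mirror_read_plex_pref              # show the current value (local or alternate)
    filer> options raid.mirror_read_plex_pref alternate    # spread reads across both plexes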
I'm not sure what ZFS raid solutions are going to get you except more expensive storage. If you mirrored to 2 filers, then perhaps a bit more resilience, but that won't be cheap either. ZFS was not a terribly good idea for Oracle a couple of years ago, and I haven't followed along enough to know if that has changed much. It uses a ton of memory and is nowhere near as mature as UFS or VxFS or even NFS, but it has a lot of hype behind it. ZFS snapshots were pretty lame and mirroring was 'scp', more or less. Not exactly an enterprise solution. Not sure how well any of this works with SnapDrive... If you don't use the EFI lun type, you are probably going to beat up your filers as well with a lot of extra I/O.
The options manpage should be a logical reference point for such questions:

    ndmpd.tcpnodelay.enable
        Enables/Disables the TCPNODELAY configuration parameter for the socket between the storage system and the DMA. When set to true, the Nagle algorithm is disabled and small packets are sent immediately rather than held and bundled with other small packets. This optimizes the system for response time rather than throughput. The default value is false. This option becomes active when the next NDMP session starts. Existing sessions are unaffected. This option is persistent across reboots.

You are a NetApp employee? ... Google will help you with the general socket option and the effects of tcpnodelay...
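Setting it is just one command on the filer (per the text above, it takes effect for the next NDMP session):

    filer> options ndmpd.tcpnodelay.enable true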
If you previously had data on the LUN, then I believe you will need to run space reclamation before you can shrink the LUN. There is also, iirc, a limit to how far (in percent) you can reduce a filesystem with SnapDrive, and shrinking is only supported on Windows 2008, iirc. I'm no wizard on the Windows add-on software, so perhaps you will get a better answer soon. At least everyone now has a better idea of what you are actually having problems with. Always be as specific as possible with your questions and answers.
What software and command is giving you the error message? Answering "its empty" to the question "What is 'It' [sic]?" is just not helpful.
If you read what the original poster stated (which is also documented elsewhere), upgrades of SATA shelf firmware can take up to 70 seconds and are disruptive if you don't have the newest firmware and your shelves multi-pathed. You obviously did not read the precautions, and your server disk time-outs are set too low to tolerate shelf firmware upgrades which disrupt I/O. Increase your disk/device timeouts on the server side and read the documents on firmware upgrades.
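Before planning the upgrade window, it is worth checking what the shelves are currently running; on the filer (7-mode), something like:

    filer> sysconfig -v    # lists shelves and their current firmware versions per adapter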
Hi,
1. Routing "in" to your iscsi subnet is a matter of the default gateway for that subnet talking to your iscsi interface. If the router gets a packet for a locally connected subnet, it will send it to the IP in that net. This has nothing to do with the filer settings. The return packets will exit out the filer interface that has the default route and you will get "asymmetric routing". You will have to blackhole traffic into the iscsi net on the router to stop this.
2. Your traceroute example should only work if "options ip.match_any_ifaddr" is set to "on". Turning this off will stop traffic from going out interfaces with an IP that isn't the same as the traffic source.
3. If you want to stop iscsi on interfaces that you don't want to offer iscsi on, then disable iscsi on those interfaces with "iscsi interface disable <interface>". A short sketch of points 2 and 3 follows below.
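A minimal sketch on the filer console (the interface name is just an example):

    filer> options ip.match_any_ifaddr off    # per point 2 above
    filer> iscsi interface show               # where is iscsi currently enabled?
    filer> iscsi interface disable e0a        # stop offering iscsi on interfaces you don't want it on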
Hi, I guess I would run 32-bit aggregates and just VSM things over. There is talk of plans for a 32-bit to 64-bit in-place migration of aggregates later in the 8.x tree that could save us all a little work. I guess if you are using extremely large disks then this isn't the most attractive idea either, but unless you can shuffle some disks/aggregates from some site closer to your sources, there aren't a lot of options besides off-lining your source data for some hours at a time.
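By VSM I just mean plain volume SnapMirror; a sketch with placeholder filer/volume names, assuming the destination volume already exists:

    dst_filer> vol restrict dst_vol                                          # destination must be restricted for VSM
    dst_filer> snapmirror initialize -S src_filer:src_vol dst_filer:dst_vol  # baseline transfer
    dst_filer> snapmirror status                                             # watch progress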
Hi, What is the output of 'vif status vif_fas02'? What is the output on the Cisco switch of 'sh interface status po10' (iirc)? 3750s have often needed the etherchannel mode forced to "on" instead of "desirable". My memory fails me, but setting vlans on Po interfaces might need to be done on the physical interfaces... You might try 'sh trunk' and see which vlans are active and forwarding as well. That's all I can think of at the moment... haven't been a net admin for a few years now. Hope it helps.
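From (rusty) memory, the Cisco side would be along these lines; port-channel, interface, and VLAN numbers are just examples, so verify against your own config:

    switch# show etherchannel 10 summary                       ! are the member ports bundled into Po10?
    switch# show interfaces status | include Po10
    switch(config)# interface range gigabitEthernet 1/0/1 - 2
    switch(config-if-range)# channel-group 10 mode on          ! force the channel on instead of "desirable"
    switch(config-if-range)# switchport trunk allowed vlan 10,20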
Hi, Try running 'environment status shelf 0a' and 0c ... or any loop that you have problems on, and read the output carefully. Make sure that you have up-to-date shelf firmware and no failed disks in the system(s) ('aggr status -f'). Also make sure that your shelves are cabled according to the documentation. More complex problems can be dealt with after you have gone through this information.
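Something along these lines, assuming 7-mode (the adapters are just the ones mentioned above):

    filer> environment status shelf 0a    # shelf modules, temperatures, firmware for loop 0a
    filer> environment status shelf 0c    # same for loop 0c
    filer> aggr status -f                 # any failed disks?
    filer> sysconfig -v                   # shelf and disk firmware versions per adapter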
Hi, I'm still a little confused here. Basically, your FC HBAs require a basic driver. I assume that Linux has those kernel drivers and perhaps QLogic has some proprietary drivers. This alone, however, is not going to make multipathing work. Multipathing requires some sort of a daemon to monitor at least some basic functionality of the "paths" between server and storage, just to know whether they are working paths or not. I have no clue on the present status of multipathing in the Linux kernel, but there seems to be some evidence on the net that it works. It has nothing directly to do with NetApp per se, although you will probably need to download the FCP Host Utilities for Linux (Red Hat) and read the instructions to get it to work. It would seem a quick search on Google would have gotten you this much... Perhaps this will help: http://thjones2.posterous.com/netapp-as-block-storage-fibrechannel-for-redh Good luck.
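For what it's worth, on a reasonably current RHEL box the native dm-multipath stack is set up roughly like this; only a sketch, and the Host Utilities documentation has the NetApp-specific multipath.conf settings:

    # yum install device-mapper-multipath     # the multipath tools and multipathd daemon
    # mpathconf --enable --with_multipathd y  # RHEL 6+: write a default /etc/multipath.conf and start multipathd
    # multipath -ll                           # list the multipath devices and the paths behind each
    # sanlun lun show -p                      # NetApp view of LUNs and paths (comes with the Host Utilities)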