For LUNs, the security style of the volume/qtree does not matter. We use UNIX as the default as well, but it also works with NTFS. Since you don't have a share or export on that volume (I hope) and nobody can browse it, it's really no problem. -Michael
Is it still the case that you cannot get access to the FPolicy SDK easily? I remember I had to jump through hoops and still didn't get it (i.e. I had to provide estimates of how much money we would be making off products developed with the SDK, explain why we can't just use the normal Manage OnTap SDK, and the like). If the process is easier now, I might try my luck again. -Michael
Just a reminder: if your filer is not installed by someone with the appropriate NetApp certification, you'll be running in an unsupported configuration. You can make so many subtle errors during the initial configuration that might work at first but cause suboptimal performance or hard-to-diagnose problems later on; I wouldn't recommend it. Just as an example, assigning ALL disks to one controller is not possible, because the other one needs a few disks for its root aggregate. And once the disks are in an aggregate you can't get them out without wiping the filer completely, which is quite annoying if you already have data stored on it 😉 -Michael
Well, it's no big secret that NetApps are not the best storage systems if all you need is sequential streaming I/O. You can get faster streaming from a cheap RAID array. But I wouldn't recommend running VMs on that, because the few SVMotions/clones you'll do won't outweigh the typical day-to-day usage, where random I/O is predominant. NetApp filers excel at random I/O; little else matches their performance there. I have to agree with stemmer here: 220 MB/s through 2 links is pretty good (110 MB/s is the practical cap through one GbE link, not 125). -Michael
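The 110 MB/s cap follows from the wire math; a quick back-of-the-envelope sketch (assuming a standard 1500-byte MTU and plain TCP/IPv4 headers, no jumbo frames):

```shell
# 1 GbE payload ceiling: each 1500-byte MTU frame carries 1460 payload bytes
# (minus 20 bytes IP + 20 bytes TCP headers) and occupies 1538 bytes on the
# wire (plus 14 Ethernet header, 4 FCS, 8 preamble, 12 inter-frame gap)
awk 'BEGIN { printf "%.0f MB/s\n", 1e9 / 8 * 1460 / 1538 / 1e6 }'
```

That ~119 MB/s is only the theoretical TCP payload ceiling; CIFS/NFS protocol overhead eats the rest down to the 100-110 MB/s seen in practice.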
Normally you should see the serial number of the disk there. Please provide some more information: is this a FAS filer or a V-Series? Which disk shelves (DS14 or DSx24x)? Is ACP enabled? What does "sysconfig -r" show? How about "storage show disk"? -Michael
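For reference, the usual first round of disk diagnostics on a 7-mode filer looks roughly like this (console commands; output omitted):

```
sysconfig -r          # RAID layout, spares, failed disks
storage show disk -a  # per-disk vendor, model, serial number, firmware
storage show acp      # ACP connectivity status (SAS shelves only)
```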
Sounds as if snapshot autodelete threw away your baseline snapshot. You should always disable snapshot autodelete on the source volumes for QSM/SnapVault. Do "snap list <volname>" on the source and destination, and try to find a snapshot with the exact same name on both sides. I'm afraid there's nothing you can do if the snapshot is indeed lost on the source side; your only option then is to re-start the SnapVault from scratch. -Michael
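To check whether a common baseline still exists, capture the "snap list" output from both sides and intersect the snapshot names. A minimal sketch with made-up snapshot names:

```shell
# Hypothetical names pasted from "snap list <volname>" on each side
printf '%s\n' sv_base.0 hourly.0 hourly.1 | sort > source_snaps.txt
printf '%s\n' sv_base.0 weekly.0          | sort > dest_snaps.txt
# comm -12 prints only the lines common to both files (inputs must be sorted)
comm -12 source_snaps.txt dest_snaps.txt
```

Any name that shows up on both sides is a candidate baseline; if the intersection is empty, the relationship has to be re-initialized.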
You cannot have aggregates larger than 2 TB on a FAS250/270. This is a design limitation because of the small amount of RAM these boxes have. You might try to upgrade to 7.3.6 anyway, because there was a change (for the 2000 series at least) where the limit became the net size of the aggregate (i.e. what you get from "df -Ah"); with OnTap 7.0 the parity disks also counted towards this limit. So after an upgrade you might be able to squeeze two more disks into your aggregate. But in general, you should think about upgrading to a newer system 😉 -Michael
Your volume needs to be NTFS, but you still need a basic username mapping (this is currently a known bug), so you should add that too. You might also need to define an export policy. Try re-creating your vServer using System Manager 2.0.1; this generally works pretty well. -Michael
This is a debug message. Debug messages are most often only useful for (as the name implies) debugging filer-related problems, and there are lots of other debug messages that you probably don't want/need. The cleanest solution is to disable debug logging altogether unless you have a specific reason for it (e.g. you're working with NetApp Global Support on a support case). You can also re-route the "debug" messages to a different logfile (say, /etc/messages.debug); to do this, simply edit your /etc/syslog.conf file. This is what I usually do; I also don't want "info" messages spamming the ssh session every time a SnapDrive/SnapManager operation starts, for example. -Michael
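A sketch of what that /etc/syslog.conf split could look like (selectors follow the usual BSD syslog semantics, where a level matches that severity and above; check the na_syslog.conf man page for your ONTAP release):

```
# keep warnings and above in the default log, without info/debug noise
*.warning    /etc/messages
# everything, including debug, goes to a separate file
*.debug      /etc/messages.debug
```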
The number does not mean anything specific; it's simply "the higher, the worse". 23 is pretty darn high, so I'd strongly suggest you run a realloc on that volume. I've seen customers with a value of 13 or so who, after doing a realloc, observed a 2x speedup in their DB access times. The threshold is always 4; this is the point where the filer suggests doing a reallocate, i.e. 1 to 3 is good, 4 and above is bad. Hotspots tells you whether you have hotspot disks, i.e. whether your data is distributed equally across all data disks in the raid groups or not. 0 is okay in that regard (everything >4 is, again, a reason to start doing reallocs). I wouldn't worry about the time it takes or the CPU; neither is really a problem (realloc runs in the background, so other processes always get priority). -Michael
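On 7-mode the relevant commands are roughly the following (the volume name is made up; see the na_reallocate man page for the full option list):

```
reallocate measure -o /vol/dbvol   # one-shot measurement of the optimization value
reallocate start -f /vol/dbvol    # force a full reallocation of the volume
reallocate status -v              # check progress of running reallocation jobs
```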
This is normal and has to do with the way the filer calculates the util% metric (it's not the average over all disks but only the disk with the highest utilization, for example). This is only a display problem, and it is even documented (either in the man page for the sysstat command or in the admin guides, I don't remember where). You can ignore these bogus lines. However, having a disk utilization of 100% for longer periods is in itself a problem that you should look into; you might have a hotspot disk or some other problem. -Michael
Heh, if you're really short on that €4,000 then I guess you'll be in for a little shock when you see what SSD shelves cost. With support and all, one SSD shelf costs more than €100,000 (the list price is even higher). We have hundreds of NetApp customers and not a single one has SSD shelves (last I've heard, there are fewer than 5 customers in the whole of Germany who need and have SSD shelves). -Michael
Try using "set-defaults" in the boot loader (and, just to make sure, do a manual "unsetenv bootarg.init.boot_clustered" as well, as Scott suggested) This should get you back to 7-mode. -Darkstar
This is all in the man pages for the sysstat command: The CPU column shows the highest CPU utilization among all your CPUs, so it's perfectly normal that it moves to 99% from time to time. If you use sysstat -m, the ANY column shows the percentage of time (within the interval) that at least one CPU was busy (i.e. not in the idle task). Disk Util% works the same way as CPU utilization: it shows the disk with the highest utilization (i.e. if you have a hotspot disk you will continually see high values here). Cache age is the age of the data that has most recently been evicted from the cache. It is normally the age of the oldest data item in the read cache, but it can get lower when the system requires memory for other tasks (snapmirror, dedup, etc.) and needs to free some more buffers. Hope that helps. -Michael
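The two variants mentioned above, with a 5-second sampling interval (run on the filer console):

```
sysstat -m 5    # per-CPU breakdown, including the ANY column
sysstat -x 5    # extended view with CPU, Disk Util and Cache age columns
```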
Hi, this is odd. Please check your network switch logs for errors (packet drops, flapping MAC addresses, etc.), as in general SMB2 performance should be AT LEAST equal to SMB1 performance, and in almost all cases even better. Also, if you have a vif/ifgrp configured on your NetApp, check that it matches the port config on the switch (i.e. link aggregate/etherchannel/trunk if your ifgrp is of type MULTI, and NO link aggregate if your ifgrp is of type SINGLE). -Michael
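To compare both sides, something like this (the ifgrp name is made up; on 7.x releases the command is "vif status" instead, and the switch-side command shown is Cisco IOS):

```
ifgrp status ifgrp0          # on the filer: type (single/multi), member links, state
show etherchannel summary    # on a Cisco switch: channel group members and mode
```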
ACP is currently not supported with the ATTO bridges, as it doesn't understand the disk numbering (shelf/bay mapping) and gets confused. I guess it will be fixed at some point, but right now you should not use it. -Michael
ACP must not be used with the ATTO bridges. I don't have a public document ready stating that, but I have seen it multiple times in some tech presentations. The problem is that ACP gets confused by the LUN/disk numbering of the ATTO bridges and might (in extreme cases) power cycle the wrong disk. It can't hurt to NOT cable ACP; you can always add the cabling later if it turns out to be supported with 8.1 final or something. Better safe than sorry. -Michael
Well, every enterprise storage vendor sells their disks at seemingly expensive prices; NetApp is no exception there. You get what you pay for. However, I don't think that the firmware is to blame, because the newer SATA disks also use the "stock" firmware and no special NetApp firmware anymore. You could also try editing the qual_devices_v3 file (however, the entries are checksummed, so it's probably not as easy), because there you can define "disk aliases" etc. I'd strongly suggest NOT doing these kinds of experiments on a production system, because you risk losing your data on unsupported disks (and NetApp certainly won't accept any liability if you do). -Michael
You could try putting your WD disk into /etc/qual_devices to see if it works:

D WDC WD1002FAEX-00ZSS 1D05 512

Let me know if that helps. -Michael
The Seagate disks happen to work because NetApp also sells them, so they are recognized. WD disks, on the other hand, have never been used (AFAIK) in filers, and thus there's no config information for them in Data ONTAP. Needless to say, you're running without any support if you do what you're doing there, but I think you know that already 😉 -Michael
80 MB/s is quite a lot for CIFS; you'll have a hard time increasing this any further. NFS and iSCSI can hit 100-110 MB/s on a single link, but I have never seen CIFS come close to that number (even with SMB2). -Michael