From 8.1 7-mode release notes: “The "Configuring iSCSI target portal groups" topic in the "iSCSI network management" chapter includes information about enabling ALUA. This topic has to be removed because you cannot enable ALUA on iSCSI groups.”
1. Source should be "irv-gdc-san1a:/vol/vol0/-" (note the "/-" suffix, which refers to the volume's non-qtree data). 2. Destination must be a qtree; you can't QSM non-qtree data onto a whole volume. For root volume migration, ndmpcopy is more than enough.
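A sketch of both approaches; hostnames, volume names and the destination qtree name are illustrative, not taken from the original poster's setup:

```
# Qtree SnapMirror: source is the non-qtree data ("/-"), destination is a qtree
# (run on the destination filer)
snapmirror initialize -S irv-gdc-san1a:/vol/vol0/- irv-gdc-san1b:/vol/newvol/vol0_copy

# Root volume migration with ndmpcopy (paths are illustrative)
ndmpcopy /vol/vol0 /vol/newroot
```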
Takeover does not copy any configuration file. Could you explain what you mean by "takeover can mess up and copy the config of the other partner instead"? Any reason you insist on having both the base vif and a VLAN? As I understand it, you are setting up a new configuration, so you could just as well use two VLANs. This is something that has been working for years.
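A minimal sketch of the two-VLAN approach in /etc/rc; the interface name, VLAN IDs and addresses are illustrative:

```
# Two tagged VLANs on one physical port instead of base interface + VLAN
vlan create e0a 10 20
ifconfig e0a-10 192.168.10.5 netmask 255.255.255.0
ifconfig e0a-20 192.168.20.5 netmask 255.255.255.0
```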
This is the first time I have ever heard about such behavior. It would be interesting to see: - netstat -an from the filer - a network trace for a connection to a port that is definitely not in LISTEN state in the above output; you could generate it on the NetApp using the pktt tool. Yes, I confirm it (tested on 8.0.2P4). Interesting. There is an undocumented option ip.tcp.limit_rsts which sounds like it could be related; but I suggest you open a case with NetApp and update this thread if you get this resolved. Message was edited by: aborzenkov
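The diagnostic steps above could look like this on the filer; the interface name and trace directory are illustrative:

```
# List all sockets and their states
netstat -an

# Capture a packet trace on e0a with pktt, dump it to a file, then stop
pktt start e0a -d /etc/crash
pktt dump e0a
pktt stop e0a
```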
No, that's not possible, with or without SnapDrive. No filesystem driver can handle the case where data on the underlying device is changed without its knowledge. You would really need a CIFS share for this; it could be replicated, and you would always get the latest content.
One more note: to use -p, the aggregate must have been created under at least 7.2. For filers with a long history that could be an issue (I have seen it).
FilerView is part of Data ONTAP until version 8.1. It is possible in principle to have a working Data ONTAP without FilerView; to make sure the system is completely and properly installed, you could perform an update to the same (or the latest) version. The RLM is a separate piece of hardware unrelated to Data ONTAP; you cannot access Data ONTAP via the RLM (only indirectly, by connecting to the console). You need to configure one of the "normal" filer interfaces. e0a is usually always present ☺
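Reinstalling the same release to repair missing components could be sketched as follows; the image filename is illustrative and the exact file depends on your platform and release:

```
# Re-install the currently running Data ONTAP release so that any
# missing files (such as the FilerView web content) are restored
software update 8.0.2P4_setup_q.exe
```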
Unfortunately, NFS does not really help as long as the VMDKs are not deleted; nor does UNMAP support in ESX 5. Here we need explicit support from the hypervisor first. I am not sure whether ESX offers any right now; I hope it is on their roadmap.
Even if you find the right IP, you still need a user/password, and if you do not have them, you need console access where you can reset the root password. The console cable is pretty standard; any Cisco console cable would do. Settings are 9600, 8 bit, no parity. Press Ctrl-C for the special boot menu and choose the option to set the root password. Or reinstall ☺ You should look for Data ONTAP manuals, not for hardware manuals.
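From a laptop the console connection could look like this; the serial device path is illustrative, and any terminal emulator at 9600 8N1 works:

```
# Open the serial console through a Cisco console cable / USB adapter
screen /dev/ttyUSB0 9600

# During boot, press Ctrl-C to enter the special boot menu,
# then choose the option to set (reset) the root password
```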
BMC network configuration is done using the "bmc" command from a running Data ONTAP (normally done as part of initial setup). Did you read the System Administration Guide?
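A sketch of checking and (re)entering the BMC settings from 7-mode; exact prompts vary by release:

```
# Show the current BMC configuration
bmc status

# BMC network parameters are normally entered during initial "setup";
# rerunning setup lets you change them
setup
```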
(BTW, the provided link is slightly broken, as two extra characters got appended to the end.) The Communities web interface is slo-o-o-o-o-w. Edited.
The BMC is an independent piece of hardware and is unrelated to Data ONTAP; it runs beside it and in parallel to it. Once more: the only network protocol supported by the BMC is SSH.
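Connecting to the BMC over the network could therefore only look like this; the IP is illustrative, and I assume the usual naroot BMC account:

```
# SSH is the only supported network protocol for the BMC
ssh naroot@192.168.1.50
```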
And, BTW, I recently noticed some KB article about DRL (Dirty Region Logging) or similar; it appears that when a disk is taken offline, Data ONTAP maintains a log of changed blocks and resyncs them later.
I have seen it in the upgrade guide for 8.0.3 (https://library.netapp.com/ecmdocs/ECMM1253884/html/upgrade/GUID-1A70BD32-D54D-443F-9E5E-C97D8E420189.html): "In Data ONTAP 8.0.2 and later releases, automatic background disk firmware updates are available for non-mirrored RAID4 aggregates, in addition to all other RAID types." Something like this must be mentioned in the release notes, but I could not find it there either. Addition: Automatic background disk firmware updates not enabled for non-mirrored RAID4 aggregates http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=594453
You also artificially reduce cache (and NVRAM) by half for the same number of disks. If anything, that will probably have a more significant impact. There is never too much cache memory …
No, that's not possible. The bare minimum is 2 disks for the root aggregate/volume. And this goes against all best practices (RAID4 without spares). Why do you want to throw away half of the computing power you have?
I am not at all convinced that having a switch would be advantageous. Right now you can use short shelf-to-shelf cables for the most part, which leaves only 4 long SAS cables for stack-to-controller. With a switch you would basically need the same long SAS cables for every shelf; this would quickly become an issue due to cable inflexibility. And ACP cabling is very simple once you understand the basic principles. My initial error was trying to memorize it instead of just understanding it ☺
As far as I understand, ESXi automatically uses multipathing for all LUNs, including the boot LUN, so in this case there is nothing to configure. As for the statement in the document you mention: they simply mean that to ensure boot failover, both HBAs must be able to access the boot LUN. VMware guides basically state the same using different words: "Multipathing to a boot LUN on active-passive arrays is not supported because the BIOS does not support multipathing and is unable to activate a standby path". In the case of IBM DS3000 and DS4000 this translates to AVT, but that is IBM-specific. In the case of NetApp with single_image cfmode you can access the LUN from any controller, so it is not an issue.