a) Data ONTAP 8.x defaults to SSH, with the other, insecure protocols (telnet, rsh) disabled. You can change the port with “options ssh.port”. This is even documented, you know … b) I do not have experience with the PS Toolkit. Try it and report back ☺ c) You really want DFM (OnCommand) Operations Manager for it. As of my latest information it is free of charge today, but you still need the DFM Server license, which is a separate zero-cost option. Do not forget that a LUN is a black box to NetApp; there is an optional DFM component, FSRM, which can generate space-utilization reports from the host side. I do not think it is free of charge, though. Correction – it is part of OnCommand Core, not an optional component, and so is immediately available. d) You need HTTPS for almost any external management component. The CLI (SSH) is indispensable. Also make sure you have configured the SP – you will need the console for any activity involving a controller reboot. Regarding space management with LUNs – read TR-3483. Let it sink in for some time and review it in case something is unclear. It provides an in-depth explanation of how NetApp manages space in the presence of LUNs (do not be confused by the title; it is applicable beyond thin provisioning). If you have Windows – use SnapDrive for snapshots.
The mailbox is used as a secondary heartbeat between the controllers in an HA pair, as well as to keep track of which controller controls the disks in case of takeover. For these purposes the systemid is used. When doing a head swap the systemid changes, so the mailbox content becomes invalid and needs to be reinitialized.
“When you assign the disks in maintenance mode, the aggregates are imported as foreign aggregates & offlined.” No. I have never seen this happen. You must be doing something entirely different. Please describe the step-by-step procedure you are following, from the very beginning – every single command invocation. “All of my migrations typically have new shelves & the systems are powered up initially to burn them in.” This starts to sound like the previously described issue. So you have the new root first; then you add the old disks. This effectively means the LUN mappings present on the old disks are lost. The correct procedure is to do a pure head swap first using the old disks, and then add the new disks online. You will get a foreign vol0, which you can now happily destroy. This preserves the complete configuration. OK, just to avoid misinterpretation if someone reads this out of context: things tied to hardware – LUN serials (tied to the sysid), HBA WWNs – will or may change. But the software configuration will not change.
Are you trying to set up a permanent DR relationship or just a one-off migration? For a single migration run I guess it could be possible to use volume-to-qtree QSM (/- => dest:/vol/new/qtree) and then use “vfiler create -r /vol/new/qtree” to recreate the vFiler on the destination. Keep in mind that the source volume must not contain any qtrees – they will not be replicated.
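For a one-off run, the idea above might look roughly like this on 7-Mode (the hostnames `src`/`dst`, the volume names and the vFiler name `vf1` are placeholders of mine, not from the original post; verify the exact syntax against your release's SnapMirror and MultiStore guides):

```
dst> vol create new aggr1 200g
dst> snapmirror initialize -S src:/vol/old/- dst:/vol/new/qtree   # volume-to-qtree QSM of the non-qtree data
  ... wait for the transfer to complete ...
dst> snapmirror break dst:/vol/new/qtree
dst> vfiler create vf1 -r /vol/new/qtree                          # recreate the vFiler from the replicated root
```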
“I've talked to NetApp support & PS engineers; they've both told me they've never seen this happen.” I have performed a couple of head swaps and I, too, have never seen this happen. So you must be doing something differently. E.g. dorseyfoga booted with the new root first, which apparently was the reason NetApp lost the existing mappings, as they are kept in the root volume. You seem to have offlined the aggregates with the LUNs before doing the head swap, so it is quite possible that NetApp removed the existing mappings because no LUNs were found when you booted for the first time after the head swap. Why did you need to offline anything in the first place? Doing a head swap is really simple: just connect the old shelves, reassign the disks in maintenance mode and boot.
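The simple procedure described above amounts to something like this in maintenance mode (the sysids are placeholders; double-check the command help for your release before running anything):

```
*> disk show -a                                  # disks still owned by the old controller's sysid
*> disk reassign -s <old_sysid> -d <new_sysid>   # hand ownership to the new head, old shelves attached
*> halt
LOADER> boot_ontap                               # normal boot; aggregates and configuration come up intact
```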
By default NetApp sends two traps (nearlyFull and Full) as volumes reach 95% and 98% respectively. This does not need custom trap creation and simply assumes SNMP is enabled. And it definitely works ☺ You can change the thresholds globally or per volume as described in https://kb.netapp.com/support/index?page=content&actp=LIST&id=S:1011645 Show the output of “snmp” and “snmp traps”. Oh, and “options snmp.” ☺
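The default behaviour can be illustrated with a short sketch (thresholds 95% and 98% as described above; the function and its name are mine for illustration, not any ONTAP API):

```python
# Illustrative only: mimics the default volume-capacity trap thresholds
# (nearlyFull at 95%, Full at 98%). Not a NetApp API.

def volume_trap(used_kb, total_kb, nearly_full_pct=95, full_pct=98):
    """Return the trap a filer would send for this utilization, if any."""
    pct = 100.0 * used_kb / total_kb
    if pct >= full_pct:
        return "Full"
    if pct >= nearly_full_pct:
        return "nearlyFull"
    return None

print(volume_trap(90, 100))   # None
print(volume_trap(96, 100))   # nearlyFull
print(volume_trap(99, 100))   # Full
```

Changing the thresholds per the KB article above corresponds to passing different `nearly_full_pct`/`full_pct` values here.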
a) The command is called “version”. Surprise ☺ b) The NetApp binary is installed on the internal boot device (CF in the past, UFM in current models). The NetApp configuration and the “spool area” are on hard disks in the root volume. The upgrade procedure is fully documented in the Upgrade Guide (and there is also Upgrade Advisor as part of ASUP). But at a high level: you place the new version in the spool area on the root volume, issue a command to unpack it, and issue a command to download the new kernel to the boot device. All three steps can be combined in a single command invocation. c) I suggest you start with the Core Commands – Quick Reference document (part of the Data ONTAP documentation package). You are correct about the LUN creation steps. LUN type is absolutely unrelated to whether you do or do not have Flash Cache. I suggest you use SnapDrive to manage LUNs; it automatically sets the correct LUN parameters.
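In 7-Mode the three steps (copy to the spool area, unpack, write to the boot device) can indeed be collapsed, roughly like this (the image filename and URL are examples of mine; consult the Upgrade Guide for your target release):

```
filer> software get http://webserver/813_q_image.tgz   # place the image in /etc/software on the root volume
filer> software update 813_q_image.tgz -r              # unpack and download to the boot device; -r skips the automatic reboot
```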
Regarding time/date configuration: on Data ONTAP 8.1RC2 the setup dialogue offers rdate and SNTP as the timed protocol. Both are wrong; 8.1 (maybe even 8.0 already) supports NTP as the only time-sync protocol. There is no way to set the time zone separately without re-setting the current time. So even if the time was correct before entering the dialogue, it will inevitably be skewed, because it is reset to the value from when the dialogue was started. Oh … and when you change the timezone, the displayed time remains unchanged. That means that when you apply it, the time is set to a completely wrong value. E.g. initially after installation the timezone is GMT and you have 17:00 GMT. You change the timezone to something else, say GMT+4; the time shown remains the same. After applying, the time is set to 17:00 GMT+4, i.e. 13:00 GMT. Effectively the time was set back by 4 hours. And please understand that not all users actually know their time-zone offset; they just select a timezone based on location.
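The arithmetic of the bug can be checked with a few lines of Python (the date is arbitrary; this just models "keep the wall-clock digits, change the label"):

```python
from datetime import datetime, timedelta, timezone

# The dialogue keeps the displayed digits ("17:00") when the timezone changes,
# so only the label moves from GMT to GMT+4.
wall = datetime(2012, 1, 1, 17, 0)

before = wall.replace(tzinfo=timezone.utc)                  # 17:00 GMT
after = wall.replace(tzinfo=timezone(timedelta(hours=4)))   # 17:00 GMT+4

# 17:00 GMT+4 is 13:00 GMT: the absolute time moved back by 4 hours.
print(after.astimezone(timezone.utc).strftime("%H:%M"))     # 13:00
print(before - after)                                       # 4:00:00
```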
Nitpicking: it is normally not possible to directly mount snapshot copies, because they are read-only, and even when mounting read-only the host needs to perform some filesystem recovery. So it is necessary to create a writeable clone (LUN clone or FlexClone).
I do not think there is any way to find this information. Even the failed-disk registry does not list the disk slot after the disk has been removed. If the filer sends ASUPs to NetApp, you can try to open a support case. ASUPs are kept for a long time, if not forever (someone from NetApp please correct me), so the support guys may be able to pull the information from a half-year-old ASUP.
No. The disk has not "become" unowned – a disk is not owned as long as it is not assigned. Ownership is a property of the physical disk, not of a specific slot. Whoever replaces the disk is responsible for assigning it to the correct controller. NetApp supports automatic disk assignment (which is the default), but it requires that the whole loop (stack) be owned by a single controller.
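The manual assignment described above boils down to a couple of commands (the disk name is an example of mine; check `disk help` on your release):

```
filer> disk show -n              # list unowned (unassigned) disks
filer> disk assign 0a.23         # assign this disk to the local controller
filer> options disk.auto_assign  # on by default; safe only when the whole loop/stack belongs to one controller
```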
Well … ASUP is sent from a single node, is it not? It does not collect ASUP from the partner and send it on, does it? So it can speak only for itself. The simple fact that it is causing confusion is an indication that it probably is … confusing ☺
Yes, if you ensure that the LUN is mapped to only one server at any given time, it is OK. But do not forget that this adds a human factor and so is more error-prone. In general, that is what a cluster is for. The NetApp NOW site provides plenty of documentation, including step-by-step guides; I do not think reproducing them here is necessary.
What makes you believe that fabric shelf attachment is supported at all? The only supported configuration with FC shelves on a fabric is Fabric MetroCluster.
a) This is unrelated to NetApp. Yes, you can map a single LUN to two servers, but then you are responsible for arbitrating access to it. Mounting the file system from two hosts simultaneously (which is not otherwise prevented) will kill your file system. W2K8R2 is better at preventing unintentional corruption, but still allows you to shoot yourself in the foot. b) You are not required to have a dedicated root aggregate. c) The two controllers are two independent hosts working in a failover cluster. You can create a VIF only within a single controller. The second controller will take over the address if configured correctly – see the “partner” option of the “ifconfig” command. d) No, NetApp does not support SAS connections from hosts.
hw_assist does not transfer any data; it is just for status checks. E.g. if you switch off external power for one head, the RLM has just enough residual capacity to send a notification about the power failure, so takeover starts immediately instead of waiting 15 seconds for the missing partner. The same applies in case a controller has a fatal hardware error that brings it down abruptly.
Plugging each e0M into a separate switch does not make it more redundant. You still have a single interface plugged into a single switch. e0M on one head is completely independent from the other. It is possible to use any address (reachable from the RLM/SP) for hw_assist, so yes, you can use a vif for it. Which still leaves you with a single RLM port plugged into a single switch. Do not overengineer. hw_assist is completely optional; it can be handy if it works, but if not, you still have the same behaviour as for years before. Everything will continue to work.
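Pointing hw_assist at a vif address is a matter of a few options (the address is an example of mine, any address reachable from the partner's RLM/SP will do; verify the option names with “options cf.hw_assist” on your release):

```
filer> options cf.hw_assist.enable on
filer> options cf.hw_assist.partner.address 10.0.0.42   # e.g. an address on the partner's vif
filer> cf hw_assist status                              # verify alerts are being received
```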