Thank you, this is really useful information. Which version of Data ONTAP are you running (to exclude the possibility that this was fixed in the latest version)? Regarding spare disk selection - yes, this behavior is documented; see "How does Data ONTAP select spares for aggregate creation, aggregate addition and failed disk replacement?"
give "-anon" parameters value as '0' Be aware this can seriously break access to exported filesystem. I hit this when setting up Simpana (SnapProtect) Oracle/SAP agent that must be run under specific group. Using anon=0 will change user ID but leave group ID that for anonymous user. Using root=XXX (or -superuser in case of cDOT) fixed it.
My current information is that not every tape device is supported on a CNA; unfortunately I could not find a definitive list or support matrix. From a technical point of view, CNA does not support FC-AL (it was dropped completely from the 16G standards), so the tape device must support point-to-point mode. It is not uncommon for older tape drives to support only FC-AL. Another thing to check is connection speed: 16G SFPs do not support speeds below 4G.
> Accessing the share by IP (\\ip-address\sharename) works fine, but \\original-cifs-server-name cannot be found ("Windows cannot access ..."). From the Windows client, nslookup on the name returns the DR IP.

This usually means that Kerberos authentication fails. Using the IP falls back to NTLM, bypassing Kerberos. I think that resetting the machine password (vserver cifs password-reset) should fix it, but this will in turn block your original SVM.
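One way to confirm the Kerberos theory from the client side - a sketch using the hypothetical server name from above:

    C:\> klist purge
    C:\> klist get cifs/original-cifs-server-name
    (an error here, while \\ip-address\sharename still works, points at Kerberos)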
Is it possible that you changed flow control while the port was online? You need to restart the port for changes to take effect, as far as I remember.
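A sketch, assuming 7-Mode and a hypothetical port e0a (in cDOT the equivalent would be "network port modify" with -flowcontrol-admin):

    filer> ifconfig e0a down
    filer> ifconfig e0a flowcontrol none
    filer> ifconfig e0a up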
It was posted a few days ago ... in yet another discussion thread, instead of fixing the link in the document where everyone expects this link to be. This is exactly what I meant when saying "hunt for it all over this new community site".
Yes, each file is located inside a single constituent, that's right. So it will scale with the number of VMs (or rather VMDKs), but not for a single VMDK.
> Documentation for 8.3 will be posted soon.

WHERE will it be posted? Again in this thread? And how are we supposed to find it later? Should we now hunt for it all over this new community site? Why was it necessary to destroy a community that worked very well and create this mess? The page http://community.netapp.com/t5/Developer-Network-Articles-and-Resources/NetApp-Manageability-NM-SDK-Introduction-and-Download-Information/ta-p/86418 tells us to download documentation from http://community.netapp.com/community/interfaces_and_tools/developer/apidoc. Please fix this page and insert the valid location there.
> So are you saying if I create the snapvault from the destination and point it to the source when snapvault runs it won't do another baseline copy, as that was my concern.

There seems to be some misunderstanding. Assuming destination and source mean the SnapMirror destination and source, you do not "point SnapVault to the SnapMirror source". You create the SnapVault relationship with source == SnapMirror destination and leave it running. Because the SnapMirror destination is read-only, you cannot create a SnapVault schedule there; so you create the SnapVault schedule on the SnapMirror source and it gets replicated to the SnapMirror destination. All of this is described in the NetApp documentation.
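A 7-Mode sketch with hypothetical filer, volume and qtree names (assuming 7-Mode, since that matches the commands discussed here):

    # One-time setup on the SnapVault secondary, reading from the SnapMirror destination:
    vaultfiler> snapvault start -S mirrordst:/vol/vol1/qtree1 /vol/vault1/qtree1
    # Snapshot schedule goes on the SnapMirror SOURCE; SnapMirror replicates the
    # named snapshots to the read-only destination, where SnapVault picks them up:
    srcfiler> snapvault snap sched vol1 sv_daily 7@23
    # Transfer/retention schedule on the secondary (-x triggers the transfer):
    vaultfiler> snapvault snap sched -x vault1 sv_daily 30@23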
Sigh ... you again give different names than before, so I can only assume that CENWBFASDR02 means the same as FASDR12 in your previous post. In this case it is as I already told you - you need to set up the SnapVault snapshot schedule on FASPROD12 (taking your example). These snapshots will be replicated to FASDR12 by SnapMirror and then picked up by SnapVault.
Yes, it is possible. You would need to shut down the source filer to remove the shelves anyway, so there is no need to offline anything - halt the filer, unplug the shelves. On the destination filer you can attach the shelves online; the new aggregate will be detected as foreign and you will need to online it. After that, just resync from the source to re-create the SnapMirror relationship.
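Roughly, on the destination filer - a 7-Mode sketch with hypothetical names:

    dstfiler> aggr status                      # the moved aggregate shows up as foreign
    dstfiler> aggr online aggr_moved
    dstfiler> snapmirror resync -S srcfiler:vol1 dstfiler:vol1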
> Now what I want to do is create a snapvault relationship on the DR/destination filer of the volume NETAPP_prf2_esx_w2k12_sas_01

I'm sorry, I do not understand it. SnapVault is created between a source and a destination. What is the source filer and qtree, and what is the destination filer and qtree?
> They will get a different LUN number on one path only.

This will get interesting. They already have different LUN numbers on different paths, at least if cabled according to guidelines. This is a common source of confusion in MetroCluster configurations.

> When you say offline, how do you mean? In a Cluster failover?

No, takeover does not help here; both controllers always see both paths to the shelves. I meant really stopping both nodes.
Disks on an ATTO bridge are named <slot><port>.<loopID>L<LUN>; the <LUN> part enumerates disks starting with the shelf to which the ATTO bridge is connected, in physical order of shelf attachment (for each side, A and B). If you change the shelf order in a stack, disks get different LUN numbers. That is not a problem for an offline change; but for an online one - who knows. I would appreciate it if you could post the support response.
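To illustrate the naming with made-up values (bridge on port 1a, loop ID 126, 24-disk shelves):

    1a.126L1  ... 1a.126L24    <- first shelf behind the bridge
    1a.126L25 ... 1a.126L48    <- second shelf in the stack
    (swap the two shelves and the same physical disks get the other LUN range)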
> I run: run -node node1 -command fcp topology show ... is the command "run -node node1 -command fcp topology show" not supposed to have shown me more details?

It is supposed to show you exactly that - the switches in the fabric. You can try "fcp topology -z", which will show the ports that are zoned to the filer target ports. But I'm not sure how it interoperates with LIFs at this point (i.e., whether the nodeshell is aware of the additional WWPNs).
Yes, you can connect the new stack online. Make sure to double-check the shelf IDs before doing it; changing them after you have connected the shelves will most probably require downtime.
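To see which shelf IDs are already in use before cabling the new stack - a sketch; the nodeshell variant should also work on older releases:

    cluster1::> storage shelf show
    cluster1::> run -node node1 -command sysconfig -a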
I would avoid doing it online if possible. It will change disk names, and it has been reported to cause issues. If you absolutely must do it online, open a case with NetApp, get assurance from support, and get a step-by-step action plan. It could work if you disable the paths through the lower bridge and let the filer "forget" about them completely; but I'd double-check with NetApp.
> It's documented in the cDOT 8.3 Physical Storage Guide.

The documentation is vague. It says "The size of the partitions ... depends on the number of disks used to compose the root aggregate", but so far the root aggregate has always consisted of three disks (all disks are zeroed but only three of them are used), so this is quite a big deviation from established practice. How many disks are used now? Also, what happens with replacement disks? Is it necessary to partition them manually?
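For reference, the resulting layout can at least be inspected once a system is up - a sketch, assuming ONTAP 8.3 syntax and a hypothetical root aggregate name:

    cluster1::> storage aggregate show-status -aggregate aggr0_node1
    cluster1::> storage disk show -partition-ownership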
> 8. Access the boot menu (Ctrl+C) and select Option 4.
> 9. When the node reboots and starts zeroing disks, it will create partitions on the internal shelves

Can you choose - with or without partitions? Or will it always create partitions on internal disks? The point is, if you want to extend the aggregate with external disks, you had better have disks of the same size; or are all disks right-sized to the data partition size?

> ## LEAVE AT LEAST ONE DISK PER NODE where the DATA partition and the ROOT partition are owned by the same node ## This is required for the system to be able to write core dumps during a panic. It must own the whole disk.

Is it documented somewhere? I'd expect cDOT to refuse the operation in this case, at least without an explicit --force.
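A sketch for checking that constraint after assigning ownership (hypothetical node name; the partition-ownership view shows the root and data partition owners per disk):

    cluster1::> storage disk show -partition-ownership
    cluster1::> storage aggregate show-spare-disks -original-owner node1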