I cannot give you the NetApp link because, after the KB was moved to a new platform, all my bookmarked links became invalid and nothing can be found (or they cut off my access, leaving it at a very rudimentary level). But here you are: http://mysysadmindiary.blogspot.de/2015/05/removing-foreign-aggregate-on-netapp.html
"Does the above command create VLAN 710 on the NetApp?"

Not exactly - it creates a virtual (tagged) VLAN port for VLAN 710 on top of the physical NIC e0a. It will help if you take some time to learn the cDOT networking architecture and use the correct terms. Start with the Network Management Guide, chapter "Understanding the network configuration".

"Should I use dvilcdot1::> network interface migrate or migrate-all to migrate the LIFs to the new ports?"

You may need to add the new VLAN ports to the Servers broadcast domain first. Then you can migrate the LIFs using the above command; do not forget to modify the LIFs' home ports as well (see the sketch below). What I wonder about: if you have already reconfigured the switches for the new VLAN, you should have lost connectivity over the old VLAN. If you have not yet committed the switch configuration, you can do it NIC by NIC - migrate the LIFs off the first port, change the VLAN configuration on the switch, create the new VLAN port on the filer, and migrate the LIFs back to the new port. This should be fully transparent as long as the switch configuration is correct.
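A minimal sketch of that sequence, assuming ONTAP 8.3 or later (broadcast domains) and hypothetical names - node node1, new VLAN port e0a-710, broadcast domain Servers, SVM svm1 with LIF lif1:

    ::> network port vlan create -node node1 -vlan-name e0a-710
    ::> network port broadcast-domain add-ports -ipspace Default -broadcast-domain Servers -ports node1:e0a-710
    ::> network interface migrate -vserver svm1 -lif lif1 -destination-node node1 -destination-port e0a-710
    ::> network interface modify -vserver svm1 -lif lif1 -home-node node1 -home-port e0a-710

Repeat per NIC/LIF; the last command updates the home port so the LIF does not revert to the old VLAN port later.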
Unfortunately, CFT is officially not supported for entry-level systems. I have a hard time believing there are any real technical restrictions, but unless NetApp is willing to do it, your only solution is a temporary head swap from the 2240 to a midrange system, CFT to another midrange system, and a head swap back to the 2552. Maybe you can convince NetApp to loan you two HA pairs with shelves ... 🙂
Try removing the LIF associations using "network subnet remove-ranges -force-update-lif-associations true". You should then be able to modify the individual LIFs and then the subnet, associating the LIFs with the subnet again.
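A rough sequence, under the assumption that the subnet is called sub1 in the Default IPspace and the LIF is lif1 on SVM svm1 (all names, ranges and addresses are hypothetical examples):

    ::> network subnet remove-ranges -ipspace Default -subnet-name sub1 -ip-ranges "192.168.1.10-192.168.1.50" -force-update-lif-associations true
    ::> network interface modify -vserver svm1 -lif lif1 -address 192.168.1.20 -netmask 255.255.255.0
    ::> network subnet add-ranges -ipspace Default -subnet-name sub1 -ip-ranges "192.168.1.10-192.168.1.50" -force-update-lif-associations true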
You can find the maximum capacity and disk count in the HWU. You can add a new shelf online and migrate the data to a new aggregate, but you will need downtime to remove the old shelf (and may need downtime to switch access to the new copy as well).
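If this is clustered ONTAP, the online part of the migration is a volume move; a minimal sketch with hypothetical names (node1, aggr_new, SVM vs1, volume vol1):

    ::> storage aggregate create -aggregate aggr_new -node node1 -diskcount 24
    ::> volume move start -vserver vs1 -volume vol1 -destination-aggregate aggr_new
    ::> volume move show -vserver vs1 -volume vol1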
I am not aware of anyone trying to install Windows on a FAS - it would be a very expensive exercise for very little gain. As for direct shelf connection - you are on your own here, sorry. Search the Internet; you may find some reports of earlier attempts.
A DS14 is not an array - it is a dumb shelf (JBOD). You may be able to connect it to an FC HBA (there have been success reports), but you will need a software volume manager to get RAID-like functionality. I am not aware of any external RAID array that takes disks with an FC interface, and such a configuration will not be supported by NetApp. So yes, you almost certainly need a FAS to make full use of a DS14.
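For example, if the shelf's disks become visible to a Linux host behind the FC HBA, software RAID via mdadm would be one way to get that RAID-like functionality - purely illustrative, unsupported, and the device names below are hypothetical:

    # build a RAID 5 set out of four of the shelf's disks
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0
    mount /dev/md0 /mnt/ds14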
In general, 7MTT copy-based transition supports LUNs. There can be limitations depending on the target cDOT and 7MTT versions; consult the 7MTT documentation for details.
I'd say, "No route to host" is pretty self-explanatory. You need route to remote network (and remote network needs route to local site). It can be default gateway or it can be network/host specific route, but this is standard IP connectivity, nothing specific to cluster peering or cDOT in general. I am not able to create a cluster peer because all the IP's are in Default IP space So far there is no indication it is related to IP space. You may need different IP space if different destinations with the same IP addresses are reachable via different gateways. But there is not enough information to speculate about it.
It is technically possible to use 1GbE ports for the cluster interconnect, but this is not a supported configuration and you should understand that it limits the total throughput that can be achieved. Because it is not supported, the setup tool does not offer it either. For testing purposes you can build such a cluster using the CLI.
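Purely as an illustration of the CLI route (again, not supported; names and addresses are hypothetical, and the exact syntax depends on the ONTAP version - older releases use -role cluster, newer ones use service policies):

    ::> network interface create -vserver Cluster -lif node1_clus1 -role cluster -home-node node1 -home-port e0a -address 169.254.10.1 -netmask 255.255.0.0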
Well, it's possible that the new disk is faulty (it sounds like one channel does not work); it is also possible that the original fault was not in the disk but in the shelf (ESH/IOM/chassis). We do not have enough information; you should contact NetApp support and tell them that the disk replacement did not help.
What are you trying to achieve? My first reaction is "there is no point in doing it". On second thought I can imagine valid scenarios, but it is better if you explicitly describe your design plans.
Set "/" unix-permissions to something like 0711 (of course make sure owner is root) and create mninimal export-policy that only allows ro mount, but no rw, no root etc. Then nobody can list content of /, but still explicitly enter subvolumes or mount them.
Yes, you do. Clients must be able to traverse the junction tree starting from the top (i.e. "/"), which means "/" must allow at least a read-only mount. The only way to harden it is to restrict visibility of the files/directories under "/", so that even if clients mount it, they cannot see its content.
Nothing is created - rather, cDOT sees aggregates that had been created on these disks in some other filer. See KB 1013046 for the procedure to destroy such aggregates.
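If it does come to destroying them, that is typically done from the nodeshell; a rough sketch only (the aggregate name is hypothetical - follow the KB for the exact, safe procedure):

    ::> system node run -node local
    > aggr status                 # the foreign aggregate shows up here
    > aggr offline aggr_foreign
    > aggr destroy aggr_foreign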
Your question unfortunately sounds very confusing. "Cluster LIF" has a very specific meaning in cDOT - as already explained, it refers to the backend interconnect between the cluster nodes, and it must be on dedicated physical ports that cannot be shared with any other traffic type. What you actually mean is most likely the cluster management LIF. That one can share a port with any other data or management LIF of any other SVM. By default the cluster management LIF is named cluster_mgmt. Plain "cluster" is a rather poor choice because it is highly ambiguous, especially when used as an adjective ("cluster IP", "cluster LIF"). In your place I would consider renaming it, e.g. to the default cluster_mgmt, which makes it obvious what you are talking about. And find some time to make yourself familiar with the cDOT network organization and terminology 🙂
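The rename itself is a one-liner, assuming the LIF is currently named "cluster" and belongs to the admin SVM cluster1 (hypothetical names):

    ::> network interface rename -vserver cluster1 -lif cluster -newname cluster_mgmt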
Disk count is the total number of disks in both plexes, so it must be even in this case. Pool assignment is the responsibility of the administrator - the system does not enforce any restrictions on which disks are assigned to which pool. Having two shelves on the same stack in separate pools is useful and actually often the only configuration possible in a small MetroCluster. Of course the pools should be as independent as possible to provide maximum redundancy, so if you have enough equipment to distribute them over different stacks - even better. It is up to you to decide. SyncMirror is always between disks of the same node, if I understand your last question correctly; otherwise please explain what "home run" is.
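For illustration, a mirrored aggregate where -diskcount spans both plexes - assuming clustered ONTAP MetroCluster and hypothetical names (in 7-mode the equivalent would be "aggr create aggr1 -m 10"):

    ::> storage aggregate create -aggregate aggr1 -node node1 -diskcount 10 -mirror true
    # 10 disks total: 5 end up in plex0 (pool0) and 5 in plex1 (pool1)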