Snapshot "size" is the amount of changed or deleted data. vMotion will delete data on the source, so yes, it will cause snapshot "growth".
P.S. I mean Storage vMotion, of course. Normal vMotion works with a shared datastore, so it will not have any effect.
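To watch this growth directly, you can list snapshot space consumption before and after the migration. A clustered ONTAP sketch; svm1 and vol1 are hypothetical names:

```
volume snapshot show -vserver svm1 -volume vol1 -fields size
```

The "size" column reflects the blocks held by each snapshot, so it grows as the source data is deleted underneath them.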
The former is a configuration setting, the latter is run-time state. A reservation may not be honored, e.g. if there is not enough space in the containing aggregate.
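You can compare the two directly. A clustered ONTAP sketch; svm1 and vol1 are hypothetical, and the exact field names may vary by ONTAP release:

```
volume show -vserver svm1 -volume vol1 -fields space-guarantee,is-space-guarantee-enabled
```

If the guarantee is set but "is-space-guarantee-enabled" shows false, the reservation is configured but not currently honored.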
SnapMirror is asynchronous, so any failover will mean data loss (and with a 100 Mb/s line it will mean significant data loss). This must be a conscious decision of the administrator (actually, such a decision should normally be taken at a higher business level) after evaluating the impact of the data loss. If you want automated failover, you need to ensure synchronous replication, and that is what MetroCluster does.
That said, ONTAP does not provide any means to automate SnapMirror failover; you will need some host-based solution that monitors the systems and initiates it. I remember some support for SnapMirror in Veritas cluster, but that was long ago for 7-Mode; I am not sure what the current state is.
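Once the decision to fail over has been made, the actual step on the destination is a single break (clustered ONTAP sketch; the destination path is hypothetical):

```
snapmirror break -destination-path dst_svm:dst_vol
```

Any host-based automation essentially scripts this command plus the remounting/remapping on the hosts.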
It is not "going into a snapshot". If it is already in a snapshot, there is nothing you can do except delete the snapshot(s). If it is not in any snapshot (i.e. the qtree was created after the most recent snapshot had been taken), then deleting it will not cause it to magically move into an existing snapshot.
@COG wrote:
Broadcast domains enable you to group network ports that belong to the same layer 2 network.
What I was trying to say is that a Layer 2 network is not always a synonym for a VLAN. Today a Layer 2 network may well span multiple switches, each with different VLANs.
@COG wrote:
Also could you elaborate on this - "If they are in different IP network - this implies full scale migration, as it affects much more than just broadcast domain configuration on NetApp".
You will need to add or change the IP addresses of your SVMs, plus whatever needs to be done to make clients use the new addresses.
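In the simplest case that is one modify per LIF, plus the corresponding DNS updates. A clustered ONTAP sketch; the vserver, LIF name, and addresses are hypothetical:

```
network interface modify -vserver svm1 -lif data_lif1 -address 192.0.2.10 -netmask 255.255.255.0
```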
@COG wrote:
I am relatively new managing Netapp storage systems.
This question has very little to do with NetApp; it is mostly a networking question. The only thing you need to ask your networking team is whether the new ports will be in the same logical IP network or not. If they are in the same network, they should be added to the same broadcast domain on the NetApp side. If they are in a different IP network, this implies a full-scale migration, as it affects much more than just the broadcast domain configuration on NetApp.
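If your networking team confirms it is the same IP network, adding the new ports is a single step. A clustered ONTAP sketch; the broadcast domain name, node names, and ports are hypothetical:

```
network port broadcast-domain add-ports -broadcast-domain bd_data -ports newnode1:e0e,newnode2:e0e
```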
@COG wrote:
new vlans on the new core will be layer 3 vlans while the old ones are layer 2
It is absolutely unclear what this means. There is no such thing as an L3 VLAN. It may mean routing between different VLANs over L3 (IP) interfaces, or it may mean bridging different VLANs over an L3 network (VXLAN, for example).
@COG wrote:
1. Follow this route above and join the new nodes to the same cluster as the old ones but have broadcast domains and failover groups of the new nodes that are different from the those of the others in the old vlans. So failovers will be between 4 nodes instead of 10.
This implies a different IP network on the new nodes.
@COG wrote:
2. Avoid the use of vlans all together in configuring the lifs and configure lifs on top of ifgrps.
You probably misunderstand how it works. Every bit that is carried by switches belongs to some VLAN. Whether this VLAN is exposed to the connected host (tagged VLAN, VLAN interface on NetApp) or implicitly associated with a physical port (untagged VLAN, physical port/interface group on NetApp) is irrelevant. Of course, you as the NetApp admin must know how the switch is configured so you can match it on the NetApp side, but this has nothing to do with the original question - whether an interface belongs to the same IP network (and hence logical broadcast domain) or not. In the case of VXLAN, the local VLAN numbers on the two switches may be different, but both ports will still belong to one and the same broadcast domain.
To repeat my answer - you now need to update to the same version (using "software install" or "software update") to fill in the content of the root volume. If you have NFS/CIFS access, you can in principle simply unpack the installation image; that should be enough.
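In 7-Mode that would look roughly like this (the image file name is hypothetical; use the image matching your exact version):

```
software update 825_q_image.tgz -r
```

The -r flag suppresses the automatic reboot, so you can reboot at a time of your choosing.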
Assuming you performed 4(a) from the special boot menu, you now need to update to the same version to fill in the root volume content. After 4(a) the root volume is empty.
As I mentioned already, just swapping the boot media should be enough to get matching /var content. Otherwise, look at the boot media replacement document available on the support portal for each filer model. It describes how to restore the copy from the root volume.
Sorry, I still do not understand what "indexing" means. If you perform a snapmirror update after the application has been cleanly stopped and the host shut down, the copy should be good. If you are using mount points, just mount the replicas in the same place. If you are using raw devices, it depends on how the application identifies them. The simplest case is if the application scans for some signatures on the device - then it should just work. If the application is configured to use specific device names, those names will likely change, and it is application-dependent how to make it use the new devices. But from the NetApp side, a final update after the host has been shut down should normally give you an identical copy of the data.
If your nodes have a symmetrical hardware configuration (which they should have in an HA pair), there should be no problem with interfaces. I am not sure what "port destination" means. What I am not sure about is the FC target WWPNs; if you have them, you may want to record the original configuration and manually change the WWPNs after reassignment. Yet another reason to have a support case open, to get support in case something turns out wrong.
Oh, and LUN serial numbers may change as well. This has happened in the past during head swaps. Same consideration - record the original configuration (or have an ASUP ready) and compare after reassignment; fix the serial numbers if necessary.
As you are effectively moving the system to another place, you may need to get in touch with your infrastructure team so they can update their inventory (what is connected where).
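A minimal 7-Mode sketch for recording and, if needed, restoring a LUN serial number (the path and serial are hypothetical; the LUN must be offline to change its serial):

```
lun show -v                          record serial numbers before the swap
lun offline /vol/vol1/lun1
lun serial /vol/vol1/lun1 abcd1234   restore the original serial
lun online /vol/vol1/lun1
```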
So you apparently have MetroCluster, in which case takeover after a total site outage is exactly what it is intended for. Did you consider a tiebreaker to automate it? But if you insist ...
The FAS8020 does not have a separate NVRAM card; you are probably confusing the FC-VI card with it. Part of the configuration is stored in the volatile /var, which is backed up to the boot medium and the root volume. It is possible either to restore the /var backup from the new root or to simply swap the boot media between the two nodes before reassigning disks. And yes, disk reassignment must be done in maintenance mode with both controllers down. I would also clear the mailboxes in maintenance mode to be on the safe side (see any document on head swap).
You will also need to reinstall licenses from the partner node, as licenses are associated with system serial numbers, and after the disk swap each node will see the licenses of its partner. And I strongly recommend you open a support case and confirm that this will actually work, to be on the safe side.
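The maintenance-mode part could look roughly like this (the sysids are hypothetical placeholders; double-check the exact procedure with support first):

```
*> disk reassign -s <old_sysid> -d <new_sysid>
*> mailbox destroy local
*> mailbox destroy partner
```

Licenses are then re-added after boot using the license commands for your ONTAP version, with keys issued for the node's new serial number.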
Well, in this case disable cf before exchanging disks (and reboot the controllers after that to make sure they come up as single nodes). When swapping disks, do not unassign them; rather, assign them to some dummy sysid not equal to either controller's. This way you will always have an overview of which disks belong to which controller. After booting, enable cf again.
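For example, parking a disk on a dummy sysid in maintenance mode looks like this (7-Mode sketch; the disk name and dummy sysid are hypothetical):

```
*> disk assign 0a.16 -s 1234567890 -f
```

The -f forces the assignment even though the disk is already owned, which is exactly what you want while shuffling ownership.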
I do not quite understand what "indexing" means here, but to perform the switch you need to:
- stop the applications that access the old LUNs and unmount the file systems on these LUNs; if possible, shut down the hosts to make sure the LUNs are not changed any more
- perform a final snapmirror update and wait for it to complete
- break the mirrors and present the new LUNs to the hosts
After that the hosts should see exactly the same content on the new LUNs. The new LUNs will have new serial numbers (and UUIDs), so you may need to adjust your host configuration to reflect that.
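The planned cutover itself is a short command sequence on the destination (clustered ONTAP sketch; the destination path is hypothetical):

```
snapmirror update -destination-path dst_svm:dst_vol
snapmirror quiesce -destination-path dst_svm:dst_vol
snapmirror break -destination-path dst_svm:dst_vol
```

followed by mapping the new LUNs to the appropriate igroups.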
Is it an HA pair? In that case both controllers are identical - what is the reason to do it? Or do you have two independent single-node filers?
I am pretty positive that a multi-mode ifgrp won't work (or rather, it will impact the FCoE part). Single-mode may work, but it does not really offer you more than normal LIF failover does anyway.
OK, I see what you mean. But you cannot really replace a FAS2020 controller with anything else - you would need to replace the whole enclosure together with the HDDs. So I sort of presumed this was about controller replacement.
@maffo wrote:
Compact Flash ... moving it it's most likely impossible.
Come on, of course it is possible. Unfortunately, after NetApp removed the FAS2000 documentation from the portal I cannot provide a link to prove it, but as an internal you should still be able to find it (I have a local copy).
@maffo wrote:
you need to make sure the new controller runs the same version of ONTAP in the same mode.
Just move the boot device from the old to the new controller; that's the most straightforward way.
Then simply follow the controller replacement procedure available on the support portal. In a nutshell: replace the controller, boot into maintenance mode, and reassign the disks to the new sysid. The FAS2020 runs 7G, so there are no issues with restoring the boot media content.
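The maintenance-mode reassignment is essentially this (sketch; the sysids are hypothetical placeholders - "disk show -a" shows current ownership so you can identify the old sysid):

```
*> disk show -a
*> disk reassign -s <old_sysid> -d <new_sysid>
```

Verify afterwards with "disk show -a" that all disks now report the new controller as owner before booting ONTAP.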