I'm currently setting up a SnapMirror relationship and noticed that my transfers were going extremely slowly. It turns out the e0M interface is only 100 Mb/s.
I have a FAS2240-4 with two controllers running Data ONTAP 8.1.4P1 in 7-Mode.
Both controllers use this setup:
e0a and e0c are used for my iSCSI VIF
e0b and e0d are used for my NFS VIF
Both VIFs are configured to use LACP
e0M is my management NIC
Right now, DNS points each controller's hostname at its management IP.
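For reference, the per-controller /etc/rc lines for that layout would look roughly like this; the ifgrp names and IP addresses are invented, only the physical ports are from my actual setup:

    # dynamic multimode (LACP) ifgrps with IP-based load balancing
    ifgrp create lacp vif_iscsi -b ip e0a e0c
    ifgrp create lacp vif_nfs -b ip e0b e0d
    ifconfig vif_iscsi 10.0.10.11 netmask 255.255.255.0 partner vif_iscsi
    ifconfig vif_nfs 10.0.20.11 netmask 255.255.255.0 partner vif_nfs
    ifconfig e0M 10.0.30.11 netmask 255.255.255.0 partner e0M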
I also have Exchange data stored on this filer, and SnapDrive on my Exchange servers is configured to point to the management interface. I'm working with support to figure out why SnapManager for Exchange (SME) won't interface with SnapMirror, but that's outside the scope of this discussion.
What are my options for getting my SnapMirror traffic onto a gigabit interface? Is it a good idea to piggyback on one of my VIFs?
This is where the documentation doesn't translate to the real world.
Here's a little more information on my setup. Perhaps you guys can tell me how you would configure it so I'm getting the most out of these systems.
e0a and e0c are connected to switch1 (2960-S); LACP is enabled and the port-channel groups are configured for my iSCSI VLAN (Exchange and file servers)
e0b and e0d are connected to switch2 (2960-S); LACP is enabled and the port-channel groups are configured for my NFS VLAN (VMware); see the switch-side sketch after this list
e0M on each controller is connected to either switch3 or switch4
switch1 and switch2 are independent of each other and have no connectivity to the public network aside from one management port
switch3 and switch4 are my internal-facing switches for VM traffic, management, etc.
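As far as I know, the switch side of each of those bundles is just an access-mode LACP port-channel in the appropriate VLAN, something like this (a sketch; the ports, channel numbers, and VLAN IDs are made up):

    ! switch1, iSCSI bundle to one controller
    interface range GigabitEthernet1/0/1 - 2
     channel-group 10 mode active
    !
    interface Port-channel10
     switchport mode access
     switchport access vlan 100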
My second filer, the SnapMirror destination, is configured the same way.
Should I talk to my network team about stacking switch1 and switch2? I could then use a 4-port LACP ifgrp across e0a-e0d and use VLAN tagging in my /etc/rc file; see the sketch below. It wouldn't take much to add my management VLAN to that switch stack.
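Something like this is what I have in mind for the stacked-switch version of /etc/rc (the VLAN IDs and addresses are invented):

    # one 4-port LACP ifgrp carrying tagged VLANs for iSCSI, NFS, and management
    ifgrp create lacp vif0 -b ip e0a e0b e0c e0d
    vlan create vif0 100 200 300
    ifconfig vif0-100 10.0.10.11 netmask 255.255.255.0 partner vif0-100
    ifconfig vif0-200 10.0.20.11 netmask 255.255.255.0 partner vif0-200
    ifconfig vif0-300 10.0.30.11 netmask 255.255.255.0 partner vif0-300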
Then I could configure the /etc/hosts file on each controller to point at either the NFS or the iSCSI VIF on my second filer?
Or would it work just as well to leave the cabling as it is and configure the /etc/hosts file on each filer to resolve the other filer's name to the NFS or iSCSI VIF, so all filer-to-filer communication, including SnapMirror transfers, goes over gigabit and e0M stays management-only? That would be the easiest and least disruptive option.
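The hosts-file approach should work because SnapMirror connects to whatever address the destination's hostname resolves to, and /etc/hosts wins over DNS as long as /etc/nsswitch.conf lists files before dns (which I believe is the default). A sketch, with an invented hostname and address:

    # /etc/hosts on the source controller
    # force filer-to-filer traffic onto the gigabit NFS VIF instead of e0M
    10.0.20.21   fas2240-dst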
Since you use vFilers, your setup is going to be a little different from mine.
You don't have any issues pushing all that data through two ports? Is NetApp smart enough to monitor the bandwidth and move flows between ports if needed, or do you run the risk of having iSCSI, NFS, backup, and management all land on one port? Would there be any benefit to adding a third interface to the mix?
I'm thinking of putting my iSCSI, NFS, and management traffic on one VIF comprised of three ports (e0a, e0b, and e0c).
Then utilizing e0d and cabling it straight to e0d on my backup filer (completely bypassing the switch) for SnapMirror transfers; a sketch follows.
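Roughly what I'm picturing in /etc/rc, assuming a point-to-point /30 on the back-to-back link (all ifgrp names, VLAN IDs, and addresses are invented):

    # three-port LACP ifgrp for iSCSI, NFS, and management (tagged)
    ifgrp create lacp vif0 -b ip e0a e0b e0c
    vlan create vif0 100 200 300
    ifconfig vif0-100 10.0.10.11 netmask 255.255.255.0 partner vif0-100
    ifconfig vif0-200 10.0.20.11 netmask 255.255.255.0 partner vif0-200
    ifconfig vif0-300 10.0.30.11 netmask 255.255.255.0 partner vif0-300
    # dedicated back-to-back replication link to the destination filer's e0d
    ifconfig e0d 172.16.1.1 netmask 255.255.255.252 mtusize 9000

The destination's e0d would get the matching 172.16.1.2, and an /etc/hosts entry would steer SnapMirror at that address.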
This will require changing my switch's LACP groups and ports to trunk mode and tagging my NetApp packets with the necessary VLAN IDs like you're doing in your /etc/rc file; see the switch sketch below. I never thought about enabling jumbo frames; it's probably a good idea to configure my switches and VMware servers to use those too.
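On the Catalyst side I assume the change looks roughly like this (port, channel, and VLAN numbers are placeholders; as I understand it, the jumbo MTU on a 2960-S is a global setting and needs a reload):

    ! enable jumbo frames switch-wide (requires a reload on the 2960-S)
    system mtu jumbo 9000
    !
    ! convert the bundle to a tagged trunk carrying all three VLANs
    interface range GigabitEthernet1/0/1 - 3
     channel-group 1 mode active
    !
    interface Port-channel1
     switchport mode trunk
     switchport trunk allowed vlan 100,200,300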
I'm not looking forward to shutting everything down again (NetApp filers and all of my 100+ VMs) and pulling an all-nighter to get this changed over, but my SnapMirror transfers are taking DAYS to update: six days for the baseline Exchange transfer PER DATABASE, so that alone was a month-long process. And now that all that extra SnapMirror traffic (VMware, Exchange, file server, and data network traffic) is going over a 100 Mb link, I'm getting a lot of SnapMirror errors and timeouts. It has become a very unstable system.
For your snapmirror.conf file: are you utilizing compression for your transfers?
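From what I've read, 7-Mode SnapMirror network compression is enabled per relationship in /etc/snapmirror.conf and requires a named connection line rather than a bare source hostname; a sketch with invented names, addresses, and schedule:

    # /etc/snapmirror.conf on the destination filer
    # named connection over the back-to-back link: (source IP, destination IP)
    repl_path=multi(172.16.1.1,172.16.1.2)
    # minute hour day-of-month day-of-week: daily at 01:00
    repl_path:vol_exch fas2240-dst:vol_exch_mirror compression=enable 0 1 * *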