Snapvault transfer speed

Hello,

I've got a question about SnapVault transfer speeds. I've read a couple of similar discussions on the forum, but none of them has been answered.

The problem I'm having is that a local SnapVault transfer runs at the speed I expect, so the initial transfer between filer1 and filer2 was as it should be. Then we moved filer2 to another location, and SnapVault no longer reaches the speed we think it should.

I don't know how to check the SnapVault speed while a job is running, but based on the amount of data being transferred it is way too slow.
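One way to back up a claim like "way too slow" is to turn the numbers you do have (snapshot delta size and elapsed time) into an effective throughput and compare it against the link speed. A minimal sketch; the 20 GB / 6 hour / 100 Mbit/s figures are made-up placeholders, not values from this thread:

```python
# Rough throughput sanity check: convert "bytes moved in X hours"
# into Mbit/s and compare against the nominal link speed.
# All figures below are illustrative placeholders.

def throughput_mbit_per_s(bytes_transferred: int, seconds: float) -> float:
    """Effective throughput in megabits per second."""
    return bytes_transferred * 8 / seconds / 1_000_000

# Example: a 20 GB update that took 6 hours
observed = throughput_mbit_per_s(20 * 1024**3, 6 * 3600)
link_speed = 100.0  # assumed Mbit/s WAN link

print(f"observed: {observed:.1f} Mbit/s "
      f"({observed / link_speed:.0%} of the {link_speed:.0f} Mbit/s link)")
# → observed: 8.0 Mbit/s (8% of the 100 Mbit/s link)
```

Numbers like this make it much easier for others to judge whether the gap is plausible WAN overhead or a genuine fault.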

We had to change the IP of the filer. Names and IPs are pingable, and the response time doesn't differ much from the internal network. We even tried changing the MTU size. We don't use any throttling; that option is off. There is no traffic shaping set in the firewall at either end.

Does anyone have an idea why the transfer is not at maximum speed, and how to change that?

Re: Snapvault transfer speed

Hi,

I just thought I'd take a poke at this.  I run a number of OSSV and SnapVault jobs and haven't really seen any issues.  There are a few things you'll need to add if anyone is going to be able to give you any real help:

1. Some idea of how the "new", non-local network connections look: switches, network, firewalls, bandwidth (amount, dedicated/shared, etc).

2. Some idea of what the loads are on source and destination filers and basically what sort of data is being moved and how "static" it is.

3. A quick output of 'snapvault status -c'  (cleaned for public consumption).

Very long-distance transfers may benefit from increasing the window size for snapmirror transfers (options snapmirror), which basically _should_ also affect SnapVault (they are essentially the same engine).  Most of this is covered in the Network Guide in the System Administration docs, which includes a formula for calculating the window size. The default setting works for most situations, however.
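The window-size formula in the docs is the standard bandwidth-delay product: a long, fat pipe needs a TCP window at least as large as bandwidth times round-trip time or it can never be kept full. A quick sketch; the 100 Mbit/s link speed and 50 ms RTT are placeholder values, substitute your own measurements:

```python
# Bandwidth-delay product: the minimum TCP window (in bytes) needed
# to keep a link full. Placeholder values; use your measured WAN
# bandwidth and ping round-trip time.

def window_size_bytes(bandwidth_bit_per_s: float, rtt_s: float) -> int:
    """Minimum window in bytes to fill the pipe: bandwidth * RTT / 8."""
    return int(bandwidth_bit_per_s * rtt_s / 8)

# Example: 100 Mbit/s WAN link with a 50 ms round trip
print(window_size_bytes(100e6, 0.050))  # → 625000 (bytes, ~610 KB)
```

If the result comes out larger than the configured window, the window (not the link) is the ceiling on transfer speed, which is exactly why this matters for long-distance replication and not for local transfers.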

Changing the MTU is most likely going to get you negative results if you don't have total control of the connecting networks.  Not a good idea generally for non-LAN setups.

Incremental updates are generally going to be slower than the initial full transfer.  It is simply more work for the filer to walk through the filesystem and dig out the changed blocks.  This isn't Volume SnapMirror.

A true test would be to run transfers with a similar data-change pattern on two locally connected systems, to see whether you can rule out filer capacity or network capacity as the problem source.

If you have sufficient network knowledge, a pktt capture viewed with tcpdump or ethereal/wireshark might also get you more information.

Some simple checks for duplex mismatches might be a good idea as well.  Gigabit Ethernet should always be set to "auto", as the spec requires.

Re: Snapvault transfer speed

Solved by switching the router port from auto-negotiation to full duplex.