ONTAP Discussions
Just because we had an oddball experience, I'd like to know if it's outside of what we should be expecting.
I have a volume I set up for SnapMirror just to test my understanding of the process, and it works. It's a simple 100 MB volume, connected across a 100 Mbps WAN link and throttled on the NetApp to 4096 kbps. The sync is scheduled to happen once a day, and the volume itself isn't attached to anything, nor does it contain any data.
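For reference, a setup like the one described would look roughly like the following `/etc/snapmirror.conf` entry on the destination — assuming 7-Mode ONTAP (the post doesn't say which ONTAP flavor is in use), with `filer-a`, `filer-b`, and `testvol` as placeholder names:

```
# /etc/snapmirror.conf on the destination filer (7-Mode syntax, assumed)
# source:vol       destination:vol      options    minute hour day-of-month day-of-week
filer-a:testvol    filer-b:testvol_dr   kbs=4096   0      0    *            *
```

The `kbs=4096` option is the per-relationship throttle, and `0 0 * *` schedules the update at midnight every day.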
We also have a number of other snapmirrors in place, most of them much more sizeable because they happen to have content.
This is where things aren't looking like I expect them to:
| | Tiny Mirror | Big Mirror |
|---|---|---|
| Size | 100 MB | 1 TB |
| Schedule | Midnight | Hourly |
| State | Snapmirrored | Snapmirrored |
| Last Transfer Size | 68 KB | 504416 KB |
| Last Transfer Duration | 116 s | 1149 s |
| Transfer Rate | 4096 kb/s | 512 kb/s |
Should a SnapMirror with 8 times the maximum speed, transferring 0.02% of the volume, be expected to take 10% of the time?
And since there is NO data in the volume, where did the 68 KB to transmit come from?
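A quick back-of-the-envelope check of the numbers in the table (assuming sizes are in KB and durations in seconds, as `snapmirror status` reports them) shows the big mirror running close to its throttle while the tiny one is nowhere near its own, so the tiny transfer's 116 seconds must be dominated by per-transfer overhead rather than bandwidth:

```python
# Effective throughput of the two transfers, using the figures from the
# table above (sizes in KB, durations in seconds -- an assumption about
# the units reported by `snapmirror status`).
transfers = {
    "tiny": {"size_kb": 68,     "duration_s": 116,  "throttle": 4096},
    "big":  {"size_kb": 504416, "duration_s": 1149, "throttle": 512},
}

for name, t in transfers.items():
    rate = t["size_kb"] / t["duration_s"]
    print(f"{name}: {rate:.1f} KB/s effective (throttle {t['throttle']} KB/s)")
```

The big mirror achieves roughly 439 KB/s, close to its 512 throttle, so it is bandwidth-bound. The tiny mirror achieves roughly 0.6 KB/s against a 4096 cap, so bandwidth clearly isn't what its 116 seconds were spent on.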
Independent source with an answer:
The 64 KB is indeed the size generated for the metadata, which does have to be replicated.
The time involved reflects not just the time needed to transfer the data but also the time needed to pad the data out to the 2 MB block size that SnapMirror requires for replication.
Of course this raises the question of why NetApp doesn't engineer a sideband for replicating metadata and leave SnapMirror replication for the actual changed blocks, but this way is simpler and reduces points of failure. Six of one, a quarter of two dozen of the other.
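The padding arithmetic the answer describes can be sketched as follows — a rough illustration, taking the 2 MB replication block size at face value; `padded_size_kb` is a made-up helper name, not an ONTAP function:

```python
import math

BLOCK_KB = 2048  # 2 MB replication block size cited in the answer


def padded_size_kb(logical_kb: int) -> int:
    """Round a transfer up to whole replication blocks."""
    return math.ceil(logical_kb / BLOCK_KB) * BLOCK_KB


# Even a few KB of metadata occupies at least one full block on the wire.
print(padded_size_kb(68))    # a ~68 KB metadata update pads to 2048 KB
print(padded_size_kb(2049))  # just over one block pads to two blocks, 4096 KB
```

So even an "empty" update can't go out smaller than one block, which, together with session setup, helps explain why a trivially small transfer still takes a measurable amount of time.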
I have two ideas:
1. The 68 KB is inode and metadata information that needs to be transferred to bring the file system on the destination to the same level as the source — so the 68 KB can be disregarded, or can it not?
2. The duration can be influenced by other traffic on the WAN.
Hope this helps,
Peter