Data Backup and Recovery

SMSQL SnapMirror: "Another Transfer is in progress"

mcrossley
5,967 Views

Hi all,

 

I just recently migrated this volume to our new cDOT system, and although the SMSQL jobs do run, I am finding that it takes nearly an hour for each job to complete because of errors about "Another transfer is in progress". The volume is vaulted to our remote system, with no schedule.

 

While the job runs, I check OnCommand and the CLI via snapmirror list-destinations, and although there is another volume currently transferring (one that's not related to this SQL DB), the job continuously complains about another transfer in progress.
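
For reference, this is roughly what I'm checking from the cluster shell while the job runs; the vserver and volume names below are placeholders, not our real paths:

cluster::> snapmirror show -status Transferring
cluster::> snapmirror show -destination-path <dst_vserver>:<dst_volume> -fields state,status,lag-time
cluster::> snapmirror list-destinations -source-path <src_vserver>:<src_volume>

The first command lists every relationship with a transfer currently in flight, the second shows the state of the specific vault relationship, and the third (run on the source cluster) shows where the SQL volume is being vaulted to.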

 

Here's some output of the SMSQL backup log:


[09:50:24.557] [AGV-SUNDB] Starting SDAPI snapshot...
[09:50:24.557] [AGV-SUNDB] Preparing LUN 'J:\', for SDAPI operation...
[09:50:30.971] [AGV-SUNDB] Snapshot drive/share(s) successfully completed.
[09:50:30.975] [AGV-SUNDB] No database associated with an enabled data set archive backup was selected.
[09:50:30.975] [AGV-SUNDB] Starting to prepare the list of volumes (which has snapvault relationship) and corresponding storage details
[09:50:30.975] [AGV-SUNDB] Preparation of list of volumes (which has snapvault relationship) and corresponding storage details succeeded
[09:50:30.975] [AGV-SUNDB] Starting to transfer the snapshot on secondary
[09:50:31.214] [AGV-SUNDB] Starting to initiate SnapVault update ...
[09:50:32.881] [AGV-SUNDB] Another update is in progress...
[09:50:32.882] [AGV-SUNDB] Retrying to transfer snapshot: [sqlsnap__agv-sundb_02-17-2015_09.49.42__daily] again....
[09:50:39.664] [AGV-SUNDB] Another update is in progress...
[09:50:39.665] [AGV-SUNDB] Retrying to transfer snapshot: [sqlsnap__agv-sundb_02-17-2015_09.49.42__daily] again....
[09:50:46.583] [AGV-SUNDB] Another update is in progress...
[09:50:46.584] [AGV-SUNDB] Retrying to transfer snapshot: [sqlsnap__agv-sundb_02-17-2015_09.49.42__daily] again....
[09:50:53.174] [AGV-SUNDB] Another update is in progress...
[09:50:53.175] [AGV-SUNDB] Retrying to transfer snapshot: [sqlsnap__agv-sundb_02-17-2015_09.49.42__daily] again....
[09:50:59.805] [AGV-SUNDB] Another update is in progress...
[09:50:59.806] [AGV-SUNDB] Retrying to transfer snapshot: [sqlsnap__agv-sundb_02-17-2015_09.49.42__daily] again....

...

This goes on for a while and finally completes with:

[10:28:07.153] [AGV-SUNDB] Another update is in progress...
[10:28:07.154] [AGV-SUNDB] Retrying to transfer snapshot: [sqlsnap__agv-sundb_02-17-2015_09.49.42__daily] again....
[10:28:14.987] [AGV-SUNDB] SnapVault update of snapshot from primary to secondary initiated successfully
[10:28:14.988] [AGV-SUNDB] Starting to write snapvault info SIF
[10:28:14.988] [AGV-SUNDB] Prepare to save sanpvault info to SnapInfo file...

 

In the Windows Application logs I see lots of these errors:

You have encountered an error updating your mirror sqlsnap__agv-sundb_02-17-2015_09.49.42__daily on storage system NTAP-YVR01-DATA01 volume SQL_prod_sundb_data.
Error code : 102
Details : Your SnapMirror update operation failed.
Failed to update SnapMirror relationship.
Command execution stopped because the preference variable "ErrorActionPreference" or common parameter is set to Stop: Another transfer is in progress.


Check the Windows Event Log and SnapDrive logs for more details.
Possible solutions :
1. Make sure that the specified volume and Snapshot copy name are correct.
2. Verify that the SnapMirror relationship is configured properly and not broken.
3. Verify that both the source and destination storage systems are registered.

 

 

This happens for each volume (Data, Logs, SnapInfo). I *did* have a schedule on the vault relationship, but I've since removed that.
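
To double-check that the schedule really is gone from the vault relationships (and that nothing else is kicking off an update while SMSQL runs), I look at them on the destination cluster with something like this; the path is a placeholder:

dst_cluster::> snapmirror show -destination-path <dst_vserver>:<dst_volume> -fields policy,schedule,state,status

If the schedule field shows "-", the relationship itself is no longer being triggered on a schedule, so the retries must be colliding with some other transfer.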

 

The amount of time it takes for each volume isn't consistent either. Sometimes it only has to retry 5-6 times; other times it takes 34 minutes...

 

Anyone else seen this?

 

4 REPLIES

wodgersmith
5,684 Views

I have the same issue; did you find a resolution to this?

netapp58
5,454 Views

I have the same error message. Have you received an answer from the original poster, or found a solution yourself?

Thank you for your support 🙂

wodgersmith
5,450 Views

Never got a reply but did get it resolved after logging a case with NetApp.

 

For us it was because we had added both the cluster admin address and the vserver address to the SnapDrive transport protocol settings. We removed the cluster admin entry and left just the vserver address, and the issue was resolved.
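
If it helps anyone else, you can see which addresses SnapDrive currently has configured by running this from an elevated command prompt on the SQL host:

sdcli transport_protocol list

That lists every storage system address SnapDrive knows about, along with the protocol and user. We removed the cluster admin entry through the SnapDrive MMC under Transport Protocol Settings (I believe sdcli transport_protocol delete can do the same, but check the syntax for your SnapDrive version) and left only the SVM management address.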

netapp58
5,089 Views

The issue on our side was the following:

We had two backup jobs within SnapManager, one for each of two volumes, but the customer had moved one VM to the other volume. So instead of doing an SM update for just one volume, the first job was doing an update for both volumes, and therefore both volumes were locked when the second job tried to update the second volume... :S

Took me some time to get this sorted out because I did not expect that.

Maybe this is helpful for other folks.
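
For what it's worth, if the LUNs are managed by SnapDrive you can quickly check which storage volume each disk actually sits on from the host, which is what finally made the overlap obvious for us:

sdcli disk list

The output shows the storage system and LUN path for every connected disk, so you can see straight away when two backup jobs end up pointing at the same underlying volume.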
