
Failed to get volume mount point from file path - Snap Manager for SQL Server

weaverrw

I have a three-node Windows 2012 cluster. Nodes 1 and 2 host one SQL Server instance (INSTANCE A) and Node 3 hosts a separate SQL Server instance (INSTANCE B).

Snap Manager for SQL Server 7.2 is installed.

Whenever I try to run a backup on INSTANCE A, for example, I get the following errors:

 

[13:10:12.902]  [SERVER] Getting SnapInfo directories configuration...

[13:10:12.908]  [SERVER] Error Code: 0x80070002
The system cannot find the file specified.

[13:10:12.908]  [SERVER] Failed to get volume mount point from file path. Using default drive...

[13:10:12.908]  [SERVER] Error Code: 0x80070002
The system cannot find the file specified.

 

[13:10:34.342]  [SERVER] Failed to create SnapInfo Directory.
[13:10:34.342]  [SERVER] Failed to get SnapInfo directory from registry, please re-run the Configuration Wizard.
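The first error looks like SMSQL could not resolve the configured SnapInfo path to a volume mount point. As a sanity check (not the documented SMSQL procedure), something like the following PowerShell sketch can confirm whether the path the Configuration Wizard set actually exists on the node running the backup and which volume it resolves to; the SnapInfo path here is only a placeholder:

# Hypothetical SnapInfo path - substitute whatever the Configuration Wizard set for this instance.
$snapInfoPath = 'G:\SnapInfo'

# 0x80070002 is "The system cannot find the file specified" - check the path exists at all.
Test-Path $snapInfoPath

# Resolve the volume the path lives on (Get-Volume -FilePath is available on Windows Server 2012 and later).
Get-Volume -FilePath $snapInfoPath | Format-List DriveLetter, Path, FileSystemLabel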

 

If I run the Configuration Wizard against INSTANCE A, the backup runs successfully. But then if I try to run a backup against INSTANCE B, I get the same SnapInfo directory error. If I run the Configuration Wizard against INSTANCE B, the backup succeeds, but then I get the same error on INSTANCE A.
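The second error implies SMSQL reads the SnapInfo directory for each instance out of the registry on whichever node runs the backup. I don't know the exact key SMSQL 7.2 uses, so the following is only a rough PowerShell sketch that scans HKLM\SOFTWARE for values mentioning SnapInfo; running it on each node before and after the Configuration Wizard might show whether the wizard is overwriting a single shared value instead of keeping one per instance:

# Rough sketch: search HKLM:\SOFTWARE for values whose name or data mentions SnapInfo.
# The exact key SMSQL uses is not stated in this thread, so this is a broad search, not the authoritative location.
Get-ChildItem 'HKLM:\SOFTWARE' -Recurse -ErrorAction SilentlyContinue | ForEach-Object {
    $key = $_
    foreach ($name in $key.GetValueNames()) {
        $value = $key.GetValue($name)
        if ("$name $value" -match 'SnapInfo') {
            [pscustomobject]@{ Key = $key.Name; Name = $name; Value = $value }
        }
    }
}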

 

Can you please provide guidance on how to fix this issue? I'm stumped.

 

 

 


georgevj

Did you try "Moving multiple SnapInfo directories to a single SnapInfo directory" from page 57 of the SMSQL Admin Guide, available at https://library.netapp.com/ecm/ecm_download_file/ECMP11658050 ?


weaverrw

Thanks for your response. I'm unable to move them to a single SnapInfo directory because, even though it's a single cluster, INSTANCE A has storage that is only visible from nodes 1 and 2, and INSTANCE B has storage that is only visible from node 3. This is by design.

 

We have AlwaysOn configured, with INSTANCE A as the primary replica and INSTANCE B as the secondary replica.
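For reference, the replica roles can be confirmed from either instance with a quick query against the availability group DMVs. The sketch below uses Invoke-Sqlcmd from the SQL Server PowerShell module, and the server\instance name is just a placeholder for this setup:

# Placeholder server\instance name - point this at either replica.
Invoke-Sqlcmd -ServerInstance 'NODE1\INSTANCEA' -Query @"
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar
    ON ar.replica_id = ars.replica_id;
"@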
