Data Backup and Recovery

SMSQL snapinfo is not being snapvault updated




I am using SnapManager for SQL 7.2, Clustered ONTAP 8.3 and SnapDrive 7.1.1.

Everything is set up and working fine, except for the snapinfo SnapVault update.
The snapinfo volume is not being SnapVault updated after a backup is done.
The data volumes and log volumes are being SnapVault updated.


In the previous 7-Mode setup with DFM/Protection Manager and datasets, snapinfo was replicated.

I assume snapinfo still needs to be replicated.


Any suggestions?






Yes, SMSQL should trigger the update of the SnapInfo directory too.

I would check if there is any difference in the location of the SnapInfo directory, compared to the one for database and log files.

Also, ensure that SMSQL is aware of the correct snapinfo directory location, by running the configuration wizard again.


Here is a general step-by-step SnapMirror XDP setup procedure you can check against for the SnapInfo directory, to see if you missed anything:


0. Assuming you have already created the source volumes and set up the LUNs and databases within them, first create a destination volume of type DP on your destination vserver, then set up SnapMirror XDP. Here are some examples:
Vserver_dest::> vol create -volume vol_dest -aggregate aggrdata_2 -size 2g -state online -type DP -policy default -autosize-mode grow_shrink -space-guarantee volume -snapshot-policy none -foreground true
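
To confirm the destination volume was created as a data-protection volume, you can check its type (a sketch, reusing the names above):

```
Vserver_dest::> vol show -volume vol_dest -fields type
```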


1. Create the SnapMirror policy (since you are not going to use the default XDPDefault policy), which we will later assign to the SnapMirror relationship:

Vserver_source::> snapmirror policy create -policy PolicymirrorStoD -tries 8 -transfer-priority normal -ignore-atime false -restart always -comment "this is a test SnapMirror Policy"

Vserver_source::> snapmirror policy add-rule -policy PolicymirrorStoD -snapmirror-label Daily -keep 21

Alternatively, you can also do this via the GUI.



2. Verify that the policy has been created successfully on the source vserver:

Vserver_source::> snapmirror policy show -policy PolicymirrorStoD

                     Vserver: Vserver_source
      SnapMirror Policy Name: PolicymirrorStoD
                Policy Owner: vserver-admin
                 Tries Limit: 8
           Transfer Priority: normal
   Ignore accesstime Enabled: false
     Transfer Restartability: always
             Create Snapshot: false
                     Comment: this is a test SnapMirror Policy
       Total Number of Rules: 1
                  Total Keep: 21
                       Rules: Snapmirror-label     Keep Preserve Warn
                              -------------------- ---- -------- ----
                              Daily                  21 false       0


3. Create the SnapMirror policy on the secondary too (repeat steps 1 and 2 on the destination vserver); otherwise, when you later run snapmirror create on vserver_dest, you'll get: Error: command failed: Policy lookup for "PolicymirrorStoD" failed.
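
For completeness, the same policy creation run on the destination vserver (the commands mirror step 1):

```
Vserver_dest::> snapmirror policy create -policy PolicymirrorStoD -tries 8 -transfer-priority normal -ignore-atime false -restart always -comment "this is a test SnapMirror Policy"
Vserver_dest::> snapmirror policy add-rule -policy PolicymirrorStoD -snapmirror-label Daily -keep 21
```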


4. Create a volume snapshot policy. (By design, you also need to associate the volume snapshot policy with the SnapMirror labels created within the SnapMirror policy.)
Vserver_source::> volume snapshot policy create -policy SnapPolkeep2snaps -enabled true -schedule1 daily -count1 2 -snapmirror-label1 Daily

Vserver_source::> volume snapshot policy show -policy SnapPolkeep2snaps

                  Vserver: Vserver_source
     Snapshot Policy Name: SnapPolkeep2snaps
  Snapshot Policy Enabled: true
             Policy Owner: vserver-admin
                  Comment: -
Total Number of Schedules: 1
Schedule         Count Prefix                SnapMirror Label
---------------- ----- --------------------- -------------------
daily                2 daily                 Daily

5. Associate the PRIMARY volumes involved in the SnapVault relationship with the new volume snapshot policy, which is linked to the SnapMirror label(s):
Vserver_source::> volume modify -volume vol_source -snapshot-policy SnapPolkeep2snaps

Warning: You are changing the Snapshot policy on volume vol_source to SnapPolkeep2snaps. Any Snapshot copies on this volume from the previous policy will not be deleted by this new Snapshot policy.
Do you want to continue? {y|n}: y

Volume modify successful on volume: vol_source


Do the same for the other volumes involved, then verify:

Vserver_source::> vol show -volume vol_source -fields snapshot-policy
(volume show)
vserver        volume     snapshot-policy
-------------- ---------- -----------------
Vserver_source vol_source SnapPolkeep2snaps



6. Create your first snapshot with the newly created label:
Vserver_source::> snapshot create -volume vol_source -snapshot testfornewlabel -foreground true -snapmirror-label Daily
Vserver_source::> snapshot create -volume vollogs_source -snapshot testfornewlabel -foreground true -snapmirror-label Daily

7. Create and initialize the SnapMirror relationship:
Vserver_dest::> snapmirror create -source-path Vserver_source:vollogs_source -destination-path Vserver_dest:vollogs_dest -type XDP -vserver Vserver_dest -throttle unlimited -policy PolicymirrorStoD
Vserver_dest::> snapmirror initialize -destination-path Vserver_dest:vollogs_dest -source-path Vserver_source:vollogs_source -type XDP

Do this for every volume you want to SnapVault.
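
If you have several volumes, a small helper run from a workstation can generate the command pairs to paste into the clustershell. This is just a sketch; the volume list and the `_source`/`_dest` naming convention are assumptions based on the examples above:

```shell
#!/bin/sh
# Print a snapmirror create + initialize command pair for each source volume.
# Assumes destination volumes follow the <name>_dest naming used above.
for vol in vol_source vollogs_source; do
  dest="${vol%_source}_dest"   # vol_source -> vol_dest
  echo "snapmirror create -source-path Vserver_source:${vol} -destination-path Vserver_dest:${dest} -type XDP -policy PolicymirrorStoD"
  echo "snapmirror initialize -destination-path Vserver_dest:${dest} -source-path Vserver_source:${vol} -type XDP"
done
```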

On the destination SVM, check that the initialization has completed successfully; if not, repeat step 0 and fix the problem:
Vserver_dest::> snapmirror show -destination-path Vserver_dest:vollogs_dest -source-path Vserver_source:vollogs_source


8. Now, on the secondary vserver, associate the snapmirror relationship (for each volume) to the newly created snapmirror policy:

Vserver_dest::> snapmirror modify -destination-path Vserver_dest:vol_dest -policy PolicymirrorStoD


9. When the first transfer is complete, run the SMSQL configuration wizard (if not done yet) and then set up a backup job from within SMSQL:


In the wizard, SMSQL shows multiple possible SnapMirror labels that are pre-selectable; this means you should only create SnapMirror labels with those names, respecting case sensitivity.


10. Now SMSQL should retain 3 snapshots on the primary, and ONTAP should ensure the Daily label is applied, with 21 snapshots retained on the secondary volumes.
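
To verify retention on the secondary once a few transfers have run, you can list the vaulted snapshots and their labels (a sketch, reusing the names above):

```
Vserver_dest::> snapshot show -volume vol_dest -fields snapmirror-label
```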



Hope that helps,


Domenico Di Mauro



Hi Domenico,


Thanks for your input.


The configuration wizard has been rerun.

About the SnapInfo directory, the layout is as follows.
The snapinfo directory is located in a separate LUN in a separate volume:
snapinfo.lun -> snapinfo_volume
log.lun -> log_volume
data.lun -> data_volume


The log_volume and data_volume are replicated (snapvault updated).


Are you saying that snapinfo should be placed differently?


In 7-Mode it was best practice to place snapinfo in its own volume.




Hi Simon,

No, that's fine as you have it configured. I was just comparing my script with your config.

All you need to do is ensure you have created and configured a SnapMirror XDP relationship ALSO for the volume hosting the snapinfo, hence my question about the SnapInfo directory location.
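
A quick check from the destination cluster, to confirm a relationship exists for the snapinfo volume too (a sketch, using your volume names above):

```
Vserver_dest::> snapmirror show -source-path Vserver_source:snapinfo_volume
```

If nothing is listed, repeat steps 0 and 7 above for that volume.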


Are you backing up multiple SQL instances in one go? That is supposed to work too, but I ask just to narrow the problem down. If so, does it work when you back up one instance at a time?