Data Backup and Recovery Discussions

SnapDrive Cannot Start SnapMirror Job



I am unable to configure SnapDrive to kick off SnapMirror jobs after creating snapshots, and would love any suggestions.  Here are the details:


We are just starting to use Cluster-Mode.  I have SnapDrive running on a Windows system, connected to LUNs on an SVM in the cluster, with Data ONTAP DSM for MPIO installed and working.  I can create a snapshot without a problem, and I can manually run SnapMirror jobs without a problem.  However, if I try to kick off SnapMirror through SnapDrive, I get an error message.  I should note that networking for our SVMs at our DR facility is not active, because IPs and VLANs can change between DR tests.  The only active LIFs are the cluster management, node management, and replication LIFs.  No management or data LIFs are active for the SVMs.  The SVMs only exist because they are required for SnapMirror to work at all.


Is there a way to give SnapDrive permission to kick off SnapMirror without enabling networking on the SVM we have set up for iSCSI?  I've enabled vsadmin on the SVM just for grins, and created a matching admin account in DR at the cluster and SVM level to match the account used in production for the Transport Protocol (HTTPS), and still no go.  Any ideas?


Re: SnapDrive Cannot Start SnapMirror Job


>> Is there a way to give SnapDrive permission to kick off SnapMirror without enabling networking on the SVM we have set up for iSCSI?


Hi, no. In order for SnapDrive to run SnapMirror commands against the Storage Virtual Machines (SVMs), SnapDrive needs to be able to communicate via TCP/IP with both the source and destination SVMs.

There are also other requirements, such as setting up the SnapMirror relationship using SVM names rather than IP addresses, etc.

You will find the full list on pages 74 and 75 of the SnapDrive 7.1 Administration Guide (Requirements for using SnapMirror with SnapDrive for Windows).
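In your situation, that usually means bringing up a management LIF on the DR SVM so SnapDrive can reach it over TCP/IP. A rough sketch of what that could look like — the node, port, and IP address here are placeholders for your environment, not values from this thread:

```
cluster_dest::> network interface create -vserver Vserver_dest -lif Vserver_dest_mgmt -role data -data-protocol none -home-node cluster_dest-01 -home-port e0c -address 10.10.10.50 -netmask 255.255.255.0 -firewall-policy mgmt
```

A management-only LIF like this (data-protocol none) does not expose iSCSI, so it can stay up between DR tests even if your data LIFs remain down.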


hope that helps,




Re: SnapDrive Cannot Start SnapMirror Job


Thank you dmauro for this feedback.  In our 7-Mode system we only had to add the local user account we used for Transport Protocol to the DR filers and it worked.  Looks like we have more work to do in cDOT.


In reading over this page I have a couple of follow up questions:


1. Regarding this quote --


You must create your SnapMirror relationship using storage system names (either the fully qualified DNS name or the storage system name alone), and the name of the network interface to be used for SnapMirror transfers (for example, storage1-e0), not IP addresses.


-- I don't understand how the network interface name can be used as part of the storage system name.  I would think you would just enter the SVM and volume name on either end.  In System Manager, I see no way to append the network interface name to the SVM and/or volume.  I have other SnapMirror jobs already set up that don't use SnapDrive, and they work fine without listing the network interface in the name.  Any ideas here?


2. Regarding this quote --


The source and destination storage systems must be configured to grant root access to the Windows domain account used by the SnapDrive service. That is, the wafl.map_nt_admin_priv_to_root option must be set to On. For information about enabling storage system options, see your Data ONTAP documentation.


-- Is this a 7-Mode option?  I cannot find the wafl.map_nt_admin_priv_to_root option on my system, which is cDOT 8.3.

Re: SnapDrive Cannot Start SnapMirror Job


Yes, sorry, those two requirements apply to 7-Mode only. Since you have clustered Data ONTAP, you just need to ensure you have sufficient permissions on the source and destination SVMs;

for instance, you can simply try passing SnapDrive the user "vsadmin" for both the source and destination SVMs.

When configuring SnapMirror with clustered Data ONTAP, the CLI asks you to specify the Vserver name, so you should ensure that the Vserver's DNS name is registered in DNS and resolves to the management LIF of the Vserver.
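To enable the built-in vsadmin account on a Vserver, you typically set a password for it and unlock it; a sketch using the Vserver names from this thread (run the equivalent on the source cluster too):

```
cluster_dest::> security login password -username vsadmin -vserver Vserver_dest
cluster_dest::> security login unlock -username vsadmin -vserver Vserver_dest
```

The account is locked by default until a password is set and it is unlocked, which is a common reason a vsadmin login appears to fail.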


In case you still need to set it up, here is a list of steps you would typically follow for SnapMirror XDP (SnapVault), in case it turns out useful:


0. Assuming you have already created the source volumes and set up LUNs and databases within them, first create a destination volume of type DP on your destination Vserver, then set up SnapMirror XDP. Here is an example:
Vserver_dest::> vol create -volume vol_dest -aggregate aggrdata_2 -size 2g -state online -type DP -policy default -autosize-mode grow_shrink -space-guarantee volume -snapshot-policy none -foreground true


1. Create the SnapMirror policy (you are not going to use the default XDPDefault policy), which we will later assign to the SnapMirror relationship:

Vserver_source::> snapmirror policy create -policy PolicymirrorStoD -tries 8 -transfer-priority normal -ignore-atime false -restart always -comment "this is a test SnapMirror Policy"

Vserver_source::> snapmirror policy add-rule -policy PolicymirrorStoD -snapmirror-label Daily -keep 21

Alternatively, you can also do this via the GUI (System Manager).



2. Verify that the policy has been created successfully on source vserver:

Vserver_source::> snapmirror policy show -policy PolicymirrorStoD

Vserver: vserver_source
SnapMirror Policy Name: PolicymirrorStoD
Policy Owner: vserver-admin
Tries Limit: 8
Transfer Priority: normal
Ignore accesstime Enabled: false
Transfer Restartability: always
Create Snapshot: false
Comment: this is a test SnapMirror Policy
Total Number of Rules: 1
Total Keep: 21
Rules: SnapMirror Label  Keep  Preserve  Warn
       ----------------  ----  --------  ----
       Daily               21  false        0


3. Create the SnapMirror policy on the secondary too (repeat steps 1 and 2 on the destination Vserver); otherwise, when you later run snapmirror create on vserver_dest, you'll get "Error: command failed: Policy lookup for "PolicymirrorStoD" failed".
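For instance, repeating the commands from step 1 on the destination Vserver would look like:

```
Vserver_dest::> snapmirror policy create -policy PolicymirrorStoD -tries 8 -transfer-priority normal -ignore-atime false -restart always -comment "this is a test SnapMirror Policy"
Vserver_dest::> snapmirror policy add-rule -policy PolicymirrorStoD -snapmirror-label Daily -keep 21
```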


4. Create a volume Snapshot policy. (By design, you also need to associate the volume Snapshot policy with the SnapMirror labels created within the SnapMirror policy.)
Vserver_source::> volume snapshot policy create -policy SnapPolkeep2snaps -enabled true -schedule1 daily -count1 2 -snapmirror-label1 Daily

Vserver_source::> volume snapshot policy show -policy SnapPolkeep2snaps

Vserver: Vserver_source
Snapshot Policy Name: SnapPolkeep2snaps
Snapshot Policy Enabled: true
Policy Owner: vserver-admin
Comment: -
Total Number of Schedules: 1
Schedule  Count  Prefix  SnapMirror Label
--------  -----  ------  ----------------
daily         2  daily   Daily

5. Associate the PRIMARY volumes involved in the SnapVault with the new volume Snapshot policy, which is linked to the SnapMirror label(s):
Vserver_source::> volume modify -volume vol_source -snapshot-policy SnapPolkeep2snaps

Warning: You are changing the Snapshot policy on volume vol_source to SnapPolkeep2snaps. Any Snapshot copies on this volume from the previous policy will not be deleted by this new Snapshot policy.
Do you want to continue? {y|n}: y

Volume modify successful on volume: vol_source


Do the same for the second volume involved; you can verify the association with:

Vserver_source::> vol show -volume vol_source -fields snapshot-policy
(volume show)
vserver         volume      snapshot-policy
--------------  ----------  -----------------
Vserver_source  vol_source  SnapPolkeep2snaps



6. Create your first snapshot with the newly created label:
Vserver_source::> snapshot create -volume vol_source -snapshot testfornewlabel -foreground true -snapmirror-label Daily
Vserver_source::> snapshot create -volume vollogs_source -snapshot testfornewlabel -foreground true -snapmirror-label Daily

7. Create and initialize the SnapMirror relationship:
Vserver_dest::> snapmirror create -source-path Vserver_source:vollogs_source -destination-path Vserver_dest:vollogs_dest -type XDP -vserver Vserver_dest -throttle unlimited -policy PolicymirrorStoD
Vserver_dest::> snapmirror initialize -destination-path Vserver_dest:vollogs_dest -source-path Vserver_source:vollogs_source -type XDP

Do this for every volume you want to SnapVault.
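For example, for the vol_source/vol_dest pair created in step 0, the same two commands would be:

```
Vserver_dest::> snapmirror create -source-path Vserver_source:vol_source -destination-path Vserver_dest:vol_dest -type XDP -vserver Vserver_dest -throttle unlimited -policy PolicymirrorStoD
Vserver_dest::> snapmirror initialize -destination-path Vserver_dest:vol_dest -source-path Vserver_source:vol_source -type XDP
```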

On the destination SVM, check that the initialization has completed successfully; if not, fix the problem and repeat from step 0:
Vserver_dest::> snapmirror show -destination-path Vserver_dest:vollogs_dest -source-path Vserver_source:vollogs_source


8. Now, on the secondary vserver, associate the snapmirror relationship (for each volume) to the newly created snapmirror policy:

Vserver_dest::> snapmirror modify -destination-path Vserver_dest:vol_dest -policy PolicymirrorStoD


9. When the first transfer is complete, you need to run the SMSQL configuration wizard (if not done yet) and then set up a backup job from within SMSQL:


In the backup job settings, SMSQL shows you multiple pre-selectable SnapMirror labels; this means you should only create SnapMirror labels with those names, respecting case sensitivity.


10. Now SMSQL should retain 3 snapshots on the primary, and ONTAP should ensure the Daily label is applied, with 21 snapshots retained on the secondary volumes.



hope that helps,

Domenico Di Mauro.

Re: SnapDrive Cannot Start SnapMirror Job


Thanks dmauro!
