Configuring SnapVault

Hi all,

I'm trying to set up SnapVault for the first time and am having a number of difficulties. Here is the setup:

FAS2040 HA - both controllers used for NFS-based VMware datastores, with a volume on each called vol_vMotion01 and vol_vMotion02 respectively. Both are configured with the Production VLAN on e0a, the Backup VLAN on e0b, and a vif of e0c and e0d for the Storage VLAN. Both will be SV primaries.

FAS2020 - SV secondary, e0a on the Production VLAN, e0b on the Backup VLAN, and a volume called vol_sv for the SnapVault data.

I've enabled SV on all devices, and on both 2040 controllers I have set options snapvault.access host=FAS2020 AND if=e0b. On the FAS2020 I have added host entries for the FAS2040 controllers pointing at their Backup VLAN IP addresses, as I want to ensure that all SnapVault traffic goes over the Backup VLAN.
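For reference, what I've set looks roughly like this (hostnames are as above; the IP addresses here are placeholders, not my real ones):

    On each FAS2040 primary - restrict SnapVault access to the secondary via e0b:
        options snapvault.access host=FAS2020 AND if=e0b

    On the FAS2020 secondary - /etc/hosts entries for the primaries' Backup VLAN addresses:
        10.10.20.11  FAS2040-A
        10.10.20.12  FAS2040-B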

I've used the snapvault start command and initially get the "Snapvault configuration for the qtree has been set" confirmation, but then get a "cannot connect to source filer. Transfer not initially successful, retrying" error.

I have added host entries on each filer. What else can I do to troubleshoot this?
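The only basic checks I've done so far from the secondary are along these lines (hostnames as above):

    ping FAS2040-A          (resolves via /etc/hosts - confirms which VLAN answers)
    snapvault status        (the relationship is listed but the transfer keeps retrying)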

Re: Configuring SnapVault

I think the issue must be caused by network connectivity, as I changed snapvault.access to all, stipulated the Production VLAN addresses of the sources, and it worked fine.

I have another question about the use of qtrees with SnapVault. Is it essential to use them? Currently I have a volume called vol_vMotion01 and have mounted that volume on each of the ESX servers via NFS as <ip Address>:/vol/vol_vMotion01/. I have then configured SnapVault as snapvault start -S /vol/vol_sv/vMotion01. Notice I'm not specifying a qtree as the source, just a volume, although there is a self-generated qtree called vol_vMotion01. Will that cause any problems should I need to restore any snaps?
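For comparison, my understanding from the docs is that a fully qualified qtree-to-qtree relationship is started on the secondary like this (the source qtree name "data" is hypothetical; "-" is the special name for the non-qtree data at the volume root):

    snapvault start -S FAS2040-A:/vol/vol_vMotion01/data /vol/vol_sv/vMotion01
    snapvault start -S FAS2040-A:/vol/vol_vMotion01/-    /vol/vol_sv/vMotion01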

Re: Configuring SnapVault

Just to confirm: is snapvault working, and have you tested that the destination is good with a restore or by mounting the data in a FlexClone? I have created 'working' snapvaults in the past which contained no recoverable data, so a restore test is HIGHLY recommended before you release the system to production.
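A quick restore test can be as simple as the following (the paths and snapshot name here are examples, not from your setup):

    Run on the primary - pull the vaulted qtree back into a scratch location:
        snapvault restore -S FAS2020:/vol/vol_sv/vMotion01 /vol/vol_scratch/restore_test

    Or, with a FlexClone licence on the secondary, clone the vault volume
    from one of its SnapVault snapshots and inspect the files read/write:
        vol clone create sv_check -s none -b vol_sv sv_weekly.0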

Snapmirror and snapvault share the same underlying technology, but snapmirror is based on volumes and the inodes are kept the same on both sides of the mirror. This is why there must be the same number of snapshots on both sides of the mirror.

Snapvault is based on qtrees. The data is moved via the snapmirror process based on the baseline snapshot, which is common to both filers. Once the replication has finished, the destination filer creates a snapvault snapshot of the destination qtree - i.e. files and directories that are no longer tied to the source inodes - and releases the oldest baseline snapshot. This is why there can be more snapshots and data on the destination volume. (Sorry, the description isn't technically correct, but it gives you the right idea.)

So yes, there must be qtrees at the destination in a snapvault environment.
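The destination-side snapshots mentioned above are driven by the snapvault snapshot schedules, along these lines (the snapshot name and retention counts are just examples):

    On the primary - create and keep 2 hourly snapshots of the source volume:
        snapvault snap sched vol_vMotion01 sv_hourly 2@0-22

    On the secondary - the -x flag makes it pull an update from the primary
    before taking its own snapshot, here retaining 22 copies:
        snapvault snap sched -x vol_sv sv_hourly 22@0-22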

Hope it helps


PS - This has loads of great information in it.

Re: Configuring SnapVault


Try the command: "options snapvault.access all" on the snapvault destination filer and see if that helps


Re: Configuring SnapVault

So it looks like you're trying to implement volume-to-qtree (or whole-volume) SnapVault. This will most likely cause issues for your restore operations, since you must also restore to a qtree, meaning you're going to cram an entire volume (that's in a qtree on the destination) into a new qtree on the source. This will turn any qtrees (if they existed) into directories on the destination, and place all your data one level lower on the source (/vol/vol_name/qtree/dir instead of /vol/vol_name/dir, which was probably the original format). In addition, you will not be able to monitor and/or manage the transfers with Protection Manager. Generally this configuration isn't recommended for those reasons.
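To illustrate with hypothetical names (the file and qtree names below are made up):

    Original layout on the primary:        /vol/vol_vMotion01/dir1/file
    After restoring into a new qtree:      /vol/vol_vMotion01/restored_qtree/dir1/file

Everything comes back one level deeper than the paths the ESX NFS mounts expect.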

Configuring SnapVault

Sorry to dig up an old thread, but I'm having exactly the same issue as you describe in the first post, although I have already tried setting snapvault.access to all. I'm using MultiStore, which is the only difference.

What have I done?

snapvault.access all    on every filer and vfiler (I'll lock it down with host=xxx when it's working)

On filer1 I have added a host entry for Filer2-Backup-Vlan to both vfiler0 and the vfiler.

I have added the entry below to filer2:

333.333.333.333 Vfiler-Backup-Vlan

I have added a new route on filer1 and filer2 so that all traffic between them uses the backup VLAN.
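The routes look something like this (the network and gateway addresses here are placeholders, and I ran the equivalent on both filers):

    route add net 10.10.20.0/24 10.10.20.254 1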

I just can't think what could be wrong. Is there anything else I can check?

Any help would be greatly appreciated.