Data Backup and Recovery

SnapProtect, SnapVault secondary volume size


Hi all,


In the old world, when I used SnapVault via Protection Manager, the destination volume was resized automatically by dfpm.


We are in the process of migrating our backup to SnapProtect.


How are secondary volume sizes handled? Do I have to resize manually, or is this done by SP?


Thanks for any info





SnapProtect works with DFM to handle the provisioning and resizing of the vault targets.





Right, I can see that SP creates a destination volume at 70 TB in grow-shrink mode.
However, after the first backup the volume is shrunk, and the backups after that fail with volume-full errors without any further autosizing.
Maybe I missed some setting!?


Do you see anything in the vault logs about volume resize attempts? I had a look at the provisioning policy in use at my shop, but there doesn't seem to be a setting to enable or disable resizing.




All I could find was a message in SP's job manager (No space left on device):


3748 a30 07/23 03:07:30 2201 Scheduler Set pending cause [DataFabric Manager Backup Job id [9abdbbcbe8f6ed2e:-454c5e88:14eb8365c1b:-22e0] did not complete successfully for Dataset id [9cbbf396-4214-4295-8727-50999bfad63a:app_type=OCUM,type=storage_service_subscription,uuid=5e7a97ce-a169-4028-808a-0dec2b713315], Source Copy [7]. DFM Job Status = [Transfer operation for relationship 'qbit_svm:qbit_vol_008_nfs_vmwa_vmw->qbit_svm_bkp:CC_qspcomserv_SP_5_Copy_8_qbit_svm_qbit_vol_008_nfs_vmwa_vmw_Backup' ended unsuccessfully. [Data ONTAP reported] Transfer failed. (Volume access error (No space left on device)).].]::Client [qspagent01] Application [AuxCopy] Message Id [419430437] RCID [0] ReservationId [0]. Level [0] flags [0] id [0] overwrite [0] append [1] CustId[0].


No mention of resizing, though...


At the moment I am trying to figure out how to get the SnapVault logs from the filer...
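In case it helps, on clustered Data ONTAP the transfer state and history can be pulled straight from the CLI (the destination path below is a placeholder, substitute your own SVM and volume):

> snapmirror show -destination-path <dest_svm>:<dest_vol> -instance

The `-instance` output includes the last transfer error, which should repeat the "No space left on device" message if that is really what stopped the transfer.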


You might try changing the "Autosize Grow Threshold Percentage" and the "Autosize Increment" (for flexvols only) settings on the volume.

I set mine to grow at 85% and in increments of 100 GB. The defaults were not working for me; it seems the volume would fill up faster than it could trigger an autosize.


> vol modify -volume <volume_name> -autosize-increment 100GB

> vol modify -volume <volume_name> -autosize-grow-threshold-percent 85
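To verify the result afterwards, you can list just the autosize-related fields (field names as in the clustered ONTAP CLI; the volume name is a placeholder):

> vol show -volume <volume_name> -fields autosize-mode,autosize-grow-threshold-percent,autosize-increment,max-autosize

Also worth checking that max-autosize is large enough, since the volume stops growing once it hits that ceiling regardless of the threshold and increment.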