snapprotect broken snapvault relationship

hi,

Two weeks ago our backup filer started refusing to create a SnapVault relationship for a new volume, with this error message:

Error Code: [25:37]
Description: Conformance status for Dataset id [545], Source Copy [33] is not conformant in DataFabric Manager. Conformance failure reason = [ ]
Severity: [Error]
Run Action: [None of the physical resources matched, so thin provision a new flexible volume (backup secondary) of size 1146527316 KB for qtree filer1:/vol_exch_data2/- into node 'Backup' and then attempt to create a backup relationship using SnapVault first, then try Qtree SnapMirror if SnapVault relationship creation fails.]
Run Effect: [Provisioning a new flexible volume (backup secondary) failed.]
Run Reason: [ Storage system : 'filer2l'(407): Aggregate : 'filer2:aggr0'(408): - Nearly full threshold of the aggregate will exceed: 'filer2:aggr0'(408)[Used space grows to: 84.2925 % (33.1 TB), Nearly full threshold: 80 % (31.4 TB)] ]
Run Suggestion: [ Suggestions related to storage system 'filer2l'(407): Suggestions related to aggreg

So we went ahead and ordered a new shelf to extend the aggregate on filer2. It was delivered today and I can see the free space.

But in the meantime, the volume on our production filer kept growing (it hosts a LUN used by MS Exchange). We are running quite tight on storage on filer1 as well, so we have been deleting snapshots on that volume on filer1. Now when I try to run an auxiliary copy, it hangs in the 'pending' state.

I have noticed that the snapshots SnapProtect creates during the primary (classic) backup have a dependency of 'none', whereas the snapshots of other SnapProtect backups that are working have a dependency of 'snapvault'. In the snapvault status list, the Exchange volumes are no longer shown, so it looks like the SnapVault relationship has been deleted.

How can I restore this relationship?

Re: snapprotect broken snapvault relationship

If it's a new volume added to the relationship that has never been baselined, you can raise the aggregate nearly-full threshold from 80% to 85% (just above the reported 84.29%) to allow the replication to continue until you have more storage on the secondary.
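As a rough sketch of how that threshold change could be made in DataFabric Manager (7-Mode Operations Manager), assuming the global nearly-full option is acceptable to change and that your DFM host uses the standard `dfm` CLI; verify the exact option name in your DFM version before applying:

```shell
# List the current aggregate threshold options in DFM
dfm options list | grep -i aggr

# Raise the global aggregate nearly-full threshold from the default 80% to 85%
# (this affects conformance checks for all aggregates managed by this DFM)
dfm options set aggrNearlyFullThreshold=85
```

Once the new shelf's disks have been added to filer2:aggr0 and the used percentage drops back below the threshold, the option can be set back to its previous value.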

Depending on which snapshots were deleted, the corresponding jobs may need to be marked "do not pick" so that the AuxCopy process no longer tries to copy them.

Relationships don't normally get deleted on their own, but this would be best investigated further with a technical support case. For a scenario like this, the full send logs, the DFMDC output, and AutoSupports from both the primary and secondary nodes will be needed.
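For reference, the relationship state can be checked, and a broken relationship re-established, from the secondary filer's CLI. This is a hedged sketch for 7-Mode ONTAP; the secondary volume and qtree names below are hypothetical placeholders, and note that if the common baseline snapshot was among those deleted on filer1, `snapvault start` will have to perform a full new baseline transfer rather than a resync:

```shell
# On the secondary (filer2): list existing SnapVault relationships and their lag
snapvault status

# Re-establish (or re-baseline) the relationship for the whole-volume qtree;
# 'sv_exch_data2' and 'qtree_exch_data2' are assumed names on the secondary
snapvault start -S filer1:/vol/vol_exch_data2/- filer2:/vol/sv_exch_data2/qtree_exch_data2
```

If the relationship is still defined in the SnapProtect/DFM dataset, it is generally safer to let the dataset conformance engine recreate it rather than doing it by hand, which is another reason to involve support first.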