
dfpm dataset lag error after aggregate migration

Hello Community

 

I recently migrated some of my biggest datasets (due to lack of space) from one aggregate to another on the secondary storage. The dataset consists of 4 SnapVault relationships which reside in one volume (non-qtree + 3 qtrees).

Everything went fine. The dataset still backs up correctly and the retention is correct.

Nevertheless, I now have a lag error because the dataset now believes that there are 8 relationships.

See the attached screenshot for details.
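To compare what Protection Manager thinks the dataset contains with what the filers report, the relationships can also be listed from the CLI on the DFM/Operations Manager server. This is just a sketch; the dataset name below is a placeholder, and flag availability may vary by DFM version (check `dfpm dataset list help` first):

```shell
# List the relationships Protection Manager believes belong to the dataset
# (-R lists relationships; replace ti_data_01 with the real dataset name).
dfpm dataset list -R ti_data_01

# For comparison, the relationships the secondary filer actually knows about:
ssh phenix.bfh.ch 'snapvault status' | grep ti_data
```

Any entries shown by `dfpm` but absent from `snapvault status` would be the ghost relationships.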

 

Also, on the filers (primary and secondary) I only see the correct 4 relationships:

 

ssh phenix.bfh.ch 'snapvault status' | grep ti_data
duffle.bfh.ch:/vol/ti_data_01_v0_sata_enge/- phenix:/vol/sv_ti_data_01_v0_sata_enge_1/DS_ti_data_01_duffle_ti_data_01_v0_sata_enge Snapvaulted 11:21:55 Idle
duffle.bfh.ch:/vol/ti_data_01_v0_sata_enge/HuCE phenix:/vol/sv_ti_data_01_v0_sata_enge_1/HuCE Snapvaulted 11:21:55 Idle
duffle.bfh.ch:/vol/ti_data_01_v0_sata_enge/I3S phenix:/vol/sv_ti_data_01_v0_sata_enge_1/I3S Snapvaulted 11:21:55 Idle
duffle.bfh.ch:/vol/ti_data_01_v0_sata_enge/ahb-ti phenix:/vol/sv_ti_data_01_v0_sata_enge_1/ahb-ti Snapvaulted 11:21:55 Idle

 

ssh duffle.bfh.ch 'snapvault status' | grep ti_data
duffle:/vol/ti_data_01_v0_sata_enge/- phenix:/vol/sv_ti_data_01_v0_sata_enge_1/DS_ti_data_01_duffle_ti_data_01_v0_sata_enge Source 11:22:17 Idle
duffle:/vol/ti_data_01_v0_sata_enge/HuCE phenix:/vol/sv_ti_data_01_v0_sata_enge_1/HuCE Source 11:22:17 Idle
duffle:/vol/ti_data_01_v0_sata_enge/I3S phenix:/vol/sv_ti_data_01_v0_sata_enge_1/I3S Source 11:22:17 Idle
duffle:/vol/ti_data_01_v0_sata_enge/ahb-ti phenix:/vol/sv_ti_data_01_v0_sata_enge_1/ahb-ti Source 11:22:17 Idle

 

Does anyone have an idea how to remove the ghost relationships?

 

regards

philipp

Re: dfpm dataset lag error after aggregate migration

Hi Philipp,

 

May I ask how you migrated the secondary volume?

 

a) Did you use the "Manage Space" wizard of the NetApp Management Console (the Protection Manager GUI), or

b) did you move the volumes manually using SnapMirror?

 

In case of a), everything should be OK, as Protection Manager itself takes care of migrating all the relationships along with the volume(s).

 

The Manage Space wizard:

 

ms1.png

 

ms2.png

 

 

 

In case of b), you definitely have some work to do...

 

Protection Manager really does not like it when manual changes are made behind its back. The new destination volume you might have created is discovered by Operations Manager as a brand-new volume. When you then SnapMirror the old volume to the new one, along with its SnapVault Snapshots, Operations Manager discovers a new SnapMirror relationship *and* a new SnapVault relationship, but from its point of view those have nothing to do with the dataset. If you subsequently delete the old volume(s), Protection Manager might even go ahead and create new volumes on its own, including a new SnapVault baseline, since you just ripped the volumes away from under its feet.

 

So b) will need some manual clean-up. Basically, you need to remove the relationships from Protection Manager control, which will mark them as "external". Afterwards you can import these relationships into the dataset. Unfortunately, with this procedure you will most likely lose all existing backup versions: the Snapshots will still be there, but Protection Manager will no longer know about them. The old Snapshots will also not be deleted automatically, so you need to clean them up by hand.
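As a rough sketch of that clean-up using the `dfpm` CLI on the DFM server: the dataset and qtree names below are placeholders taken from this thread, and the exact syntax may differ between DFM versions, so verify against `dfpm dataset help` before running anything:

```shell
# 1. Remove a stale relationship from Protection Manager control; this
#    marks it as "external" rather than destroying anything on the filers.
#    The argument is the secondary qtree of the relationship to relinquish.
dfpm dataset relinquish phenix:/vol/sv_ti_data_01_v0_sata_enge_1/HuCE

# 2. Verify which relationships the dataset still claims.
dfpm dataset list -R ti_data_01

# 3. Re-import the now-external relationship into the dataset. This is
#    normally done via the NetApp Management Console import wizard, which
#    lists external relationships and lets you assign them to a dataset.
```

Again, treat this as an outline of the procedure rather than a recipe; as noted above, getting support involved for the actual clean-up is the safer route.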

 

In either case I suggest calling support: in case of a), the process might for whatever reason not have completed successfully; in case of b), you will need help with the manual clean-up.

 

regards, Niels

 

---------------------------

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO

Re: dfpm dataset lag error after aggregate migration

Hi Niels

 

Sorry for not mentioning it.

I used the Manage Space wizard, and everything went fine, as you mentioned.

But yes, something must not have finished with this move.

 

Thanks for your reply.

 

regards

philipp