ONTAP Discussions

Removing a couple snapmirror snapshots

lutzadmins
6,903 Views

We are having an issue with how much data is getting snapmirrored to our DR site, and until we figure it out we will never catch up on our lag time.  Can we just delete a daily snapshot so that it will not get transferred, or is that a bad thing?  We need to stop all transfers, remove the last couple of backups, and start fresh.  Is there an easier way to get this accomplished?  Any help is appreciated.  Thanks

Perry

7 REPLIES

peter_lehmann

Hi Perry

*We are having an issue with how much data is getting snapmirrored to our DR site*

This seems to be something that will not get solved by my answer below. You need to make sure that the WAN bandwidth matches the amount of data you need to transfer within the time you have. Sometimes one needs to add bandwidth to solve this, or change the volume layout, or use compression with SnapMirror... whatever fits your needs best.

You can delete some of the daily, hourly, and weekly snapshots to optimise the transfer of changed blocks, but this will probably not account for a lot of blocks.

Use the command

filer> snap delta "volname"

to find out how much changed data you have per snapshot, and then find a way to minimize the data change within this volume, or see the first part of the answer.
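If you capture that output to a file, a short script can rank the deltas so the heaviest snapshots stand out at a glance. This is only a sketch: the column layout assumed below (From Snapshot, To, KB changed, Time, Rate) is based on typical 7-Mode `snap delta` output, so verify it against your own filer before relying on it.

```python
import re

# Match one data row of assumed 7-Mode "snap delta" output:
#   <from-snapshot>  <to-snapshot>  <KB changed>  <Nd HH:MM>  <rate>
ROW = re.compile(
    r"^(?P<frm>\S+)\s+(?P<to>\S.*?\S)\s+(?P<kb>\d+)\s+\d+d\s+\d+:\d+"
)

def rank_deltas(snap_delta_output: str):
    """Return (from_snapshot, to_snapshot, kb_changed) tuples, biggest first."""
    rows = []
    for line in snap_delta_output.splitlines():
        m = ROW.match(line.strip())
        if m:
            rows.append((m.group("frm"), m.group("to"), int(m.group("kb"))))
    # Largest change first, so the snapshot pair driving the transfer is on top.
    return sorted(rows, key=lambda r: r[2], reverse=True)
```

Feed it the saved output of `snap delta volname`, and the top entry tells you which snapshot interval carries the most changed blocks.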

Peter

lutzadmins

Hi Peter,

Thanks for the reply.  I just wanted to clarify a couple of things I may have left out.  We run SnapMirror and SnapVault jobs nightly.  The SnapMirror job only runs one nightly transfer on a 7-day schedule; this is strictly for DR purposes.  The SnapMirror jobs are running with compression enabled.  We have 3 datastores that get snapmirrored, and they were all about 10 to 12GB in size, which was OK, but then all of a sudden 2 of the datastores went up to 25 to 30GB.  Now the lag times are too long since the jobs can't finish in a timely manner.

It is only the VMware side of the backup that I am concerned with.  We run SnapVault for our CIFS data and SnapMirror for all the VMware data.  As for what I can delete or not, my .snapshot folder on the VMware side only has 8 snapshot folders.  I was curious if I could stop the job, remove one or two of the folders, then restart the job without causing any issues.  I am trying to figure out why the size has changed, but until then I need to find a way to get back on a schedule.  I have tweaked everything so that all the swap locations are on a different datastore that doesn't get replicated.  Anyhow, that is what I'm looking to do, and I'm not sure if I can do it without issues.  Thanks again for the reply.

Perry

peter_lehmann

Hi Perry

Thanks for the detailed information.

I have a volume with lots of snapshots (smsql and snapmirror related):

filer> snap list volume_sql

working...

  %/used       %/total  date          name

----------  ----------  ------------  --------

  0% ( 0%)    0% ( 0%)  Jul 27 14:01  filer1(0050406717)_filer2_volume_sql.2124 (snapmirror)

  0% ( 0%)    0% ( 0%)  Jul 27 14:01  @snapmir@{2D2893E5-74B3-4C7C-8B55-39DA1E1E2E91}

  0% ( 0%)    0% ( 0%)  Jul 27 14:00  sqlsnap__volume__recent

  0% ( 0%)    0% ( 0%)  Jul 27 12:01  @snapmir@{B8FF2030-61EC-493B-9E43-63333A06046A}

  0% ( 0%)    0% ( 0%)  Jul 27 12:00  sqlsnap__volume_07-27-2011_12.00.19

  0% ( 0%)    0% ( 0%)  Jul 27 10:00  sqlsnap__volume_07-27-2011_10.00.20

  0% ( 0%)    0% ( 0%)  Jul 27 08:00  sqlsnap__volume_07-27-2011_08.00.20

  0% ( 0%)    0% ( 0%)  Jul 27 06:01  sqlsnap__volume_07-27-2011_06.00.57

  0% ( 0%)    0% ( 0%)  Jul 27 04:01  sqlsnap__volume_07-27-2011_04.00.45

  0% ( 0%)    0% ( 0%)  Jul 26 20:00  sqlsnap__volume_07-26-2011_20.00.19

  0% ( 0%)    0% ( 0%)  Jul 26 18:00  sqlsnap__volume_07-26-2011_18.00.19

  0% ( 0%)    0% ( 0%)  Jul 26 16:00  sqlsnap__volume_07-26-2011_16.00.19

  0% ( 0%)    0% ( 0%)  Jul 26 14:00  sqlsnap__volume_07-26-2011_14.00.19

  0% ( 0%)    0% ( 0%)  Jul 26 12:00  sqlsnap__volume_07-26-2011_12.00.20

In this case, you can delete all snapshots except the top few with (snapmirror) and @snapmir@ in their names.

Your volume will also have some snapshots with (snapmirror) in their names and others without. The ones without can be deleted without risking having to reinitialize the snapmirror.

However, if you need to catch up with the snapmirror, you will need to give the next update enough time to complete, and then the one after that as well, and so forth.
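To sanity-check which names fall into the "safe" bucket before touching anything, you could filter the `snap list` output mechanically. A rough sketch (the column layout and markers are assumptions based on typical 7-Mode output; anything marked busy, (snapmirror), or @snapmir@ is kept, never deleted):

```python
import re

# Match one data row of assumed 7-Mode "snap list" output:
#   <n>% ( <n>%)   <n>% ( <n>%)  Mon DD HH:MM  <snapshot name [flags]>
LINE = re.compile(
    r"^\s*\d+%\s*\(\s*\d+%\)\s+\d+%\s*\(\s*\d+%\)\s+"
    r"[A-Za-z]{3}\s+\d+\s+\d+:\d+\s+(?P<name>.+)$"
)

def deletable_snapshots(snap_list_output: str):
    """Return snapshot names with no busy/snapmirror/@snapmir@ marker."""
    keep_markers = ("snapmirror", "busy")
    out = []
    for line in snap_list_output.splitlines():
        m = LINE.match(line)
        if not m:
            continue  # header, separator, or "working..." line
        name = m.group("name").strip()
        if name.startswith("@snapmir@"):
            continue  # replication-related consistency snapshot: keep it
        if any(marker in name for marker in keep_markers):
            continue  # busy or snapmirror baseline: keep it
        out.append(name)
    return out
```

Treat the result as a review list, not a delete script: since these snapshots came from SMVI, they should still be removed through SMVI rather than directly on the filer.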

Wishing you patience

Peter

PS If you can provide the snap list I can give you even more details.

lutzadmins

That is great!  Thanks for the detail!  Here is my snap list:

Volume lutz_datastore0

working...

  %/used       %/total  date          name

----------  ----------  ------------  --------

  1% ( 1%)    1% ( 1%)  Jul 26 17:03  smvi__NIghtly Datastore0_recent

  7% ( 6%)    3% ( 3%)  Jul 25 17:03  smvi__NIghtly Datastore0_20110725170001

  8% ( 1%)    4% ( 1%)  Jul 25 11:03  LUTZDR1(0135105169)_lutz_datastore0.172 (busy,snapmirror)

  9% ( 1%)    4% ( 0%)  Jul 25 01:01  smvi__NIghtly Datastore0_20110725005755 (busy)

10% ( 2%)    5% ( 1%)  Jul 23 17:03  smvi__NIghtly Datastore0_20110723170001 (busy)

13% ( 3%)    6% ( 1%)  Jul 22 18:55  smvi__NIghtly Datastore0_20110722185131 (busy)

13% ( 0%)    6% ( 0%)  Jul 22 18:50  LUTZDR1(0135105169)_lutz_datastore0.171 (busy,snapmirror)

15% ( 2%)    7% ( 1%)  Jul 21 17:03  smvi__NIghtly Datastore0_20110721170002

16% ( 2%)    8% ( 1%)  Jul 20 17:03  smvi__NIghtly Datastore0_20110720170001

Volume np_datastore0

working...

  %/used       %/total  date          name

----------  ----------  ------------  --------

13% (13%)    5% ( 5%)  Jul 26 21:03  smvi__Lutz_Datastore2_recent

13% ( 0%)    6% ( 0%)  Jul 26 19:02  smvi__Lutz Datastore1_recent

13% ( 0%)    6% ( 0%)  Jul 26 17:03  smvi__NIghtly Datastore0_recent

20% ( 8%)    9% ( 4%)  Jul 25 21:02  smvi__Lutz_Datastore2_20110725210002

21% ( 2%)   10% ( 1%)  Jul 25 19:01  smvi__Lutz Datastore1_20110725190001

21% ( 1%)   10% ( 0%)  Jul 25 17:03  smvi__NIghtly Datastore0_20110725170001

23% ( 3%)   12% ( 1%)  Jul 25 01:01  smvi__NIghtly Datastore0_20110725005755

25% ( 3%)   13% ( 1%)  Jul 24 21:03  smvi__Lutz_Datastore2_20110724210001

25% ( 0%)   13% ( 0%)  Jul 24 19:01  smvi__Lutz Datastore1_20110724190001

28% ( 4%)   15% ( 2%)  Jul 23 21:02  smvi__Lutz_Datastore2_20110723210001

28% ( 0%)   15% ( 0%)  Jul 23 19:01  smvi__Lutz Datastore1_20110723190001

28% ( 0%)   15% ( 0%)  Jul 23 17:03  smvi__NIghtly Datastore0_20110723170001

31% ( 6%)   17% ( 2%)  Jul 22 21:02  smvi__Lutz_Datastore2_20110722210001

32% ( 2%)   18% ( 1%)  Jul 22 19:01  smvi__Lutz Datastore1_20110722190001

32% ( 0%)   18% ( 0%)  Jul 22 18:55  smvi__NIghtly Datastore0_20110722185131

35% ( 7%)   21% ( 3%)  Jul 21 21:03  smvi__Lutz_Datastore2_20110721210002

36% ( 2%)   22% ( 1%)  Jul 21 19:02  smvi__Lutz Datastore1_20110721190001

36% ( 0%)   22% ( 0%)  Jul 21 17:03  smvi__NIghtly Datastore0_20110721170002

39% ( 7%)   25% ( 3%)  Jul 20 21:02  smvi__Lutz_Datastore2_20110720210002

40% ( 2%)   26% ( 1%)  Jul 20 19:02  smvi__Lutz Datastore1_20110720190001

40% ( 0%)   26% ( 0%)  Jul 20 17:03  smvi__NIghtly Datastore0_20110720170001

Volume lutz_datastore1

working...

  %/used       %/total  date          name

----------  ----------  ------------  --------

  1% ( 1%)    0% ( 0%)  Jul 26 19:05  LUTZDR1(0135105169)_lutz_datastore1.163 (busy,snapmirror)

  1% ( 0%)    0% ( 0%)  Jul 26 19:02  smvi__Lutz Datastore1_recent (busy)

  4% ( 2%)    1% ( 1%)  Jul 25 19:01  smvi__Lutz Datastore1_20110725190001 (busy)

  5% ( 2%)    2% ( 1%)  Jul 24 19:02  LUTZDR1(0135105169)_lutz_datastore1.162 (busy,snapmirror)

  5% ( 0%)    2% ( 0%)  Jul 24 19:02  smvi__Lutz Datastore1_20110724190001

  7% ( 2%)    2% ( 1%)  Jul 23 19:01  smvi__Lutz Datastore1_20110723190001

12% ( 5%)    4% ( 2%)  Jul 22 19:01  smvi__Lutz Datastore1_20110722190001

14% ( 3%)    4% ( 1%)  Jul 21 19:02  smvi__Lutz Datastore1_20110721190001

16% ( 3%)    5% ( 1%)  Jul 20 19:02  smvi__Lutz Datastore1_20110720190001

Volume lutz_datastore2

working...

  %/used       %/total  date          name

----------  ----------  ------------  --------

  1% ( 1%)    0% ( 0%)  Jul 26 21:03  LUTZDR1(0135105169)_lutz_datastore2.156 (busy,snapmirror)

  1% ( 0%)    0% ( 0%)  Jul 26 21:03  smvi__Lutz_Datastore2_recent (busy)

  4% ( 3%)    1% ( 1%)  Jul 25 21:06  LUTZDR1(0135105169)_lutz_datastore2.155 (busy,snapmirror)

  4% ( 0%)    1% ( 0%)  Jul 25 21:03  smvi__Lutz_Datastore2_20110725210002

  7% ( 3%)    2% ( 1%)  Jul 24 21:03  smvi__Lutz_Datastore2_20110724210001

  8% ( 1%)    3% ( 0%)  Jul 23 21:02  smvi__Lutz_Datastore2_20110723210001

10% ( 2%)    4% ( 1%)  Jul 22 21:02  smvi__Lutz_Datastore2_20110722210001

12% ( 3%)    5% ( 1%)  Jul 21 21:03  smvi__Lutz_Datastore2_20110721210002

15% ( 3%)    6% ( 1%)  Jul 20 21:02  smvi__Lutz_Datastore2_20110720210002

peter_lehmann

Because the snapshots are related to SMVI, make sure to delete them through SMVI (delete backup)! And ONLY if you really want to delete them...

Here is a list of what you could delete:

Volume lutz_datastore0

working...

  %/used       %/total  date          name

----------  ----------  ------------  --------

  1% ( 1%)    1% ( 1%)  Jul 26 17:03  smvi__NIghtly Datastore0_recent

  7% ( 6%)    3% ( 3%)  Jul 25 17:03  smvi__NIghtly Datastore0_20110725170001

16% ( 2%)    8% ( 1%)  Jul 20 17:03  smvi__NIghtly Datastore0_20110720170001

Volume np_datastore0

working...

  %/used       %/total  date          name

----------  ----------  ------------  --------

13% (13%)    5% ( 5%)  Jul 26 21:03  smvi__Lutz_Datastore2_recent

13% ( 0%)    6% ( 0%)  Jul 26 19:02  smvi__Lutz Datastore1_recent

13% ( 0%)    6% ( 0%)  Jul 26 17:03  smvi__NIghtly Datastore0_recent

20% ( 8%)    9% ( 4%)  Jul 25 21:02  smvi__Lutz_Datastore2_20110725210002

21% ( 2%)   10% ( 1%)  Jul 25 19:01  smvi__Lutz Datastore1_20110725190001

21% ( 1%)   10% ( 0%)  Jul 25 17:03  smvi__NIghtly Datastore0_20110725170001

23% ( 3%)   12% ( 1%)  Jul 25 01:01  smvi__NIghtly Datastore0_20110725005755

25% ( 3%)   13% ( 1%)  Jul 24 21:03  smvi__Lutz_Datastore2_20110724210001

25% ( 0%)   13% ( 0%)  Jul 24 19:01  smvi__Lutz Datastore1_20110724190001

28% ( 4%)   15% ( 2%)  Jul 23 21:02  smvi__Lutz_Datastore2_20110723210001

28% ( 0%)   15% ( 0%)  Jul 23 19:01  smvi__Lutz Datastore1_20110723190001

28% ( 0%)   15% ( 0%)  Jul 23 17:03  smvi__NIghtly Datastore0_20110723170001

31% ( 6%)   17% ( 2%)  Jul 22 21:02  smvi__Lutz_Datastore2_20110722210001

32% ( 2%)   18% ( 1%)  Jul 22 19:01  smvi__Lutz Datastore1_20110722190001

32% ( 0%)   18% ( 0%)  Jul 22 18:55  smvi__NIghtly Datastore0_20110722185131

35% ( 7%)   21% ( 3%)  Jul 21 21:03  smvi__Lutz_Datastore2_20110721210002

36% ( 2%)   22% ( 1%)  Jul 21 19:02  smvi__Lutz Datastore1_20110721190001

36% ( 0%)   22% ( 0%)  Jul 21 17:03  smvi__NIghtly Datastore0_20110721170002

39% ( 7%)   25% ( 3%)  Jul 20 21:02  smvi__Lutz_Datastore2_20110720210002

40% ( 2%)   26% ( 1%)  Jul 20 19:02  smvi__Lutz Datastore1_20110720190001

40% ( 0%)   26% ( 0%)  Jul 20 17:03  smvi__NIghtly Datastore0_20110720170001

Volume lutz_datastore1

working...

  %/used       %/total  date          name

----------  ----------  ------------  --------

  5% ( 0%)    2% ( 0%)  Jul 24 19:02  smvi__Lutz Datastore1_20110724190001

  7% ( 2%)    2% ( 1%)  Jul 23 19:01  smvi__Lutz Datastore1_20110723190001

12% ( 5%)    4% ( 2%)  Jul 22 19:01  smvi__Lutz Datastore1_20110722190001

14% ( 3%)    4% ( 1%)  Jul 21 19:02  smvi__Lutz Datastore1_20110721190001

16% ( 3%)    5% ( 1%)  Jul 20 19:02  smvi__Lutz Datastore1_20110720190001

Volume lutz_datastore2

working...

  %/used       %/total  date          name

----------  ----------  ------------  --------

  4% ( 0%)    1% ( 0%)  Jul 25 21:03  smvi__Lutz_Datastore2_20110725210002

  7% ( 3%)    2% ( 1%)  Jul 24 21:03  smvi__Lutz_Datastore2_20110724210001

  8% ( 1%)    3% ( 0%)  Jul 23 21:02  smvi__Lutz_Datastore2_20110723210001

10% ( 2%)    4% ( 1%)  Jul 22 21:02  smvi__Lutz_Datastore2_20110722210001

12% ( 3%)    5% ( 1%)  Jul 21 21:03  smvi__Lutz_Datastore2_20110721210002

15% ( 3%)    6% ( 1%)  Jul 20 21:02  smvi__Lutz_Datastore2_20110720210002

lutzadmins

Well, I'm not really sure what you meant by removing them from SMVI.  I know what SMVI is but not sure how those particular folders get removed.  Are you referring to removing the backup job and recreating it?  Got one more question concerning this issue.  From the attachment below, can you tell me if both np_Datastore0 and Lutz_Datastore0 are being replicated?  I have all the swap volumes on another datastore, and they are not supposed to be replicated to the DR site.  Having both of those boxes checked in the backup makes it seem to me that both are getting done.  Any thoughts?

peter_lehmann
6,903 Views

In SMVI there is a list of available backups (in the restore section). There you have the option to select a backup and delete it. This will also delete the corresponding snapshot on the storage system. You do not need to delete the job.

There seem to be three volumes being replicated:

lutz_datastore0

lutz_datastore1

lutz_datastore2

The np_datastore0 volume is not being replicated; only local snapshots are being created from SMVI backups.

Peter
