please explain to me the snap list on the source filer in snapmirror

vol1 is 100% full again. My question is not about solving the volume-full issue, but about understanding SnapMirror in a more granular way. Please see the following outputs from the source filer "filer2". It seems to me that the snapshot created by SnapMirror takes 1911 GB, and the rest of the space is taken by the volume (file system) itself. Could anybody please explain to me in detail the output of "snap list vol1"?

- What exactly does the snapshot include: a complete copy of vol1, plus all snapshots since the first full copy? Why do I have to keep the full copy on the source filer after it has already been copied over to the DR site?

- Has this listed snapshot already been copied to drfiler1, or just the full set of snapshots?

- Is there any way to list the data in more detail, i.e. what is the full copy of the volume, what are the individual snapshots, and when was each snapshot taken?

thanks for your help!

filer2> df -rg vol1
Filesystem               total       used      avail   reserved  Mounted on
/vol/vol1/       8193GB     8148GB        0GB        0GB  /vol/vol1/
/vol/vol1/.snapshot        0GB     1911GB        0GB        0GB  /vol/vol1/.snapshot
filer2> snap list vol1
Volume vol1
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
23% (23%)   23% (23%)  Oct 09 06:00  drfiler(0151735037)_vol1.806 (snapmirror)

Re: please explain to me the snap list on the source filer in snapmirror

Snapshot unique data contains blocks that were deleted or overwritten since the snapshot was taken. I do not think you can see from the snap list output whether a snapshot was fully transferred; you need to check snapmirror status. There is an indirect clue, though: while a SnapMirror transfer is in progress you see two snapshots on the source, so since you see only one we may assume the transfer finished (but whether successfully or not we do not know).
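
As a rough sanity check on your numbers (assuming I am reading your df output right): 1911 GB is roughly 23% of the 8148 GB reported as used, which lines up with the 23% in the %/used column of your snap list. And the check I mean is simply the status command run against the same volume on the source, for example:

filer2> snapmirror status vol1

The exact columns vary a bit by Data ONTAP release, but the Status field shows whether a transfer is currently running or Idle.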

Re: please explain to me the snap list on the source filer in snapmirror

Okay. Where could you see "two snapshots" on the source? I only see one.

The snapmirror was established a month ago. I can only see one line in the result of "snap list". So, does this 1911 GB snapshot include all changes/overwrites since the first initialization?

Re: please explain to me the snap list on the source filer in snapmirror

A snapshot by definition cannot include any change "since". Its content is frozen at the moment the snapshot is created.

Re: please explain to me the snap list on the source filer in snapmirror

Okay. Understood. Thanks for the clarifications. I am sorry, but I still cannot fully resolve the questions in my mind. This snapmirror is scheduled to run once every half hour on the destination, and it appears to be working according to "snapmirror status".

So, does the following line from "snap list" contain every single snapshot (understood, each one is a frozen point-in-time copy) since the first initialization? Is that why it is so big, at 1911 GB? If all snapshots have already been transferred to the destination (I guess this is how the destination keeps the original copy plus all changes made on the source), why do we need to keep all these snapshots on the source?

Thanks for your patience.

filer2> snap list vol1
Volume vol1
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
23% (23%)   23% (23%)  Oct 09 06:00  drfiler(0151735037)_vol1.806 (snapmirror)

Re: please explain to me the snap list on the source filer in snapmirror

SnapMirror does not keep all snapshots. It needs only one, the latest snapshot, as a baseline. During the next update it creates a new snapshot and transfers the difference between the baseline and the current snapshot. After that the old baseline is removed and the last transferred snapshot becomes the new baseline.
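
To illustrate with your snapshot names (the .807 line is hypothetical, just to show the pattern): while an update is running, snap list on the source briefly shows both the old baseline and the new snapshot, roughly like

  %/used       %/total  date          name
----------  ----------  ------------  --------
   ...          ...     Oct 09 06:30  drfiler(0151735037)_vol1.807 (snapmirror)
   ...          ...     Oct 09 06:00  drfiler(0151735037)_vol1.806 (snapmirror)

Once the transfer completes, the .806 baseline is deleted and .807 remains as the new baseline.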

Re: please explain to me the snap list on the source filer in snapmirror

You are incorrect in assuming that the snapshot shown in "snap list" contains every single snapshot since initialization.  On initialization, a "base" snapshot will be taken at the source, and that point-in-time data is copied over to the destination volume.  This initialization data includes all existing snapshots, including the base that was just taken.  At this point you've got a "Snapmirrored" relationship.  Remember that, at the time of creation, a snapshot takes no real space, since it just contains pointers back to the original data.  Only as that original data changes are the snapshot blocks used.

When you update the existing relationship, a "differential" snap is taken at the source.  These two source snaps plus the destination snaps (copied over during the last transfer) are used to compute the data that has changed since the last transfer.  These differences are copied over to the destination volume - including the snap just taken - and then the base snapshot at the source is removed, since the newer snapshot now has the point-in-time reference image, and it now becomes the base for the next update.  This is what aborzenkov is referring to when he says that you see two source snapshots when a transfer is in progress.

So your 1911G snapshot essentially contains the _changes_ to the volume between when the snapshot was taken and now, _not_ the changes since initialization.
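
For reference, those two phases map onto the commands you would normally run on the destination (names taken from your post; the syntax is from memory, so check it against the man pages for your release):

drfiler1> snapmirror initialize -S filer2:vol1 drfiler1:vol1
drfiler1> snapmirror update drfiler1:vol1

The first performs the one-time baseline copy; the second performs an incremental transfer against the current baseline, which is also what the scheduled updates in /etc/snapmirror.conf do.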

You say that the snapmirror has been scheduled for every 30 minutes - but this snapshot is from Oct 9.  The date of the snapshots will update with a successful transfer (and the size will also go down), so I think something is wrong with the schedule.  What does "snapmirror status" show as the lag time?  Will you post the "snap list" output of the destination volume too?

Bill

Re: please explain to me the snap list on the source filer in snapmirror

Hi Bill,

Thank you so much for such a detailed explanation, which cleared up quite a few confusions in my mind.

As you indicated, there must be something wrong with the snapmirror, and now I feel the snapshot should not be so big (1911 GB). The total volume size is about 8 TB.

I am sorry, but I did not state my situation accurately:

a) The snapmirror for this volume is scheduled as follows, not every half hour as I said earlier:
netapp2:vol1 drfiler1:vol1 - 0-59/59 * * *

How should this schedule be interpreted? Does the update start every hour based on it? Maybe this schedule is why the volume gets full every month or so?

b) The snap list output you saw earlier was from 10/09, when the volume got full, and I therefore broke the snapmirror off that day.

The following are the outputs you asked for, again as of 10/09:

drfiler1> snap list vol1
Volume vol1
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Oct 09 06:00  drfiler1(0151735037)_vol1.806
  0% ( 0%)    0% ( 0%)  Oct 09 05:59  drfiler1(0151735037)_vol1.805

drfiler1> df -rg vol1
Filesystem               total       used      avail   reserved  Mounted on
/vol/vol1/       8193GB     8135GB       57GB        0GB  /vol/vol1/
/vol/vol1/.snapshot        0GB        0GB        0GB        0GB  /vol/vol1/.snapshot

drfiler1> snapmirror status vol1
Snapmirror is on.
Source                 Destination              State          Lag        Status
netapp2:vol1  drfiler1:vol1  Broken-off     126:33:39  Idle

Thanks again for your patience.

Re: please explain to me the snap list on the source filer in snapmirror

0-59/59 would, to me, indicate minutes 0 and 59 of every hour, i.e. two updates a minute apart, which is a convoluted way to get there and would explain your destination snapshots being a minute apart.  You should be able to see in /etc/messages and/or /etc/log/snapmirror what it's trying to do and what it's saying.
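
For reference (from memory of the 7-mode snapmirror.conf format, so double-check the na_snapmirror.conf man page), the four trailing fields of the entry are a cron-style schedule in the order minute, hour, day-of-month, day-of-week:

netapp2:vol1  drfiler1:vol1  -     0-59/59  *     *    *
source        destination    args  minute   hour  dom  dow

So the 0-59/59 expression sits entirely in the minute field.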

Try changing your schedule to 59 * * * (which will run at 1 minute before each hour), resync the relationship, and see what happens.  You will need to grow the source so it can write a new snapshot, but after the sync it should remove that 1911G one.
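
Concretely, the /etc/snapmirror.conf entry on drfiler1 would become something like:

netapp2:vol1 drfiler1:vol1 - 59 * * *

and the resync is run from the destination, for example:

drfiler1> snapmirror resync -S netapp2:vol1 drfiler1:vol1

Exact options can differ by release; note that resync needs a common snapshot between source and destination to work from.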

Bill

Re: please explain to me the snap list on the source filer in snapmirror

Got your point.

I cannot grow the source, since the aggregate where the volume is located is completely full.

Could I remove the 1911G snapshot first? It seems to me that this snapshot may already be corrupted. If I do, would I have to reinitialize, or could I resync?