ONTAP Discussions

What is in the snapshot created by the deduplication process?

volaresteve
3,880 Views

I had to reinitialize deduplication on a large volume. Right now the snapshot it created is about 4TB. I would like a better understanding of what is in that snapshot and when I will get the space back. Please help!

4 REPLIES

paleon
3,880 Views

Is the snapshot the one created by the deduplication job?  In other words, does the snapshot name start with "sis."?  If so, as per NetApp's KB article 3011755, once all deduplication jobs on the volume are complete, the SIS snapshot can be deleted.  Please read the entire article for specific details.  (https://kb.netapp.com/support/index?page=content&id=3011755).


If the snapshot was not created by the deduplication process and existed before the deduplication job finished, the space will be freed when the snapshot is deleted.  The snapshot contains an image of the data as it existed at a specific point in time.  If a snapshot exists before a deduplication job runs, that snapshot will contain the image of the data before it was deduplicated.  The active file system no longer points to the duplicate data blocks, but the inode pointers in the snapshot still do.
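To make that bookkeeping concrete, here is a toy Python sketch of the idea.  The block names and structures are invented purely for illustration; this is not how WAFL is implemented internally:

```python
# Toy model of block reclamation: a block is freeable only when neither the
# active file system nor any snapshot still points to it.
# Names are illustrative, not ONTAP internals.

active_fs = {"blk1", "blk2", "blk2_dup", "blk3"}   # pointers before dedup
snapshot  = set(active_fs)                         # snapshot taken pre-dedup

# Deduplication rewrites the active file system to drop the duplicate pointer.
active_fs.discard("blk2_dup")

def freeable(block):
    """A block can be reclaimed only if nothing references it."""
    return block not in active_fs and block not in snapshot

print(freeable("blk2_dup"))   # False - the snapshot still holds it
snapshot.clear()              # delete the pre-dedup snapshot
print(freeable("blk2_dup"))   # True  - the space is finally returned
```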

I hope this answers your question.  If not, please let me know.

volaresteve
3,880 Views

I have read that article before. I am specifically referring to the sis snapshot that is created. My process is still running and the snapshot is growing. I was more curious about what the snapshot contains. I am aware of how it came to be and what happens as the deduplication process finishes. Just looking for more details about how and why it grows to the size it does.

ERIC_TSYS
3,880 Views

Hi,

The snapshot is taken so that changed blocks can be compared against a stable point-in-time copy of the file system, whilst still allowing new data to be written to the file system.

It's a bit like a SnapMirror transfer, where a snapshot is taken on the source and destination, the two snapshots are compared for new blocks, and those blocks are snapmirrored across. Once it's all done the snapshots should automatically disappear. For SnapMirror, however, a baseline snapshot will of course need to be kept.
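Roughly speaking, the changed-block comparison works like diffing two point-in-time block maps. A very simplified Python sketch of the idea (not the actual SnapMirror or SIS engine, and the block names are made up):

```python
# Sketch of a changed-block comparison: diff the block maps of two
# point-in-time copies and keep only what is new or modified.

baseline = {"blk1": "A", "blk2": "B", "blk3": "C"}     # earlier snapshot
current  = {"blk1": "A", "blk2": "B2", "blk4": "D"}    # newer snapshot

changed = {blk: data for blk, data in current.items()
           if baseline.get(blk) != data}

print(changed)   # {'blk2': 'B2', 'blk4': 'D'} - only these blocks transfer
```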

Does this explain it well enough?

Eric

paleon
3,880 Views

To explain why the SIS snapshot grows, I think it is essential to explain how WAFL and snapshots work in general.

When data is written to a striped file system with parity (e.g. RAID4, RAID5, RAID-DP), there is an inherent write performance problem.  If data already exists within a stripe, the system must read the data already contained in the stripe so it can accurately recalculate parity.  In other words, if a server has an 8+1 RAID5 array and a process changes one (1) data block, the server's RAID card will need to read the other 7 data blocks in the stripe (or, at best, the old data block and the old parity block), calculate parity across all 8 data blocks, and then write both the updated data block and the updated parity block.  As a result, data written to a RAID5 array can have very poor performance.
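As a quick illustration of that parity overhead, here is a small Python example using XOR parity (a deliberate simplification of what a real RAID controller does):

```python
from functools import reduce

# Simplified single-parity (RAID4/RAID5-style) stripe using XOR. To update one
# data block, the controller must recompute parity, which means reading the
# untouched blocks in the stripe (or the old block plus the old parity).

stripe = [0b1010, 0b0110, 0b1100, 0b0001]          # 4 toy data blocks
parity = reduce(lambda a, b: a ^ b, stripe)        # parity = XOR of all data

# Change block 1: a full recompute needs every other block read back in.
old = stripe[1]
stripe[1] = 0b1111
parity_full = reduce(lambda a, b: a ^ b, stripe)

# The read-modify-write shortcut still needs two reads (old data, old parity).
parity_rmw = parity ^ old ^ stripe[1]

assert parity_full == parity_rmw                   # both methods agree
```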

NetApp storage controllers resolve this problem in two (2) ways.  The first is using system memory as a write cache.  NVRAM/NVMEM contains a backup copy of the data writes, and the NVRAM/NVMEM is protected by a battery.  The "NV" stands for Non-Volatile.

The WAFL file system provides the second component of the solution.  When existing data is changed, WAFL does not change the existing data blocks.  It allocates new data blocks and updates inodes to point the active file system to the newly allocated data blocks.  This allows WAFL to write data blocks to empty stripes.  Since the stripe is empty, the NetApp does not need to read existing data blocks to correctly calculate parity.  Since the data blocks to be written are already in system memory, calculating the necessary parity blocks can be accomplished very rapidly.  The NetApp then uses background maintenance processes to clean up data blocks which are no longer in use and to re-arrange used data blocks to create empty stripes.  Keeping sufficient unused space in aggregates and volumes is essential to the successful re-arrangement of used data blocks, and re-arranging the used data blocks is essential to maintaining storage system performance.
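If it helps, here is a rough copy-on-write sketch of that "allocate new blocks and repoint the inode" behaviour.  The functions and structures below are invented for illustration only, not ONTAP internals:

```python
# Copy-on-write sketch: an "overwrite" allocates a fresh block and repoints
# the file's block list; the old block is left in place for snapshots or
# later background cleanup.

blocks = {}                 # block number -> data
inode = {"myfile": []}      # file -> list of block numbers
next_free = 0

def write(name, data):
    """Write data to a newly allocated block and point the file at it."""
    global next_free
    blocks[next_free] = data
    inode[name].append(next_free)
    next_free += 1

def overwrite(name, index, data):
    """Never modify the old block; allocate a new one and update the pointer."""
    global next_free
    blocks[next_free] = data
    inode[name][index] = next_free   # old block stays on disk, unmodified
    next_free += 1

write("myfile", "v1")
overwrite("myfile", 0, "v2")
print(inode["myfile"], blocks)   # file points at block 1; block 0 still holds "v1"
```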

Snapshots contain an image of the data as it existed at a point in time.  At a high level, when a snapshot is created on a NetApp volume (or aggregate), the NetApp preserves a copy of the inode table.  This preserves the list of which data blocks are in use.  When data in the active file system is modified or deleted, the inode table for the snapshot will still point to the data blocks which were in use when the snapshot was created.  Data blocks which are part of either the active file system or snapshots are "in use."  Therefore, the background process that cleans up "unused" data blocks will not modify data blocks which are protected by a snapshot.  When a data block is no longer pointed to by the active file system, that data block consumes the snapshot reserve of its volume (or aggregate).  As a result, snapshots only consume space when their data blocks are no longer pointed to by the active file system.

As an example, let's say:

* A 100GB volume has a 20% snapshot reserve and contains a 10GB file.
     - The volume has 80GB of allocated space, of which 10GB (or 12.5%) is used.
     - The volume's snapshot reserve has 20GB of allocated space, of which 0GB (or 0%) is used.
* We then create a snapshot named "test.0".
     - The volume still has 80GB of allocated space, of which 10GB (or 12.5%) is used.
     - The volume's snapshot reserve still has 20GB of allocated space, of which 0GB (or 0%) is used.
* We then delete the 10GB file.
     - The volume still has 80GB of allocated space, of which 0GB (or 0%) is used.
     - The volume's snapshot reserve still has 20GB of allocated space, of which 10GB (or 50%) is used.
* We then create a new 10GB file.
     - The volume still has 80GB of allocated space, of which 10GB (or 12.5%) is used.
     - The volume's snapshot reserve still has 20GB of allocated space, of which 10GB (or 50%) is used.
* We then create a snapshot named "test.1".
     - The volume still has 80GB of allocated space, of which 10GB (or 12.5%) is used.
     - The volume's snapshot reserve still has 20GB of allocated space, of which 10GB (or 50%) is used.
* We then delete the new 10GB file.
     - The volume still has 80GB of allocated space, of which 0GB (or 0%) is used.
     - The volume's snapshot reserve still has 20GB of allocated space, of which 20GB (or 100%) is used.

If we continue creating 10GB files, creating snapshots, and deleting the 10GB files, the snapshot reserve will consume more than 100% of its allocated space.  As a result, the usable space in the volume will not return to 80GB after the deletions.  I recommend creating a small test volume (say 100 MB) and testing this for yourself.
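If you would rather see the accounting without building a test volume, here is a tiny Python simulation of the example above.  It charges data to the snapshot reserve once only snapshots reference it; block sharing and the 4KB block layer are deliberately ignored, so treat it as a sketch rather than real ONTAP behaviour:

```python
# Simulation of the worked example: an 80GB active volume with a 20GB
# snapshot reserve. Files stand in for blocks.

ACTIVE_GB, RESERVE_GB = 80, 20

active = {}          # filename -> size in GB
snapshots = []       # each snapshot is a frozen copy of the active map

def report(step):
    used = sum(active.values())
    # Space charged to the reserve: data held by a snapshot but no longer
    # present in the active file system.
    snap_used = sum(size for snap in snapshots
                    for name, size in snap.items() if name not in active)
    print(f"{step:<24} volume {used}/{ACTIVE_GB} GB used, "
          f"reserve {snap_used}/{RESERVE_GB} GB used")

active["file_a"] = 10;            report("create 10GB file")
snapshots.append(dict(active));   report("snapshot test.0")
del active["file_a"];             report("delete file")
active["file_b"] = 10;            report("create new 10GB file")
snapshots.append(dict(active));   report("snapshot test.1")
del active["file_b"];             report("delete new file")
```

Running it reproduces the numbers in the list above, ending with the reserve at 20/20 GB while the active volume is back to 0 GB used.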

So, getting back to the SIS snapshot...

The SIS snapshot will grow in size if it contains data blocks which are no longer pointed to by the active file system or by a snapshot with an earlier creation date (which might happen if other scheduled snapshots occur after the SIS job is initiated).  Since SIS identifies duplicate data blocks and updates the inode table of the active file system when duplicates are found, those duplicate blocks end up held only by the SIS snapshot, so it is expected for that snapshot to grow in size.  The snapshot will also grow if there are a large number of file deletions or modifications while the SIS job is running, because the SIS snapshot behaves like any other snapshot.

Once the SIS job completes, the SIS snapshot will be deleted.  If the de-duplicated data blocks are no longer pointed to by the active file system and are not pointed to by any snapshots, the data blocks will become unused.

Please let me know if I answered your question successfully.  Also please let me know if you would like any additional clarification on snapshots.
