ONTAP Discussions

Deduplication - Can this affect the ability to grow a flex volume?

mark_whitelaw

I have a FAS2050 in production and a FAS2020 in DR, both running ONTAP 7.3.4 and replicating with SnapMirror.

  • An existing 1 TB FlexVol is replicated with SnapMirror to the FAS2020 in DR.
  • The volume has deduplication turned on at the source.

I have a few questions:

  1. I have looked at the size limits for the FAS2020 with ONTAP 7.3.4 in TR-3505, and it states a limit of 1 TB. Does this prevent growing the volume past the 1 TB limit (we get an error, 'too larger 1024')? Or will it allow the volume resize and just error out on the deduplication process?
  2. Does this limit take deduplicated blocks into account even if you have SIS turned off on the volume, and hence prevent you from increasing the size of the FlexVol?

Any help here would be much appreciated.

4 REPLIES

peter_lehmann

Hi

This error is expected. If you want to grow the destination volume beyond the 1 TB dedupe limit, you will have to (a rough command sketch follows the warning below):

1. break the SnapMirror relationship

2. stop dedupe on the volume

3. "sis undo" the dedupe (requires priv set diag)

4. and then update the SnapMirror

WARNING - for this procedure:

To increase the size of an A-SIS enabled volume beyond the maximum limit for A-SIS, the A-SIS service must be turned off and the changes undone. Undoing A-SIS will re-inflate the file system and could require more disk space than is available in the A-SIS enabled volume. There is no way to expand the volume size until the undo is completed, so the recommended course of action is to create and use a temporary volume and migrate data necessary to free enough space for the re-inflation to complete.
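As a rough sketch, the steps above would look something like this on the destination controller (system names, volume names, and sizes here are placeholders, and the "#" text is just annotation; double-check free space before the undo):

    dst> snapmirror break dstvol                  # 1. break the mirror; the destination volume becomes writable
    dst> sis off /vol/dstvol                      # 2. stop/disable dedupe on the volume
    dst> priv set diag
    dst*> sis undo /vol/dstvol                    # 3. re-inflate shared blocks (needs enough free space)
    dst*> priv set admin
    dst> vol size dstvol 1500g                    # grow past the old 1 TB dedupe limit
    dst> snapmirror resync -S src:srcvol dstvol   # 4. re-establish the relationship so updates can resume

After a break, the relationship normally has to be re-established with "snapmirror resync" before scheduled updates continue, which is what step 4 means in practice.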

The destination volume will always be non-deduplicated if you want to stay with the FAS2020 and go beyond 1 TB.

Other options:

- replace the FAS2020 with a FAS2040, FAS2050, or an even bigger system

- upgrade to ONTAP 8.0.1 (or higher), where the limits are different (but you'll also need to replace the FAS2020 in this case, because this controller cannot run anything higher than the 7.x releases)...

Hope this helps,

Peter

mark_whitelaw

Thanks Peter,

Correct me if I'm wrong here, but dedupe on the destination should not really do a lot, as it's turned on at the source, so effectively the deduplicated blocks are being replicated as it is.

So you are getting the benefits anyway. I guess it all just depends on the rate of change and the frequency of the replication passes as to whether you are going to see the full benefits of the dedupe at the destination.

So if this is the case, turning off dedupe at the destination and setting up another FlexVol along with a new SnapMirror relationship (schedule and initialisation) should give similar results, again depending on the rate of change of the data and the frequency of the replication passes.

This way we can allow the volume to grow in a suitable manner (> 1 TB) without being limited by the constraints of the FAS2020 and its ONTAP version at this point, or at least until we hit the 2 TB limit at the source on the FAS2050.
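For what it's worth, a quick way to sanity-check that the destination really is inheriting the savings (volume names below are placeholders) is to compare the space savings on both sides:

    src> sis status /vol/srcvol    # dedupe is enabled and running on the source only
    src> df -s srcvol              # shows used / saved / %saved for the source volume
    dst> df -s dstvol              # after a transfer, the destination should show similar savings

If the %saved figures roughly match after an update, the deduplicated blocks are indeed coming across as-is.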

regards

Mark

peter_lehmann

Hi Mark

Yes, you are correct that the destination is not "involved" in the dedupe process with volume SnapMirror. However, with volume SnapMirror the lower of the two volume size limits takes priority, so you are "bound" to the 1 TB limit of the FAS2020.

If you want to do the "trick" with the new volume on the destination, you would need to change it to a qtree SnapMirror. You then "lose" the bandwidth savings of transferring only deduplicated blocks, but you could grow the destination to the volume limit of the FAS2020 (until you hit the 2 TB limit of the FAS2050).
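If you did go the qtree SnapMirror route, a minimal sketch might look like this (system, volume, and qtree names are placeholders, and the destination qtree must not exist beforehand since the initialize creates it):

    dst> vol create dr_vol aggr0 2000g            # larger destination FlexVol; the dedupe size cap no longer applies here
    dst> snapmirror initialize -S src:/vol/srcvol/data dst:/vol/dr_vol/data
    dst> snapmirror status

Ongoing transfers would then be scheduled via /etc/snapmirror.conf on the destination, as with any SnapMirror relationship.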

Hope this helps,

Peter

Excerpt of TR-3505:

VOLUME SNAPMIRROR

Volume SnapMirror allows you to back up your data to another location for disaster recovery purposes. Deduplication is supported with volume SnapMirror. Volume SnapMirror operates at the physical block level; thus when deduplication is enabled on the source, the data sent over the wire for replication is also deduplicated and therefore the savings are inherited at the destination. This can significantly reduce the amount of network bandwidth required during replication.

To run deduplication with volume SnapMirror:

- Deduplication can only be managed on the source system—the flexible volume at the destination system inherits all the efficiency attributes and storage savings.

- Shared blocks are transferred only once, so deduplication reduces network bandwidth usage.

- The volume SnapMirror update schedule is not tied to the deduplication schedule.

- Maximum volume size limits for deduplicated volumes are constrained to the lower limit between the source and the destination systems.
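As an illustration of the "managed on the source only" point, enabling and scheduling dedupe would only ever be done on the source volume, along these lines (volume name and schedule are just examples):

    src> sis on /vol/srcvol                     # enable dedupe on the source volume
    src> sis config -s sun-sat@23 /vol/srcvol   # dedupe schedule, independent of the SnapMirror schedule
    src> sis start -s /vol/srcvol               # one-off scan of the data already on disk
    src> sis status /vol/srcvol

Nothing is configured on the destination volume; it simply inherits the savings with each volume SnapMirror transfer.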

mark_whitelaw

Thanks again, Peter.

I had overlooked (putting it nicely on myself) the constraint that 'Maximum volume size limits for deduplicated volumes are constrained to the lower limit between the source and the destination systems'.

For now the best approach is to stick with deduplication, restructure the data sets, and create new volumes accordingly within the maximums; at some point the client will look at newer / higher-performing filers.
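For completeness, splitting the data into new source volumes that stay under the applicable dedupe maximums and mirroring each of them could look roughly like this (names and sizes are placeholders only):

    src> vol create vol_data2 aggr1 800g            # keep each source volume under the dedupe limit that applies
    src> sis on /vol/vol_data2
    dst> vol create vol_data2_dr aggr0 800g
    dst> vol restrict vol_data2_dr                  # a volume SnapMirror destination must be restricted before initialize
    dst> snapmirror initialize -S src:vol_data2 vol_data2_dr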

Thanks for your time.

Cheers

Mark
