Protection Manager load balancing and sizing questions

I'm working with a customer to design their PM environment in preparation for the 3.8 release.  A couple of questions have come up that I cannot find any documentation on:

  1. If we create a Resource Pool across multiple aggregates from multiple controllers, how will PM balance the destinations across those aggregates?
  2. With the customer's old SV environment based on BCO, we had to manually balance how many relationships were being SnapVaulted at a time, to stay within the maximum SnapVault thread count.  Is PM capable of managing this, such that if we put 200 SnapVault relationships into a single dataset to kick off at 8pm, it will run the maximum number of jobs that both the source and destination can handle?  Or will it hammer the systems and get the "no available threads" message?
  3. Does PM manage destination volume sizing for SnapVault or SnapMirror relationships?  If the primary data volume is grown, will PM update the size of the mirror destination?  They are also interested in how the sizing is calculated for SV relationships as they are created.

Thanks.

Re: Protection Manager load balancing and sizing questions

Hi Mike --

1.  How does Protection Manager select aggregates?

There's a lot of detail, but the basic plan is that we first filter out all the aggregates that don't work for some reason (e.g. wrong licenses, overfull, storage system down).  Of the ones that are left, we pick the one with the most free space (based on bytes, not percentage).
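In pseudocode terms, the selection described above boils down to a filter followed by a max. The class, field names, and eligibility checks below are my own illustration, not Protection Manager's actual internals:

```python
from dataclasses import dataclass

@dataclass
class Aggregate:
    name: str
    free_bytes: int
    licensed: bool   # required licenses present on the controller
    overfull: bool   # already past its fill threshold
    host_up: bool    # owning storage system is reachable

def eligible(a: Aggregate) -> bool:
    # Filter step: drop aggregates that don't work for some reason.
    return a.licensed and not a.overfull and a.host_up

def pick_aggregate(aggregates):
    candidates = [a for a in aggregates if eligible(a)]
    if not candidates:
        return None
    # Most free space wins -- measured in bytes, not percent free.
    return max(candidates, key=lambda a: a.free_bytes)
```

Note that comparing by absolute free bytes (rather than percentage) means a large, half-full aggregate can beat a small, nearly empty one.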

2.  SnapVault update scheduling.

Protection Manager tries its best to sort this out for you.  There's an option you can set per host specifying how many transfers the host can sustain (in the upcoming release, we can figure this out ourselves, assuming the storage system is running a recent version of ONTAP).  If we overload the storage system, we know how to back off and retry.
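The behavior described above can be sketched as a scheduler that caps concurrent transfers per host and retries with a delay when the host reports no free threads. This is a hypothetical illustration of the idea, not Protection Manager code:

```python
import threading
import time

class TransferScheduler:
    def __init__(self, max_concurrent: int, retry_delay: float = 1.0):
        # max_concurrent models the per-host "sustainable transfers" option.
        self._slots = threading.Semaphore(max_concurrent)
        self._retry_delay = retry_delay

    def run(self, transfer):
        """transfer() returns True on success, or False when the storage
        system has no free transfer threads; back off and retry on False."""
        with self._slots:
            while not transfer():
                time.sleep(self._retry_delay)
```

With a cap like this, submitting 200 relationships at 8pm just queues them; only the configured number run at once, and any "no threads" rejection triggers a retry rather than a failure.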

The bottom line is that we should figure this out so users don't need to.

3.  Secondary volume sizing.

Simple question, complicated answer.  Generally, we try to thin provision the secondary volumes, creating them at the size of the destination aggregate but turning off the space guarantee.  This lets the volume use what space it needs until the aggregate gets close to filling up.  When it does, we'll create a new secondary volume on a new aggregate.  There are ways to limit this if you're not comfortable letting Protection Manager manage the space so aggressively.
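As a rough illustration of that thin-provisioning model (the fill threshold and names here are invented for the sketch, not documented Protection Manager values):

```python
FILL_THRESHOLD = 0.9  # assumed cutoff for "close to filling up"

def needs_new_secondary(aggr_used_bytes: int, aggr_total_bytes: int) -> bool:
    # Once the aggregate nears full, roll over to a new secondary
    # volume on a different aggregate.
    return aggr_used_bytes / aggr_total_bytes >= FILL_THRESHOLD

def create_secondary_volume(aggr_total_bytes: int) -> dict:
    # Nominally as large as the whole aggregate, but with the space
    # guarantee off it only consumes space as backup data lands in it.
    return {"size_bytes": aggr_total_bytes, "space_guarantee": "none"}
```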

In the upcoming release, we're enabling a model where we create smaller secondary volumes and adjust the secondary volume size before each update.  This avoids some issues we discovered with ONTAP 7.3, plus it's easier for humans to wrap their heads around.
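A minimal sketch of that "adjust before each update" idea, assuming the secondary is sized from the primary's used space plus some headroom (the headroom factor here is made up for illustration, not Protection Manager's actual sizing policy):

```python
def secondary_size(primary_used_bytes: int, headroom: float = 0.2) -> int:
    """Resize the secondary to the primary's used space plus headroom,
    instead of thin-provisioning it at the full aggregate size."""
    return int(primary_used_bytes * (1 + headroom))
```

The appeal is that the secondary's reported size now tracks the primary, which is the "easier for humans" property mentioned above.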

-- Pete