2011-05-18 09:35 AM
This is a question regarding how intelligent Protection Manager is, or how intelligent it can be made through configuration changes. I have a dataset with 21 volumes of different types that are associated with a single SnapManager for Oracle profile. When the dataset starts replicating to the remote data center, Protection Manager divides the throttle defined in the throttle schedule by the number of volumes in the dataset and replicates each volume concurrently at that rate, i.e., the total throttle divided by 21 (the number of volumes). Obviously, with 21 volumes of different types and different data growth rates, this fixed per-volume transfer rate creates a situation where the volumes with higher data growth rates take much longer to replicate. Is there any way to configure Protection Manager to throttle more dynamically, so that when one of the volumes in the dataset completes, its transfer rate is reallocated to the volumes still replicating?
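The arithmetic behind the question can be sketched as follows. This is only an illustration: the 64,512 KB/s total throttle and the function names are assumptions chosen so the even split matches the ~3072 KB/s per-volume rate mentioned later in the thread; only the even static split reflects Protection Manager's observed behavior.

```python
def static_throttle(total_kbps, num_volumes):
    """Observed PM behavior: an even split, fixed for the whole job."""
    return total_kbps / num_volumes

def dynamic_throttle(total_kbps, active_volumes):
    """Hypothetical behavior: re-split among volumes still transferring."""
    return total_kbps / active_volumes

total = 64512  # illustrative total throttle in KB/s (21 * 3072)

# Static: every volume gets the same slice until the whole job completes.
per_volume = static_throttle(total, 21)   # 3072.0 KB/s each

# Dynamic (hypothetical): the last remaining volume would get the full rate.
last_volume = dynamic_throttle(total, 1)  # 64512.0 KB/s
```

The gap between those two numbers is exactly what the question is about: with the static split, the last large volume crawls at 1/21 of the available bandwidth even when the other 20 transfers are finished.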
Thanks in advance!
2011-05-18 06:58 PM
Actually, I don't think it does reallocate the transfer rate to the volumes that are still replicating; I don't see that happening. One of the 21 volumes in the dataset kept replicating for more than 12 hours after the others had completed, until I ran "snapmirror throttle 3072 filer:volumename", at which point it picked up speed tremendously and finished the last 8 GB within a few hours. Does anyone have any ideas?
2011-06-02 08:14 PM
I'll have to take that back: throttling in PM is not dynamic. It is static and remains so until the job completes.
It doesn't reallocate the bandwidth of the finished volumes to the ones still running.