Hi,
I’ve configured a Protection Manager dataset with a provisioning policy that creates secondary (SnapMirror) storage when a primary volume is added to the dataset. The dataset's protection policy is just a single Mirror. The provisioning policy uses a resource pool containing 4 aggregates at the DR site and has the default options (i.e. it only requires RAID-DP).
The process is:
- Add a volume to the dataset.
- The dataset creates a SnapMirror relationship, using a ‘Secondary’ type provisioning policy for the Mirror node, so it provisions a secondary volume from a resource pool called ‘DR - SATA’.
- Resource pool 'DR - SATA' contains 4 aggregates, all built from 1 TB SATA drives and all the same size. Utilization on these aggregates is as follows:
drfiler1:aggr00_sata = 69%
drfiler1:aggr01_sata = 74%
drfiler2:aggr02_sata = 33%
drfiler2:aggr03_sata = 40%
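To summarize the moving parts, here is the layout as I understand it. This is just illustrative shorthand in Python (the names match the above; the structure is my own, not Protection Manager objects or API output):

```python
# Shorthand sketch of the setup, not real Protection Manager objects.
dr_sata_pool = {
    "name": "DR - SATA",
    "aggregates": [             # all the same size, 1 TB SATA drives
        "drfiler1:aggr00_sata",   # 69% used
        "drfiler1:aggr01_sata",   # 74% used
        "drfiler2:aggr02_sata",   # 33% used
        "drfiler2:aggr03_sata",   # 40% used
    ],
}

dataset = {
    "protection_policy": "Mirror",        # single Mirror connection
    "mirror_node": {
        "provisioning_policy": {
            "type": "Secondary",
            "requirements": ["RAID-DP"],  # default options only
        },
        "resource_pool": dr_sata_pool,    # secondary volumes provisioned from here
    },
}
```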
My question is about how the provisioning policy selects the aggregate for the SnapMirror destination volumes. I’ve tested this and, strangely, it is selecting aggr00_sata for the mirror destination volumes. Based on utilization I would expect it to choose the aggregate with the most free space (drfiler2:aggr02_sata), and since disk I/O and filer CPU load are generally much lighter on drfiler2, I don't think it can be choosing drfiler1 on performance grounds either.
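For what it's worth, here is the back-of-the-envelope check behind that expectation. This is only my assumption of how selection ought to work (filter on the policy requirements, then pick the most free space), not a description of Protection Manager's actual algorithm, and the 10 TB aggregate size is a made-up round number:

```python
# Hypothetical check: with equally sized aggregates, lowest utilization
# also means most free space, so I'd expect aggr02_sata to be chosen.

AGGR_SIZE_TB = 10.0  # assumed size; all four aggregates are the same size

utilization = {  # % used, from the resource pool 'DR - SATA'
    "drfiler1:aggr00_sata": 69,
    "drfiler1:aggr01_sata": 74,
    "drfiler2:aggr02_sata": 33,
    "drfiler2:aggr03_sata": 40,
}

# Free space in TB for each aggregate.
free_tb = {aggr: AGGR_SIZE_TB * (100 - used) / 100
           for aggr, used in utilization.items()}

for aggr, free in sorted(free_tb.items(), key=lambda kv: -kv[1]):
    print(f"{aggr}: {free:.1f} TB free")

# My assumed selection rule: most free space wins.
print("Expected pick:", max(free_tb, key=free_tb.get))
# -> drfiler2:aggr02_sata, yet the mirror destinations land on aggr00_sata.
```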
Does anyone know of any logs or other output that can be used to determine what the decision-making process was?
Thanks,
Craig