I have a customer who is trying to add a volume to a dataset that uses QSM for backups. When he adds it, the conformance check indicates that it will provision a new destination volume for this relationship on another aggregate in the resource pool, instead of placing it in the same destination volume as the other source volume's backup. Here is some config info:
SourceVol1 = 2TB, 495GB used, 0 snap reserve
Resource pool has 7 aggregates
SourceVol1's aggregate has 3.6TB of space available
SourceVol2, the volume to be added, is 2TB, 450GB used, 0 snap reserve
When I try to add SourceVol2, Protection Manager wants to provision a new 2.64TB volume on another aggregate in the resource pool.
Why is it doing this? I want it to put the backup into the same destination volume as SourceVol1's. Dynamic secondary sizing is enabled, and dpMaxFanInRatio is 100.
I have enough space in that aggregate. And yes, I'm adding it as a qtree, /vol/SourceVol2/-, not a volume.
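For reference, here is roughly how I checked the relevant global options from the DFM CLI (a quick sketch; the exact output format varies by DFM/OnCommand version):

    dfm option list dpDynamicSecondarySizing
    dfm option list dpMaxFanInRatio

Both come back as expected in our case: dynamic secondary sizing enabled and a fan-in ratio of 100.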
So, we manually grew the destination volume of SourceVol1's backup, and now it behaves as we want, putting SourceVol2's backup into the same volume as SourceVol1's.
It looks as if dpDynamicSecondarySizing is not working for some reason.
Yup. All the criteria are met, but it is still attempting to provision a new volume for the qtree on a different aggregate in the resource pool, even though there is plenty of room in the aggregate where we want it to go. However, if we manually resize the volume in which we want this new QSM relationship to reside before we conform it, conformance puts it there. There is no good reason to provision a new volume like it wants to do; for some reason it simply will not resize automatically.
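For anyone else hitting this, the manual workaround looks roughly like this (7-Mode syntax on the secondary controller; the volume name and growth increment here are just examples):

    secondary> vol size dest_sourcevol1_backup +500g

and then re-running conformance on the dataset from the DFM server:

    dfpm dataset conform <dataset-name>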
Another possibility is that resizing the volume would push the containing aggregate's overcommitment past its threshold, which would prevent the resize. Adding some extra logging would tell us exactly why it is creating a new volume. Also, what is the max fan-in ratio? Would a WebEx be possible to find out the root cause?
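You can also read the overcommitment thresholds back from the DFM CLI (option names as I recall them; run dfm option list with no argument to see everything if they differ on your version):

    dfm option list aggrOvercommittedThreshold
    dfm option list aggrNearlyOvercommittedThreshold

Comparing the destination aggregate's committed space against those thresholds should show whether the automatic resize would trip them.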
So, looking at the overcommitment thresholds, it does not appear that this is the issue. Also, we have observed behavior where we have 3 qtrees in a volume to snapmirror, and DPM attempts to create one volume with 2 qtrees and another volume with 1 qtree, all in the same aggregate on the destination. So it does not seem to point to a space issue on the aggregate per se. Do we have the logic for this provisioning process documented somewhere? Maybe we can divine from that where and why it is doing this.
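In the meantime, we'll try a dry-run conformance check to see what DPM intends to provision before it actually does it. If I remember right, dfpm dataset conform takes a -D flag for this, but treat that flag as an assumption and check the dfpm CLI help on your build:

    dfpm dataset conform -D <dataset-name>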
Well, that's indeed a workaround. Nevertheless, our integrator is looking into it with NetApp, and we are probably suffering from a known bug: http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=677951 (Documented Issue 677951, which is funny because it doesn't have any info). It's not certain yet, so we are now running diagnostics with DFM and NetApp will investigate. I'll keep this thread updated with the findings.
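For the diagnostics step, we are collecting data from the DFM server along these lines (dfm diag is the general server diagnostic dump; whether support also wants per-dataset logs is something NetApp will tell us):

    dfm diag > dfm-diag.txt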