ONTAP Discussions

FabricPool and SVM-DR

Kiko

Hi!

I would like to ask a question about FabricPool and its behavior with SVM-DR.
I have a FAS2650 running a CIFS service in production. Alongside it, I have a new A250 for production and a FAS2720 with SATA disks for volume tiering via FabricPool.
The scenario is the following:
The A250 does not have enough physical space to hold all of the CIFS volumes from the FAS2650.

I have to migrate CIFS to the A250, and I intend to do it using SVM-DR. At this point I have a question about FabricPool's behavior.
If I configure SVM-DR replication for CIFS, I want the volumes, as they are replicated from the FAS2650 to the A250, to tier down to the FAS2720 storage through the FabricPool ALL policy. Is this possible? Can FabricPool be enabled on DP volumes of this kind?

The idea is that, as the data is copied, it tiers down to the capacity tier as quickly as possible, so as not to fill up the primary AFF storage.
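
In other words, what I am hoping for on the A250 is something along these lines (just a sketch; the SVM and volume names are placeholders):

    ::> volume show -vserver svm_cifs_dr -fields tiering-policy
    ::> volume modify -vserver svm_cifs_dr -volume cifs_vol1 -tiering-policy all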

Regards.

1 ACCEPTED SOLUTION

scottgelb

Apologies if I misread the workflow... but it sounds like the source volumes are NOT using FabricPool tiering and you want the destination to use the ALL policy. SVM-DR does not allow this, since the destination picks up the source volume's tiering setting. SnapMirror at the volume level will do this with no problem, but SVM-DR will not. I just tested this at a customer: we broke the SVM-DR mirror and enabled the ALL policy, and that worked, but when we resynced the mirror at the SVM-DR level, the policy changed back from ALL to none to match the source. Again, with a volume-to-volume mirror (no SVM-DR), policy none on the source and ALL on the destination works. The only other workaround I can think of is to enable FabricPool tiering on the source to match what you want on the destination. The data will still rehydrate and mirror, which will require two capacity-tier buckets, but that will get the result... or use volume mirrors and recreate all the destination shares, exports, AD configuration, etc.
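
For reference, a rough sketch of the volume-level workaround (assuming the destination FabricPool aggregate is already attached to an object store; the names, sizes, and policy are placeholders):

    # Pre-create the destination as a DP volume with the ALL tiering policy
    ::> volume create -vserver svm_dst -volume cifs_vol1_dst -aggregate aggr1_fp -size 10TB -type DP -tiering-policy all

    # A volume-level mirror leaves the destination's tiering policy alone
    ::> snapmirror create -source-path svm_src:cifs_vol1 -destination-path svm_dst:cifs_vol1_dst -policy MirrorAllSnapshots
    ::> snapmirror initialize -destination-path svm_dst:cifs_vol1_dst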


2 REPLIES

Kiko

Thanks for the explanation; those are very interesting tests.

So the option I see as most feasible is to recreate the shares, permissions, etc., and copy the data with Robocopy or similar from the source (without FabricPool) to the destination (with FabricPool), so that as the data arrives on the new AFF it tiers down to the FAS and frees up space on the All Flash.
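
For example, per share, something like this (the paths and switches are only illustrative; /MIR deletes destination files that no longer exist on the source, so it needs care):

    :: /COPYALL carries data, NTFS ACLs, owner and timestamps; /MT runs multithreaded
    robocopy \\fas2650-svm\share1 \\a250-svm\share1 /MIR /COPYALL /R:1 /W:1 /MT:16 /LOG+:C:\logs\share1.log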

Best Regards
