ONTAP Discussions

Has anyone used the Secondary Space Management feature of DFM 4.0?

adaikkap

Has anyone tried using the Secondary Space Management feature of DFM 4.0?

Any feedback on the same?

Regards

adai


hland

Hi Adai,

I'm testing it in the lab right now. Really cool feature. I have a few questions though, maybe you could help me with these?

1.) I have a volume that is provisioned and managed by Protection Manager, and I can choose to migrate it. However, it then tells me that it can't find any qualified destination aggregates. There is another aggregate on the same filer that is part of the same resource pool, and it has more than enough available space. Unfortunately, I haven't found a way to figure out why PM assumes that a particular aggregate is not qualified. The help lists the following requirements:

  • Reside on a storage system that meets the necessary license requirements to support the protection policy.
  • Reside on a storage system that meets the secondary or tertiary storage provisioning policy requirements.
  • Reside on the same storage system as the source volume if the source volume is attached to a vFiler unit.
  • Have enough space to accommodate the migrated volume.

1 and 2 are fulfilled as both aggregates are on the same filer, 3 doesn't apply as I haven't configured any vFiler, and 4 is fulfilled as well. Are there any other checks that are done but not documented?

2.) Several people have independent DFM installations in the lab environment. I noticed that my installation allows me to migrate volumes to a different aggregate, even though these are backup volumes managed by a completely different DFM installation. So it somehow seems to detect that these volumes were created by a Protection Manager installation, while it does not allow me to migrate volumes that were created outside of DFM. How does Secondary Space Management find out that these are PM-created volumes?

3.) I have a customer that runs a large PM setup (managing >1000 SnapMirror relationships). As the relationships already existed, we imported them into Protection Manager. Is there any way to use Secondary Space Management for these relationships? Or are they locked out of these new features forever, just because they used NetApp systems before Protection Manager was invented?

Thanks

Hendrik

adaikkap

Hi Hendrik,

Here are the answers:


I'm testing it in the lab right now. Really cool feature. I have a few questions though, maybe you could help me with these?

1.) I have a volume that is provisioned and managed by Protection Manager, and I can choose to migrate it. However, it then tells me that it can't find any qualified destination aggregates. There is another aggregate on the same filer that is part of the same resource pool, and it has more than enough available space. Unfortunately, I haven't found a way to figure out why PM assumes that a particular aggregate is not qualified. The help lists the following requirements:

  • Reside on a storage system that meets the necessary license requirements to support the protection policy.
  • Reside on a storage system that meets the secondary or tertiary storage provisioning policy requirements.
  • Reside on the same storage system as the source volume if the source volume is attached to a vFiler unit.
  • Have enough space to accommodate the migrated volume.

1 and 2 are fulfilled as both aggregates are on the same filer, 3 doesn't apply as I haven't configured any vFiler, and 4 is fulfilled as well. Are there any other checks that are done but not documented?

Can you try the same volume using the CLI? That will give you the exact reason why the destination aggregate is not qualified.

dfpm migrate volume -D -d <destination-aggregate-name-or-id> <volume-name-or-id> [volume-name-or-id...]

Where

The -D option displays the dry-run results of the volume migration; the migration is not actually started.

The -d option specifies the aggregate to which the volume(s) should be migrated. If not specified, a suitable aggregate will be selected.

volume-name-or-id is the volume to be migrated.
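For example, a dry run against a hypothetical secondary volume and destination aggregate (the names below are placeholders, not taken from your setup) would look something like this:

dfpm migrate volume -D -d filer2:aggr2 filer1:/backup_vol1

The dry-run output lists each step that would be performed and, for any rejected aggregate, the reason it was disqualified, so you can see exactly which requirement is failing.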

2.) Several people have independent DFM installations in the lab environment. I noticed that my installation allows me to migrate volumes to a different aggregate, even though these are backup volumes managed by a completely different DFM installation. So it somehow seems to detect that these volumes were created by a Protection Manager installation, while it does not allow me to migrate volumes that were created outside of DFM. How does Secondary Space Management find out that these are PM-created volumes?

There is a flag used internally to determine whether the volume is part of a dataset and whether the relationship is managed by Protection Manager.

Below are the ground rules for migration (a quick way to spot-check them is sketched after the list):

  • The volume should not have any client-facing protocols/exports, such as FCP/iSCSI/NFS/CIFS.
  • The volume should not be the parent of a FlexClone.
  • The volume should not have un-managed data protection (SV/QSM/VSM) relationships. In other words, the volume should belong to the secondary node of a dataset.
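As a rough illustration only (7-Mode commands, with placeholder names; adapt to your environment), the first two rules can be spot-checked directly on the secondary controller:

filer> exportfs          (should list no NFS exports for the volume)
filer> cifs shares       (should list no CIFS shares on the volume)
filer> lun show          (should list no LUNs living in the volume)

Whether the volume belongs to the secondary node of a dataset is easiest to confirm on the DFM server, for example with dfpm dataset list or from the dataset details in the NetApp Management Console.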

3.) I have a customer that runs a large PM setup (managing >1000 SnapMirror relationships). As the relationships already existed, we imported them into Protection Manager. Is there any way to use Secondary Space Management for these relationships? Or are they locked out of these new features forever, just because they used NetApp systems before Protection Manager was invented?

As mentioned in the reply above, once a relationship is part of a dataset it is eligible for SSM, even if the relationship was not created by PM.

Hope I clarified all your doubts.

Do try it out and let me know your experience.

Regards

adai

reide

Adai,

I have used this feature for SM, SV, and OSSV datasets. I *really* like this feature and I promote it heavily with customers. The one scenario that seems to resonate with everyone is when a secondary storage array goes off-maintenance or off-lease. Rather than having to manually relocate hundreds or thousands of secondary volumes, and update their relationships, this feature clearly makes things much easier. Just automatically migrate the secondary volumes from the old array to the new array, then power off the old array and ship it out the door. Customers like it, and it demonstrates that a resource pool doesn't paint them into a corner.

I had a hiccup with hostname resolution that led to some SnapMirror issues, but I got that resolved.   I haven't tried planning multiple migrations at once, so I don't know how many migrations you could realistically do in a given amount of time.  Can migrations be done simultaneously or are they all done serially?

I am trying to use this feature as a regular part of my Protection Manager customer demo.

Reid

adaikkap

>Can migrations be done simultaneously or are they all done serially?

Yes, they can be done simultaneously.

For example, you can migrate 10 volumes from the Mirror node of a dataset whose topology is Backup then Mirror, but you cannot migrate volumes from the Mirror node and the Backup node simultaneously.
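For instance, using the CLI syntax from the earlier reply (volume and aggregate names below are illustrative), something like the following would start migrations of several Mirror-node volumes in one go:

dfpm migrate volume -d filer2:aggr2 filer1:/mirror_vol1 filer1:/mirror_vol2 filer1:/mirror_vol3

Just make sure all the volumes you pick for a single run come from the same node of the dataset topology.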

regards

adai

emollonb2s

Hi, I have the same issue with migrating volumes to another aggregate. Everything seems to be okay, but PM is saying "not qualified destination aggregates found".

I tried the CLI command, with this result:

C:\Documents and Settings\Administrador.ARITEX>dfpm migrate volume -D -d FILER-BCK:aggr1 FILER-

Migration dry run results
-------------------------
Volume to migrate:      FILER-BCK:/CIFS2_filer_backup (975)
-------------------------------------------------
Do: Select a destination resource for migrating
Effect: Attention: Failed to select a resource.
Reason:
Storage system : 'FILER-BCK.aritex.local'(76):
     Aggregate : 'FILER-BCK:aggr1'(1073):
         - Aggregate FILER-BCK:aggr1 (1073) does not have double disk failure protection.
Suggestion:

I assume that the problem is the double disk failure protection?

Any suggestions? Thanks very much!

adaikkap

This is because your provisioning policy requires a RAID-DP aggregate, and the existing aggregate does not meet that policy.

Either remove the RAID-DP requirement from the provisioning policy, or add an aggregate that meets the provisioning policy.
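If you want to keep the RAID-DP requirement in the policy, one option (7-Mode commands, assuming a spare disk is available for the extra parity disk) is to convert the aggregate in place, roughly:

FILER-BCK> aggr status aggr1
FILER-BCK> aggr options aggr1 raidtype raid_dp

Once the conversion completes and DFM refreshes its view of the aggregate, the dry run should no longer report the double disk failure protection error.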

Regards

adai
