
vSphere SRM + SRA 2.0 igroups

When failing over between sites with SRM 5.1 and SRA 2.0, the SRA creates new initiator groups at the DR site for mapping the failed-over RDMs and datastores. It does this even if a suitable igroup containing all the appropriate initiators already exists.

For DR this is not much of an issue; however, when re-protecting and failing back, the same thing happens at the live site: a new initiator group is created and the failed-back datastores and RDMs are mapped to it. While I appreciate this may be necessary in a true DR scenario, during controlled failovers it gets messy from a management perspective. It would be good if the SRA either noticed that a suitable igroup already existed and used it, or allowed some control over LUN mapping.

Is this something that may feature in a future release?

Re: vSphere SRM + SRA 2.0 igroups

You can call support or open a case on the Support site and file a feature request (an RFE BURT). Document it with as much detail as possible and we will get the request to engineering. It could be a long time before you hear back on this.

Re: vSphere SRM + SRA 2.0 igroups

Hello,

Did you ever hear back on this?  I am seeing the same issue while testing SRM.

Thanks!

Re: vSphere SRM + SRA 2.0 igroups

No I didn't, and it looks like it hasn't changed in the later 2.1 release of the SRA either (I've yet to test with that version):

https://communities.netapp.com/message/134178#134178

Re: vSphere SRM + SRA 2.0 igroups

Thanks for the info. I continued testing and found some stale initiators in my main igroup. After cleaning it up so the igroup exactly matched the hosts in my cluster, it worked: a full recovery of a test VM on a test LUN/volume to DR and back to Core completed using the existing igroup I wanted, instead of creating a new one.
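For anyone else hitting this, a rough sketch of the check I mean (igroup and host names here are just examples, not from my environment): compare the initiators in the igroup on the 7-Mode controller against the actual IQNs of the ESXi hosts in the cluster, and remove anything stale.

    On the 7-Mode controller, list the igroup and its member initiators:
        igroup show esx_cluster_igroup

    On each ESXi host, confirm the IQN of the software iSCSI adapter:
        esxcli iscsi adapter list

    Remove any initiator that no longer corresponds to a cluster host, e.g.:
        igroup remove esx_cluster_igroup iqn.1998-01.com.vmware:oldhost-1234abcd

Once the igroup membership matched the cluster exactly, the SRA reused it on failback rather than creating a new one.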

We are using the latest versions of VMware & SRM, and Data ONTAP 8.2P4 7-Mode.

Thanks again!

Re: vSphere SRM + SRA 2.0 igroups

Hi. I'm seeing the same behaviour of SRM creating its own igroup on failback. I'm not sure I understand exactly why this happens. What did you do to ensure the original igroup stays intact on failback?