VMware Solutions Discussions

VMware SRM, SRA, & vFilers - configuration howto ?

JIM_SURLOW

I'm trying to set up VMware SRM with vFilers.  Can someone help me with the SRA wizard?  I know that the SRA needs to have knowledge of the recovery site's vFiler0 for the flexclone command, so I suspect that I use the vFiler0 for the management IP.  However, I'm stuck with two cases:

1. Use vFiler0 IPs for the management IP - the problem here is that snapmirror status on vFiler0 does not show the snapmirror relationship between the vFilers that I'm using (Data ONTAP 7.3.6).

2. Use the vFilers' IPs for the management IP - the problem here is that upon SRM testing, the SRA can't pass the vol clone command to the vFiler, as that is a vFiler0 command.

It seems that I should set the vFiler IP on the source side and the vFiler0 IP on the destination side.  But since it doesn't seem to discover the snapmirror relationship, I'm stuck.

vSphere 5, SRA 2.0, Data ONTAP 7.3.6

(at present, I'm just snapmirroring between each partner in the cluster for testing purposes).

Thanks,

1 ACCEPTED SOLUTION

JIM_SURLOW

The TR was most helpful.  Configured SRA to point to the vFilers.  ipspace not an issue.

I ended up having to tear down the SRM config and re-configure the protection group. 

Then I modified a file and restarted SRM - see http://communities.vmware.com/message/1890921?tstart=3075 for details, as others have hit this.

At this point, things are working.



JIM_SURLOW

discoverArrays.pl uses the subroutine getPeerNames to issue the snapmirror-get-status API call.  The fact that snapmirror status at vFiler0 doesn't show the snapmirror sessions created between the named vFilers is, I think, the problem.

I haven't yet checked which Perl scripts are called for the test failover - but there is also the issue that vol clone can't be invoked when the SRA talks directly to the named vFiler.

JIM_SURLOW

I should add that I'm using an ipspace for my vFilers.

glowe

Please review page 26 of the SRA IAG.  It explains the vFiler requirements, including that the SnapMirror relationship is defined at the vFiler context and that each vFiler is added as a separate controller.  SRA does not leverage vfiler0 for any management of vFilers.  In addition, and perhaps the crux of your issue, you need to be sure this option is enabled to allow FlexCloning from the vFiler context:

options vfiler.vol_clone_zapi_allow on
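For anyone following along, here is a quick console sketch of checking the option's current value before enabling it; the prompt and hostname are illustrative, and the exact output formatting may differ by Data ONTAP release:

```
fas-dst> options vfiler.vol_clone_zapi_allow
vfiler.vol_clone_zapi_allow  off
fas-dst> options vfiler.vol_clone_zapi_allow on
```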

JIM_SURLOW

Here's the latest.

I thought the filer that I was working on had the complete bundle.  It did not, so I didn't have the FlexClone license.  I've installed a temp key and reviewed TR-4064 (http://www.netapp.com/us/system/pdf-reader.aspx?m=tr-4064.pdf&cc=us) which has been helpful.

I've pointed the SRA to the vFilers' IPs.  I still have an issue when I attempt to enable the array pair: it complains that no SAN or NAS device was found on the destination side.

By searching for the VMware log error in discoverDevices.pl, it seems that nfslist() should be called, which should call snapmirrorstatus_export(), which should execute a snapmirror-get-status API call.  But when I look at /etc/log/auditlog, I only see ifconfigs running from the API.

If I issue a snapmirror status from within the vFiler, I see the relationship.  Given the IPs being used, the SRA would only know about the vFiler.  So it should see the relationship.

==========

# find out which discovered nfs exports are replicated too
            $result = nfslist();
            if ( !defined($result) ) {
....
sub nfslist() {
    # get list of replicated volumes
    # and compare with the nfs exports
    # to only return the replicated exports
    my $result = snapmirrorstatus_export();
    if (!defined($result)) {
...
sub snapmirrorstatus_export() {
    my $replrelation = 0;    # value 1 indicates snapmirror status with matching peer exists
    my $isQsm;
    nfsexportlist();
    # ZAPI - "snapmirror-destination-info" - gives snapmirror destination information,
    # CLI  - "snapmirror status vol_name"
    my $zapi = "snapmirror-get-status";
    my $out  = $server->invoke( $zapi, "location", "" );
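
For readers tracing the same path, here is a small Python sketch (the real SRA logic is Perl; this is only an illustration) of the matching that nfslist()/snapmirrorstatus_export() appear to perform: pull destination-location values out of a snapmirror-get-status reply and intersect them with the volumes backing the discovered NFS exports.  The XML sample and all names are fabricated, and the element names follow the 7-Mode snapmirror-get-status output as I understand it:

```python
# Illustrative only: match SnapMirror destinations against NFS export volumes.
import xml.etree.ElementTree as ET

# Fabricated snapmirror-get-status reply (element names are assumptions
# based on the 7-Mode ONTAPI, not captured output).
SAMPLE = """\
<results status="passed">
  <snapmirror-status>
    <snapmirror-status-info>
      <source-location>vfiler_src:vol_nfs1</source-location>
      <destination-location>vfiler_dst:vol_nfs1_mir</destination-location>
      <state>snapmirrored</state>
    </snapmirror-status-info>
  </snapmirror-status>
</results>
"""

def replicated_destinations(xml_text):
    """Return destination volume names found in a snapmirror-get-status reply."""
    root = ET.fromstring(xml_text)
    vols = []
    for info in root.iter("snapmirror-status-info"):
        loc = info.findtext("destination-location", "")
        if loc:
            vols.append(loc.split(":", 1)[-1])  # drop the "system:" prefix
    return vols

# Keep only NFS exports whose backing volume is a SnapMirror destination:
nfs_export_vols = {"vol_nfs1_mir", "vol_unreplicated"}
replicated_exports = sorted(set(replicated_destinations(SAMPLE)) & nfs_export_vols)
print(replicated_exports)  # -> ['vol_nfs1_mir']
```

If the API call itself never appears in the audit log, nothing ever reaches this matching step - which is consistent with only seeing ifconfigs.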

JIM_SURLOW

Adding -

The user configured in the SRA has full filer admin privileges (api-*, among others) - I haven't started implementing RBAC yet.

Apparently, not all API calls are logged in /etc/log/auditlog, as I see some /etc/exports gathering that doesn't toss an audit record.

JIM_SURLOW

The TR is the better source of record to use for documentation.

After installing the FlexClone license, tearing down my config, rebuilding it, and correcting the earlier misconfiguration of "use_ip_for_snapmirror_relation" in the ontap_config.txt file, I believe I'm past the NetApp config issues.  Recovery still does not function - it hangs on step 5, "Prepare Protected Site VMs for Migration".
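For reference, the setting in question lives in the SRA's ontap_config.txt and looks something like the fragment below.  Treat this as an illustrative sketch: the key name comes from the post above, but the exact file location and key/value syntax may vary by SRA version.

```
# ontap_config.txt (in the SRA install directory)
# Set on when SnapMirror relationships are defined by IP address
# rather than hostname:
use_ip_for_snapmirror_relation = on
```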

schrie

I saw the update about the NetApp side appearing to function, but wanted to summarize and provide some detail to anyone else who might be hitting this problem in the future...

SRA 2.0.1 32-bit is currently required for SRM 5.0, and SRM 5.1 requires the 64-bit version, per the certifications provided by engineering in the IMT:

IMT: http://support.netapp.com/matrix/

Because of BURT http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=642115 (it will be posted publicly shortly), you will need to list all replicated volumes in the "Volume Include List" for them to be visible as replicated devices.

Here's the Install and Admin Guide for SRA 2.0.1: https://support.netapp.com/documentation/docweb/index.html?productID=61507

Per the admin guide, here are the requirements for vFilers:

You must meet certain requirements so that Storage Replication Adapter can support vFiler units, such as adding each vFiler unit to the adapter as a separate array and ensuring that your storage system is running Data ONTAP 7.3.2 or later.

You must meet the following requirements for vFiler unit support:

• The storage system must be running Data ONTAP 7.3.2 or later.

• Each vFiler unit must be added to the adapter as a separate array.

• Both the source and destination vFiler units must be online.

• The httpd.admin.enable option on the vFiler unit must be enabled.

• SnapMirror relationships must be defined in the destination vFiler context and not in the destination physical vfiler0 context.

• Only iSCSI- or NFS-connected datastores are supported for vFiler units.

• The iSCSI service must be started on the vFiler unit.  The adapter does not automatically start the iSCSI service.

• In a SAN environment, the recovery site ESX or ESXi hosts must have established iSCSI sessions to the recovery vFiler units before you perform the test recovery or recovery operation.

• A single virtual machine must not have data on multiple vFiler units.

• Before performing a test recovery operation on a vFiler unit, the vfiler.vol_clone_zapi_allow option must be turned on for vfiler0.

Note: SSL is not supported for vFiler units.
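The vFiler-related items in the list above can be sketched as console commands; hostnames and vFiler names below are illustrative, and this is 7-Mode syntax, so verify against your release:

```
# In the destination physical controller (vfiler0) context:
fas-dst> options vfiler.vol_clone_zapi_allow on

# In the destination vFiler context:
fas-dst> vfiler context vfiler_dst
vfiler_dst@fas-dst> options httpd.admin.enable on
vfiler_dst@fas-dst> iscsi start
vfiler_dst@fas-dst> snapmirror status   # relationships must show up here, not in vfiler0
```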

To work around the DNS and snapmirror configuration issues, we've often used the hosts file on both SRM/SRA hosts, as well as on all four NetApp controllers.  I've found that using the same /etc/hosts file on all hosts/controllers (i.e., copy/paste it and modify the 127.0.0.1 entry for the specific host you're on) is the easiest way to make sure everything can properly resolve.
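As an example of that approach, the same hosts-file content would be applied everywhere, with only the loopback line adjusted per host.  All names and addresses below are placeholders:

```
# /etc/hosts - identical on both SRM hosts and all four controllers
127.0.0.1   localhost
10.0.1.10   fas-prod1
10.0.1.11   fas-prod2
10.0.2.10   fas-dr1
10.0.2.11   fas-dr2
10.0.1.20   srm-prod
10.0.2.20   srm-dr
```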

Also, don't forget we have Cooperative Support available for cross-vendor issues that warrant expertise from both NetApp and VMware (and Cisco, when needed, for a FlexPod issue): http://www.netapp.com/us/company/news/press-releases/news-rel-20100126-cisco-vmware.aspx - don't hesitate to contact us at NetApp Global Support for VMware SRM and NetApp SRA issues if you need a hand!
