Active IQ Unified Manager Discussions

How to set up SnapMirror relationships in a vFiler so that they are manageable by OnCommand Unified Manager?

alain_boyer

Hello,

We are trying to deploy OnCommand Unified Manager (DFM/Operations Manager) on existing filers and vFilers.

We cannot find a way to make it work.

Our environment

FilerP is the primary filer

vFilerS is the secondary vFiler hosted by Filer2

FilerT is the tertiary filer

There are existing cascaded Mirrors of volume /vol/testvol : FilerP ==> vFilerS ==> FilerT

We want to use OnCommand Unified Manager (DFM/Operations Manager) to manage this kind of policy.

Current SnapMirror relationships

snapmirror.allow is set on all source (v)Filers

snapmirror.conf on Filer2 (host of the secondary vFiler)

nothing

snapmirror.conf on vFilerS (secondary)

FilerP:testvol vFilerS:testvol - - - - -

snapmirror.conf on FilerT (tertiary)

vFilerS:testvol FilerT:testvol - - - - -

The snapmirror status output on the filers and the vFiler conforms to the snapmirror.conf settings

Mirroring is working well

However:

  • SnapMirror relationships are not visible in System Manager on Filer2 (the host of the secondary vFiler); this may be the expected behavior
  • OnCommand Unified Manager does not detect data paths or relationships to/from vFilerS

Therefore, it looks as though it is not possible to manage this configuration using OnCommand Unified Manager.

In the Operations Manager Administration Guide, page 251, § “Prerequisites for monitoring vFiler units”, it is written:

“Monitoring SnapMirror relationships

For hosting storage systems that are mirroring data to a secondary system, you must ensure that the secondary system is added to the vFiler group. The DataFabric Manager server collects details about vFiler unit SnapMirror relationships from the hosting storage system. The DataFabric Manager server displays the relationships if the destination vFiler unit is assigned to the vFiler group, even though the source vFiler unit is not assigned to the same group.”

Therefore, we performed several tests adding the physical filers to the appropriate group in OnCommand Unified Manager ==> no change; the relationships with vFilerS remain hidden.

In the MultiStore Management Guide, page 51, § “Guidelines for using SnapMirror technology”, it is written:

“When specifying a path name in the /etc/snapmirror.conf file, ensure that you use the storage system name, and not the vFiler unit name."

In our case this means replacing vFilerS with Filer2 (its hosting filer).

Therefore we tried changing snapmirror.conf on vFilerS after breaking the relationship.

Line changed

  • from : FilerP:testvol vFilerS:testvol - - - - -
  • to : FilerP:testvol Filer2:testvol - - - - -

snapmirror status still shows the initial relationship:

Source          Destination     State           Lag       Status

FilerP:testvol  vFilerS:testvol Broken-off      xxx:xx:xx Idle

A resync or update on the relationship results in the error:

Source not set on command line or in /etc/snapmirror.conf file.
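For reference, our reading of the MultiStore guide recommendation is that the resync would then have to be run from the hosting filer (Filer2, vfiler0 context) with the source given explicitly on the command line, along these lines (a sketch using our example names, not a verified procedure):

snapmirror resync -S FilerP:testvol Filer2:testvol
snapmirror update -S FilerP:testvol Filer2:testvol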

So, at this point we do not understand how to set up SnapMirror relationships in a mixed filer / vFiler environment such that OnCommand Unified Manager (DFM/Operations Manager) is able to manage the configuration.


We would appreciate any help on the following questions:

  • How should a SnapMirror relationship be configured in a vFiler: using the vFiler name or the hosting filer name?
  • How should SnapMirror relationships be configured in a vFiler so that they are manageable from OnCommand Unified Manager (DFM/Operations Manager)?

Thanks a lot for your help.

4 REPLIES

adaikkap

Hi Alain,

     I suspect that in your case the SnapMirror relationships were created using the vFiler IP address and not the hosting filer / vfiler0 IP address. OCUM only discovers and manages vFiler relationships that are set up using the hosting/vfiler0 IP, not the vFiler IP. If you move your secondary from the vFiler IP to the physical filer IP, OCUM should discover the relationships and you should be able to import them into a dataset.

Please find below the link to the KB article that describes this.

https://kb.netapp.com/support/index?page=content&id=3013156

Also below are some more details.

PM supports and creates VSM/QSM/SV/OSSV relationships in all of these topologies:

Filer -> Filer

Filer -> vFiler

vFiler -> Filer

vFiler -> vFiler

OSSV->vFiler

In all of these cases, PM creates the relationship and does the updates using filer interfaces (not the vFiler interfaces). Also, PM discovers existing relationships of the above topologies only if they were created using filer interfaces and not vFiler interfaces.
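For illustration, a sketch of how the original poster's cascade could be addressed so that PM/OCUM can discover it (entries written against the hosting filers; names are taken from the first post, and the schedule fields are left as placeholders):

/etc/snapmirror.conf on Filer2 (vfiler0 context):
FilerP:testvol Filer2:testvol - - - - -

/etc/snapmirror.conf on FilerT:
Filer2:testvol FilerT:testvol - - - - -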

  • PM creates and manages SnapMirror/SnapVault relationships that are from physical Filer to physical Filer only
  • It will not discover a relationship that originates or ends on a vFiler. However, PM can handle primary or secondary volumes that belong to a vFiler.
  • In a storage service, you can use a pre-existing secondary vFiler or you can ask OnCommand to create a secondary vFiler from a template (new in OC UM 5.0). But, OC UM will create a relationship from physical Filer to physical Filer even for the volumes that belong to a vFiler.
  • PM does not officially support vFiler DR, but there is a plug-in available to support this that was created outside of the PM team; see here: https://communities.netapp.com/groups/nmc-vfiler-plugin?view=documents

PM does not support the combination of vFiler DR + SnapManager products + Protection Manager, because SnapDrive for Windows and UNIX cannot handle two identically named resources returned by Protection Manager.

Regards

adai

alain_boyer

Hi Adaikkappan,

Thanks a lot for your help.

Sorry for the late update; we performed several tests and had to change settings on existing production (v)Filers.

Following your reply, I found a few other posts that greatly clarified our understanding of OC UM.

However, we still have a few questions about the network settings for OC UM.

We are providing a multi-tenancy environment to several Tenants.

From the storage network point of view, the initial design defines a Tenant with the following:

  • A dedicated IPspace
  • Several dedicated vFilers on primary storage (FAS3240)
  • One dedicated vFiler on secondary storage (FAS2040)
  • A dedicated, restricted management network (primary interface on the vFiler)
  • A dedicated backup network restricted to backup traffic (OSSV, SnapVault, SnapMirror)
  • At least one service network that provides storage services to the Tenant (CIFS, NFS)
  • A dedicated VLAN interface in each of the above networks
  • An IP address for each vFiler in each of the above networks

For instance:

Tenant IPspace = TEST

| Hosting filer | FILER1 | FILER1 | FILER3 | FILER1 | FILER3 |
| vFiler | VFILER001 | VFILER002 | VFILER005 | | |
| Storage | Primary | Primary | Secondary | Primary | Secondary |
| Backup interface | ifgr10-101 | ifgr10-101 | ifgr20-101 | | |
| IPspace Backup | TEST | TEST | TEST | | |
| IP Backup | x.x.101.11 | x.x.101.12 | x.x.101.15 | | |
| Management interface | ifgr10-100 | ifgr10-100 | ifgr20-100 | | |
| IPspace Management | TEST | TEST | TEST | | |
| IP Management | x.x.100.11 | x.x.100.12 | x.x.100.15 | | |
| Service interface | ifgr10-102 | ifgr10-102 | ifgr20-102 | | |
| IPspace Service | TEST | TEST | TEST | | |
| IP Service | x.x.102.11 | x.x.102.12 | x.x.102.15 | | |

We want to keep a strong separation between Tenant networks.

This means:

  • As far as possible, keep Tenant-dedicated networks/VLANs/interfaces in the Tenant-dedicated IPspace
  • At a minimum, keep all Tenant traffic on Tenant-dedicated networks/VLANs/interfaces

Following the requirements for external relationship discovery by OC UM described in your reply, we moved the VLAN interfaces from the Tenant-dedicated IPspace to the default IPspace and assigned IPs to the hosting filers (we assumed that it is not necessary to assign one IP to each Tenant-dedicated vFiler hosted by the filer; maybe we are wrong ...).
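For reference, this is roughly what we did on FILER1 for the backup VLAN (a sketch in 7-Mode syntax; the netmask is a placeholder, we assume the default IPspace is named default-ipspace, and the interface first has to be released from the Tenant vFilers):

ipspace assign default-ipspace ifgr10-101
ifconfig ifgr10-101 x.x.101.51 netmask 255.255.255.0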

Then the above example becomes

Tenant IPspace = TEST

| Hosting filer | FILER1 | FILER1 | FILER3 | FILER1 | FILER3 |
| vFiler | VFILER001 | VFILER002 | VFILER005 | | |
| Storage | Primary | Primary | Secondary | Primary | Secondary |
| Backup interface | | | | ifgr10-101 | ifgr20-101 |
| IPspace Backup | | | | Default | Default |
| IP Backup | | | | x.x.101.51 | x.x.101.55 |
| Management interface | ifgr10-100 | ifgr10-100 | ifgr20-100 | | |
| IPspace Management | TEST | TEST | TEST | | |
| IP Management | x.x.100.11 | x.x.100.12 | x.x.100.15 | | |
| Service interface | ifgr10-102 | ifgr10-102 | ifgr20-102 | | |
| IPspace Service | TEST | TEST | TEST | | |
| IP Service | x.x.102.11 | x.x.102.12 | x.x.102.15 | | |

As expected, OC UM discovered the existing external relationship: great!

Now, as described in other posts, it is necessary to force OC UM to use the Tenant-dedicated backup network for SnapMirror data traffic between Tenant vFilers, using hostPreferredAddr1 and hostPreferredAddr2.

Therefore, for the above example, we understand the settings could be the following:

| Filer | VFILER001 | VFILER002 | VFILER005 | FILER1 | FILER3 |
| hostPreferredAddr1 | x.x.101.51 | x.x.101.51 | x.x.101.55 | none | none |
| hostPreferredAddr2 | none | none | none | none | none |

Is this correct?
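For reference, we assume these settings would be applied with dfm host set, along these lines (a sketch using the names above; please correct us if the syntax or the approach is wrong):

dfm host set VFILER001 hostPreferredAddr1=x.x.101.51
dfm host set VFILER002 hostPreferredAddr1=x.x.101.51
dfm host set VFILER005 hostPreferredAddr1=x.x.101.55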

To improve our understanding:

  • Is it better/recommended/mandatory to assign a single IP address per Tenant to the hosting filer (as we did in the above example), or to assign an IP address per Tenant vFiler (2 IPs on FILER1 in the above example)?
  • hostPreferredAddrx is used for SnapMirror (VSM, QSM) relationships. Does it also work for SnapVault relationships?

Thanks a lot for your help.

andrew_braker

Hi Adai,

Hope you have been well

I'm attempting to move some volumes (containing CIFS shares) between two aggregates on the same filer (using SnapMirror). I've followed something along the lines of this process: https://communities.netapp.com/thread/7230.
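For context, the move was essentially a local volume SnapMirror copy, roughly along these lines (the temporary volume name, aggregate name and size are hypothetical; the linked thread describes the actual steps):

vol create users_tmp aggr_new 500g
vol restrict users_tmp
snapmirror initialize -S pultnetapp01b:Pulteney_Users_1 pultnetapp01b:users_tmp
snapmirror update -S pultnetapp01b:Pulteney_Users_1 pultnetapp01b:users_tmp
snapmirror quiesce users_tmp
snapmirror break users_tmp
vol rename Pulteney_Users_1 Pulteney_Users_1_old
vol rename users_tmp Pulteney_Users_1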

It's all worked really well. Everything on the back end is basically exactly as it was before. This volume was the source for a replication to another datacentre (controlled by DFM). On the back end, I've reconnected the new source volume (now on the aggregate I want it on) to the original SnapMirror destination in the other datacentre, and that is happy. The old source volume (and DR relationship) has been removed from DFM; the old source volume has actually been destroyed. The plan was to let DFM see this new external relationship, which I was then going to import into the original dataset (so it can look after the protection from then on). Well, it got pretty close to working as expected, except that DFM sees the [new] source volume as being owned by the pFiler and not the vFiler. On the back end, both the new source and the original destination volumes are owned by vFilers.

External Relationship discovered:

Source: pultnetapp01b:/Pulteney_Users_1

Destination: pultnetapp03v-dr:/Pulteney_Users_mirror_pultnetapp01b_Pulteney_Users_1

But when I check on the back-end filer, Pulteney_Users_1 definitely belongs to the vFiler "pultnetapp03v".

I can't import this relationship because the dataset needs the volume to belong to pultnetapp03v.

Any ideas?

Thanks

andrew_braker

Well, it turns out it wasn't caused by the production migration I mentioned in my post.

The issue was actually created when I was playing around with a volume to test the process. On the actual filer I was renaming volumes to test the migration process (several times, and using volume names that used to exist previously - don't ask!), but I didn't wait long enough for the DFM host refresh periods. So the test volume in the DFM database ended up being owned by two vFilers (vfiler0 plus another one we created). Due to this conflict, any subsequent modification to the volume paths assigned to vFilers in the DFM database would fail.

Example error from dfmserver.log

Aug 20 14:26:41 [dfmserver:ERROR]: [3212:0x3dcc]: Error in  UPDATE vfStoragePaths  SET vfId = 79, isetc = 0 , pathTimestamp = NOW()  WHERE vfId = 8694  AND (spName = '/vol/TestingxQuota_Old'  OR spName = '/vol/TestingxQuota' ) : (-193) [Sybase][ODBC Driver][SQL Anywhere]Primary key for table 'vfStoragePaths' is not unique : Primary key value ('79,NULL')

I found this post: https://forums.netapp.com/thread/44224


So I went and checked what could be found in DFM.vfStoragePaths for these two volumes: /vol/TestingxQuota_Old and /vol/TestingxQuota

Looks like I found my duplicate: /vol/TestingxQuota_Old is associated with both vfId 79 and vfId 8694.

>dfm query run "SELECT * FROM DFM.vfStoragePaths WHERE spName = '/vol/TestingxQuota_Old'"

"vfId","spName","isetc","objId","pathTimestamp"
"8694","/vol/TestingxQuota_Old","0","152056","2013-08-06 13:21:07.000000"
"79","/vol/TestingxQuota_Old","0","152056","2013-08-06 13:21:07.000000"

These volumes no longer exist, so I was going to remove them using queries similar to these:

dfm query run -f "DELETE FROM DFM.vfStoragePaths WHERE spName = '/vol/TestingxQuota_Old'"
dfm query run -f "DELETE FROM DFM.vfStoragePaths WHERE spName = '/vol/TestingxQuota'"

But I wanted to confirm with NetApp support first, and it turns out we don't need to use database queries. We ended up doing the following (because I was happy for the volumes with the conflict to be removed from DFM):

Find the deleted volume's objId (using -a to show deleted volumes if needed!):

dfm volume list -a <hostID>

Shut down the DFM services and start just the SQL service, then delete the conflicting volume object:

dfm volume delete -f <ObjId>

Start the DFM services again.
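Put together, the sequence looked roughly like this (a sketch; I'm quoting the dfm service syntax from memory, and <hostID>/<ObjId> are placeholders):

dfm volume list -a <hostID>
dfm service stop
dfm service start sql
dfm volume delete -f <ObjId>
dfm service start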

After a few minutes, all the volumes owned by vFilers on the back end were correctly reported in DFM too, and I could then import the external relationship into the right dataset. The good news is that the process I followed for the production cutover work mentioned above includes forced DFM host refreshes, so it didn't end up with conflicting volume ownerships.
