We are trying to deploy OnCommand Unified Manager (DFM/Operations Manager) on existing filers and vFilers.
We have not found a way to make it work.
FilerP is the primary filer
vFilerS is the secondary vFiler hosted by Filer2
FilerT is the tertiary filer
There are existing cascaded Mirrors of volume /vol/testvol : FilerP ==> vFilerS ==> FilerT
We want to use OnCommand Unified Manager (DFM/Operations Manager) to manage this kind of policy.
Current SnapMirror relationships
SnapMirror.allow is set on all source (v)Filers
SnapMirror.conf on Filer2 (host of secondary vFiler)
SnapMirror.conf on vFilerS (secondary)
FilerP:testvol vFilerS:testvol - - - - -
SnapMirror.conf on FilerT (tertiary)
vFilerS:testvol FilerT:testvol - - - - -
SnapMirror status on the filers and the vFiler conforms to the snapmirror.conf settings
Mirroring is working well
However, it looks like it is not possible to manage this configuration using OnCommand Unified Manager.
In the Operations Manager Administration Guide, page 251, § “Prerequisites for monitoring vFiler units”, it is written:
“Monitoring SnapMirror relationships
For hosting storage systems that are mirroring data to a secondary system, you must ensure that the secondary system is added to the vFiler group. The DataFabric Manager server collects details about vFiler unit SnapMirror relationships from the hosting storage system. The DataFabric Manager server displays the relationships if the destination vFiler unit is assigned to the vFiler group, even though the source vFiler unit is not assigned to the same group.”
Therefore, we performed several tests adding the physical filers to the appropriate group in OnCommand Unified Manager ==> no change; the relationships with vFilerS remain hidden.
In the MultiStore Management Guide, page 51, § “Guidelines for using SnapMirror technology”, it is written:
“When specifying a path name in the /etc/snapmirror.conf file, ensure that you use the storage system name, and not the vFiler unit name."
In our case, this means replacing vFilerS with Filer2 (its hosting filer).
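Reading that guideline literally, the secondary entry above would presumably be rewritten as follows (a sketch only; it assumes the entry lives in the conf file of the hosting filer Filer2 and that the volume paths stay the same):

```
# /etc/snapmirror.conf (hypothetical corrected entry):
# the destination is addressed by the hosting storage system name (Filer2),
# not the vFiler unit name (vFilerS)
FilerP:testvol Filer2:testvol - - - - -
```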
Therefore we tried changing the snapmirror.conf of vFilerS after breaking the relationship.
snapmirror status still shows the initial relationship:
Source Destination State Lag Status
FilerP:testvol vFilerS:testvol Broken-off xxx:xx:xx Idle
A resync or update on the relationship results in the error:
Source not set on command line or in /etc/snapmirror.conf file.
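In 7-Mode, the snapmirror command accepts a -S flag to supply the source explicitly when it cannot be resolved from /etc/snapmirror.conf. One possible workaround for the error above might look like the following (a sketch using the volume names from this example; we have not verified it in this setup):

```
# Run on the destination; -S names the source explicitly
# (7-Mode syntax, names taken from the example above)
snapmirror resync -S FilerP:testvol vFilerS:testvol
```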
Then … we do not understand how to set up SnapMirror relationships in a mixed filer/vFiler environment so that OnCommand Unified Manager (DFM/Operations Manager) is able to manage such a configuration.
We would appreciate any help on the following questions:
Thanks a lot for your help.
I suspect that in your case the SnapMirror relationships are created using the vFiler IP address and not the hosting filer/vfiler0 IP address. OCUM only discovers and manages vFiler relationships that are set up using the hosting/vfiler0 IP, not the vFiler IP. If you move your secondary from the vFiler IP to the physical filer IP, OCUM should discover the relationships and you should be able to import them into a dataset.
Please find the link to the KB article that describes this.
Also below are some more details.
PM supports and creates VSM/QSM/SV/OSSV relationships in all of these topologies:
Filer -> Filer
Filer -> vFiler
vFiler -> Filer
vFiler -> vFiler
In all these cases PM creates the relationship and performs the updates using the filer interfaces (not the vFiler interfaces). PM also discovers existing relationships of the above topologies only if they were created using filer interfaces, not vFiler interfaces.
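Applying that rule, even a vFiler -> vFiler relationship would be expressed in /etc/snapmirror.conf using the hosting filers on both sides. A sketch, where HostA and HostB are hypothetical hosting filers for the source and destination vFiler units:

```
# /etc/snapmirror.conf on the destination's hosting filer (sketch):
# HostA hosts the source vFiler, HostB hosts the destination vFiler;
# both ends are addressed by hosting-filer names so PM can discover
# and manage the relationship
HostA:vol_src HostB:vol_dst - - - - -
```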
PM does not support vFiler DR + SnapManager products + Protection Manager, because SnapDrive for Windows and UNIX cannot handle two identically named resources returned by Protection Manager.
Thanks a lot for your help.
Sorry for the late update; we performed several tests and had to change settings on existing production (v)filers.
Following your reply, I found a few other posts that greatly clarified our understanding of OC UM.
However, we still have a few questions about network settings for OC UM.
We are providing a multi-tenancy environment to several Tenants.
From the storage network point of view, the initial design defines a Tenant with the following:
Tenant IPspace = TEST
We want to keep a strong separation between Tenant networks.
Following the requirements for external relationship discovery by OC UM described in your reply, we moved the VLAN interfaces from the Tenant-dedicated IPspace to the default IPspace and assigned an IP to the hosting filer (we assumed it is not necessary to assign one IP to each Tenant-dedicated vFiler hosted by the filer; maybe we are wrong …).
Then the above example becomes
Tenant IPspace = TEST
As expected, OC UM discovered the existing external relationship: great!
Now, as described in other posts, it is necessary to force OC UM to use the Tenant-dedicated backup network for SnapMirror data traffic between Tenant vFilers, using hostPreferredAddr1 and hostPreferredAddr2.
Therefore, for the above example, we understand the settings could be the following.
Is this correct?
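For reference, the DFM CLI exposes these as per-host options. A sketch of how the preferred addresses might be set (the hostname and IP addresses below are placeholders, not values from our environment):

```
# Tell DFM which addresses to prefer for data traffic to this host
# (hostname and IPs are placeholders)
dfm host set Filer2 hostPreferredAddr1=10.0.0.1
dfm host set Filer2 hostPreferredAddr2=10.0.0.2

# Review the host's current settings
dfm host get Filer2
```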
To improve our understanding:
Thanks a lot for your help.
Hope you have been well
I'm attempting to move some volumes (containing CIFS shares) between two aggregates on the same filer (using SnapMirror). I've followed something along the lines of this process (https://communities.netapp.com/thread/7230).
It's all worked really well. Everything on the backend is basically exactly as it was before. This volume was the source of a replication to another datacentre (controlled by DFM). On the backend, I've reconnected the new source volume (now on the aggregate I want it on) to the original SnapMirror destination in the other datacentre, and that is happy. The old source volume (and DR relationship) has been removed from DFM; the old source volume has actually been destroyed.

The plan is to let DFM see this new external relationship and import it into the original dataset (so it can look after the protection from then on). Well, it got pretty close to working as expected, except DFM sees the new source volume as being owned by the pFiler and not the vFiler. On the backend, both the new source and original destination volumes are owned by vFilers.
External Relationship discovered:
But when I check on the backend filer, Pulteney_Users_1 definitely belongs to the vFiler "pultnetapp03v".
I can't import this relationship because the dataset needs the volume to belong to pultnetapp03v.
Well, it turns out it wasn't from the production migration I mentioned in my post.
The issue was actually created when I was playing around with a volume to test the process. On the actual filer I was renaming volumes to test the migration process (several times, and using volume names which had existed previously; don't ask!), but didn't wait long enough for the DFM host refresh periods. So the testing volume in the DFM database ended up being owned by two vFilers (vfiler0 plus another one we created). Due to this conflict, any subsequent modifications to the volume paths assigned to vFilers in the DFM database would fail.
Example error from dfmserver.log
Aug 20 14:26:41 [dfmserver:ERROR]: [3212:0x3dcc]: Error in UPDATE vfStoragePaths SET vfId = 79, isetc = 0 , pathTimestamp = NOW() WHERE vfId = 8694 AND (spName = '/vol/TestingxQuota_Old' OR spName = '/vol/TestingxQuota' ) : (-193) [Sybase][ODBC Driver][SQL Anywhere]Primary key for table 'vfStoragePaths' is not unique : Primary key value ('79,NULL')
I found this post: https://forums.netapp.com/thread/44224
So I checked what could be found in DFM.vfStoragePaths for these two volumes: /vol/TestingxQuota_Old and /vol/TestingxQuota.
Looks like I found my duplicate: /vol/TestingxQuota_Old is associated with both host 79 and host 8694.
>dfm query run "SELECT * FROM DFM.vfStoragePaths WHERE spName = '/vol/TestingxQuota_Old'"
These volumes no longer exist so I was going to remove them using a query similar to this:
dfm query run -f "DELETE FROM DFM.vfStoragePaths WHERE spName = '/vol/TestingxQuota_Old'"
dfm query run -f "DELETE FROM DFM.vfStoragePaths WHERE spName = '/vol/TestingxQuota'"
But I wanted to confirm with NetApp support first, and it turns out we didn't need to use database queries. We ended up doing the following (because I was happy for the volumes having the conflict to be removed from DFM):
Find the deleted volume's objID (using -a to show deleted volumes if needed!):
dfm volume list -a <hostID>
Shut down the DFM services, start just the SQL service, then delete the volume:
dfm volume delete -f <ObjId>
Start the DFM services.
After a few minutes, all the volumes owned by vFilers on the backend were correctly reported in DFM too, and I could then import the external relationship into the right dataset. The good news is that the process I followed for the production cutover work mentioned above includes DFM host force-refresh periods, so it didn't end up with conflicting volume ownerships.