Both the OSSV host and the secondary filer need to be added to OCUM, and NDMP as well as login credentials need to be set up for both. Once that's done, run dfm host discover <hostname> for each. Also, is the secondary created on a vfiler? If so, is the relationship via the vfiler interface or the physical filer interface?
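For reference, the CLI sequence would look roughly like the following; the hostname and credentials are placeholders, and the option names are from memory, so treat this as a sketch rather than an exact transcript:

dfm host add ossv-host.example.com
dfm host set ossv-host.example.com hostLogin=root hostPassword=secret
dfm host set ossv-host.example.com hostNdmpLogin=root hostNdmpPassword=secret
dfm host discover ossv-host.example.com

Repeat the same for the secondary filer. Regards adai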
Hi Will, All you will have to do is this:
Create an empty dataset.
Apply a remote backup policy to the dataset.
Now go to the External Relationship page and select the OSSV LREP relationship.
Click the Import button and follow the wizard.
For an existing relationship, please don't add it as a resource to the dataset. Existing relationships should always be brought in through the import wizard.
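If you prefer to do the first two steps from the CLI, something along these lines should work; the dataset name is a placeholder, "Remote backups only" is just an example policy name, and I'm assuming your build supports setting the policy with -p. The import itself still happens through the NMC wizard:

dfpm dataset create importedOssv
dfpm dataset modify -p "Remote backups only" importedOssv

Regards adai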
Hi Sheel, I did a complete test and found out why there was no conflict of schedules. First let me explain my setup and test, then throw more light on the same.
Created a SnapVault relationship between 2 filers using Backup Manager.
Allowed the BM schedules to run for a few cycles.
Imported the relationship into a dataset.
Now the dataset schedule runs on it.
I also learned in this process how Backup Manager works. The schedules for BM-managed relationships are maintained by dfm, including the archive snapshot creation; only the retention settings are written on the snapvault destination volume.

mpo-vsim16> snapvault snap sched svSmDest
create svSmDest 0@-@0 preserve=default
create svSmDest dfm_sv_hourly 8@-@0 preserve=default,warn=0
mpo-vsim16>

If you look at the schedules in the volume, there are no schedule times or days; only retention settings are written. Typically, on a secondary, a -x transfer schedule is created, which pulls the changes from the source, transfers them, and creates an archive snapshot on the secondary. Below is the snippet from the snapvault command reference manual:

The third configuration step is to establish the SnapVault snapshot schedules on the primaries and the secondary with the snapvault snap sched command. A snapshot schedule in a volume creates and manages a series of snapshots with the same root name but a different extension such as sv.0, sv.1, sv.2, etc. (For snapshots on SnapLock secondary volumes, the extensions are representations of the date and time the snapshot was created rather than .0, .1, etc.). The primaries and secondary must have snapshot schedules with matching snapshot root names. On the secondary, the -x option to the snapvault snap sched command should be set to indicate that the secondary should transfer data from the primaries before creating the secondary snapshot. If -x is set, when the scheduled time arrives for the secondary to create its new sv.0 (or sv.yyyymmdd_hhmmss_zzz for SnapLock volumes) snapshot, the secondary updates each qtree in the volume from the sv.0 snapshot on the respective primary. Thus, the primaries and secondaries need snapshot schedules with the same base snapshot names. However, snapshot creation time and the number of snapshots preserved on the primary and secondary may be different.

But in the case of BM-managed relationships, the schedules are managed by BM and snapshot creation is done by BM after a successful transfer; only the snapshot retention is delegated to ONTAP via the create schedules, without any schedule times in them. Below are the details of the relationship after it was imported into the dataset.

[root@vmlnx221-118 log]# dfpm dataset list importedSv
Id   Name        Protection Policy  Provisioning Policy  Application Policy  Storage Service
---- ----------- ------------------ -------------------- ------------------- ---------------
999  importedSv  Back up

[root@vmlnx221-118 log]# dfpm dataset list -R importedSv
Id   Name        Protection Policy  Relationship Id  State        Status  Hours  Source                      Destination
---- ----------- ------------------ ---------------- ------------ ------- ------ --------------------------- ---------------------------
999  importedSv  Back up            996              snapvaulted  idle    1.2    mpo-vsim11:/svSrcNtn/qtOne  mpo-vsim16:/svSmDest/qtOne
[root@vmlnx221-118 log]#

mpo-vsim16> snap list svSmDest
Volume svSmDest
working...
%/used     %/total    date         name
---------- ---------- ------------ --------
22% (22%)  0% ( 0%)   Jul 05 11:07 2013-07-05_2200+0530_hourly_importedSv_mpo-vsim16_svSmDest_.-.qtOne
42% (31%)  0% ( 0%)   Jul 05 11:07 mpo-vsim16(4043456708)_svSmDest-base.1 (busy,snapvault)
58% (39%)  0% ( 0%)   Jul 05 10:07 2013-07-05_2100+0530_hourly_importedSv_mpo-vsim16_svSmDest_.-.qtOne
67% (39%)  0% ( 0%)   Jul 05 09:32 2013-07-05_2032+0530_weekly_importedSv_mpo-vsim16_svSmDest_.-.qtOne
73% (39%)  1% ( 0%)   Jul 05 09:29 2013-07-05_2029+0530_hourly_importedSv_mpo-vsim16_svSmDest_.-.qtOne
77% (40%)  1% ( 0%)   Jul 05 09:27 2013-07-05_2027+0530_daily_importedSv_mpo-vsim16_svSmDest_.-.qtOne
80% (39%)  1% ( 0%)   Jul 04 12:02 dfm_sv_hourly.0
82% (39%)  1% ( 0%)   Jul 04 11:02 dfm_sv_hourly.1
84% (39%)  1% ( 0%)   Jul 04 10:02 dfm_sv_hourly.2
85% (39%)  1% ( 0%)   Jul 04 09:02 dfm_sv_hourly.3
87% (39%)  1% ( 0%)   Jul 04 08:02 dfm_sv_hourly.4
88% (39%)  1% ( 0%)   Jul 04 07:02 dfm_sv_hourly.5
89% (39%)  2% ( 0%)   Jul 04 06:02 dfm_sv_hourly.6
89% (38%)  2% ( 0%)   Jul 04 05:02 dfm_sv_hourly.7

If you notice, every BM schedule was skipped after the relationship was imported into the dataset. The only downside is that this is written to a log file and not to any console. So any relationship managed by PM is skipped by BM.

[root@vmlnx log]# cat dfbm.log
Jul 04 15:00:14 [dfbm: WARN]: [8706:0x7fc93f6ea740]: ndmputil_svs_set_snap_sched: old reply (type 0x20500307) unfreed; freeing it.
Jul 04 15:00:14 [dfbm:DEBUG]: [8706:0x7fc93f6ea740]: ndmputil_free_reply: freeing 0x4ae77a0 expected 0x4ae77d0 (type=0x20500306)
Jul 05 00:00:40 [dfbm: INFO]: [26014:0x7ffdfd053740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 01:00:44 [dfbm: INFO]: [2965:0x7fc6ed348740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 02:00:36 [dfbm: INFO]: [11801:0x7f665dc31740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 03:00:46 [dfbm: INFO]: [20699:0x7f6f03bda740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 04:00:41 [dfbm: INFO]: [1581:0x7f6f97ef1740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 05:00:39 [dfbm: INFO]: [10655:0x7f703587f740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 06:00:40 [dfbm: INFO]: [19512:0x7fc1b89db740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 07:00:39 [dfbm: INFO]: [28382:0x7f477042c740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 08:00:38 [dfbm: INFO]: [5349:0x7f3a7f107740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 09:00:41 [dfbm: INFO]: [14098:0x7fa13ccce740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 10:00:40 [dfbm: INFO]: [22937:0x7f7cf1c20740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 11:00:40 [dfbm: INFO]: [31880:0x7fedfe471740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 12:00:44 [dfbm: INFO]: [8782:0x7f635e0d6740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 13:00:44 [dfbm: INFO]: [17489:0x7f994ca5f740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 14:00:39 [dfbm: INFO]: [26340:0x7f4061ee1740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 15:00:39 [dfbm: INFO]: [3261:0x7f5b893f8740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 16:00:41 [dfbm: INFO]: [12085:0x7f47fcca7740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 17:00:36 [dfbm: INFO]: [20952:0x7f6875da0740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 18:00:33 [dfbm: INFO]: [29775:0x7f0a0ebbc740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 19:00:37 [dfbm: INFO]: [6680:0x7fc419a53740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 20:00:45 [dfbm: INFO]: [15694:0x7fb94ae04740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 21:00:37 [dfbm: INFO]: [24730:0x7f6f7e567740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
Jul 05 22:00:37 [dfbm: INFO]: [1310:0x7f8c7566b740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.
[root@vmlnx log]#

All in all, it's not necessary to remove the schedules from BM, but doing so keeps the setup clean. Also, it's not recommended to remove the "create" snapvault snap sched entries from the secondary controllers, as they take care of retiring the old snapshots. And since the retention is count-based and applies only to snapshots with the same root name, it will have no impact unless you set the retention to 0.
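For contrast, on a secondary that is not BM- or PM-managed you would typically see a transfer-and-create schedule with explicit times, something like the following (the volume name, snapshot root name, count, and times here are purely illustrative):

mpo-vsim16> snapvault snap sched -x svSmDest sv 8@mon-fri@20

This is the -x style schedule the command reference above describes: it pulls from the primaries and then creates the archive snapshot, whereas the BM-written schedules carry only the retention count. Regards adai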
When we restore, there is no change in the permissions of the folders. If they were 600 before, then even that server should have encountered the problem, right? Regards adai
Hi, The general recommendation is to run NMC on a box other than the server; otherwise there is resource contention between the server and NMC. One thing I can suggest is to install NMC within the same LAN and launch it from a jump host instead of your desktop, and see if you encounter the same issue. Regards adai
Hi Christian, I suggest you run this on Linux rather than Windows. Based on my experience with split at other customer sites, we have found Linux to be faster than Windows. Regards adai
BTW Thomas, I suggest you wait a week or so and upgrade to 5.2, which will be GA by then, instead of 5.1. There are also quite a few improvements in 5.2 compared to 5.1. Regards adai
Hi Joyce, Is perl installed on your DFM/OCUM server? I tried to recreate the error, but the only time I could was when perl was not installed. Once I installed perl, the error went away. There are also a couple of things I noticed which you can try, and confirm whether they help:
The PATH entry should only go up to the bin directory and should not include the .exe itself.
Since you installed perl after dfm/ocum, can you stop and start the dfm service and check if you still get the error? We had a similar issue in another case and this solved it: https://communities.netapp.com/message/112223
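A quick way to check both things from the server's command prompt; this is just a sketch of the sequence, not an exact transcript:

perl -v            (confirms perl is installed and resolvable via PATH)
dfm service stop
dfm service start

Regards adai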
Hi Neils, Happy to know that the problem is now resolved. As you said, just starting the dfm service would have sufficed. As I said earlier, I could only replicate this error when perl was not installed. Also, doing an internal search, I found that this can happen when perl is installed after the installation of dfm. Until version 3.8 of DFM or so we used to bundle perl along with the installation, but from then on we stopped doing so. BTW Muhammad, can you confirm whether yours is resolved as well? Regards adai
Hi Thomas, Is my understanding of your move correct, as I summarized in my earlier post? If YES, then the ISG text is not accurate and not applicable here. For example, I have done this personally for another activity: a dfm backup from 4.0.2D12 running on Linux, restored onto a 5.1 DFM running on W2K8, just by copying the .ndb file and executing the CLI dfm backup restore. Just think: is it even possible to move the Linux path to Windows as per the text in the ISG? I don't understand why it was written, or in what context. But I know for sure it is not applicable to your situation, if my understanding as per my earlier summary is correct. That situation applies only when you restore the monitordb.db file and not the .ndb file; in that case SQL will not start if you don't set the dbdirlog to the same value it had for the monitordb.db. But this is something internal, and not supported or recommended for customers. Regards adai
Hi Mariko-san, Good to know that the issue got solved. But I would like to know how the permission of the dfm encryption key got changed. Do you have any idea how this could have happened? Regards adai
Hi Saran, The LUN details are retrieved via API, and that requires the credentials of the cluster. Have you set the cluster admin user name and password? As Arun said, run dfm host diag <clustername> and see if hostLogin etc. are set.
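To set the credentials and then verify them, the sequence would look roughly like this; the cluster name, user, and password below are placeholders:

dfm host set <clustername> hostLogin=admin hostPassword=secret
dfm host diag <clustername>

Regards adai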
Hi Thomas, Let me see if I got your situation correctly. You are currently running version 4.0.2D12 on server A. You would like to install version 5.1 on server B and restore the DB backup from server A on B. Is my understanding correct? If so, all you have to do is the following:
1. Install OCUM 5.1 (though I would suggest you go for 5.2, which will be GA in a couple of weeks) in your preferred location on server B.
2. Take an archive backup of DFM 4.0.2D12 on server A.
3. Copy the backup taken in step 2 to the <installdir>/DataFabric Manager/data directory of server B.
4. Execute the CLI dfm backup restore <backup filename>.
When you restore, the DB location of the server is used and the one in the backup file is ignored. The Install and Setup Guide is incorrect; let me open a bug and get it corrected. BTW, dfm datastore setup will move things like the database, the perf dir, the script-plugin dir, etc.
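To make the sequence concrete, the commands would look roughly like this; the backup filename is a placeholder for whatever .ndb file the backup step produces:

On server A (4.0.2D12):
dfm backup create

On server B (5.1/5.2), after copying the resulting .ndb file into the data directory:
dfm backup restore <backup filename>

Regards adai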
Hi Niels, Is perl installed on your DFM/OCUM server? I tried to recreate the error, but the only time I could was when perl was not installed. The reason SED works is that it's an exe and doesn't need an interpreter, unlike the script plugin we provided, which is written in perl. Once I installed perl, the error went away. There are also a couple of things I noticed which you can try, and confirm whether they help:
The PATH entry should only go up to the bin directory and should not include the .exe itself.
Since you installed perl after dfm/ocum, can you stop and start the dfm service and check if you still get the error? Regards adai
Hi Muhammad, I tried to reproduce this, and the only time I was able to was when perl was not installed; I got the same error as you:

Execution of 'perl C:\Program Files (x86)\Netapp\DataFabric Manager\DFM\script-plugins\PM_Extractor\PM_Extractor.pl' failed. Reason: The system cannot find the file specified

Once I installed perl, the error went away. There are also a couple of things I noticed which you can try, and confirm whether they help:
The PATH entry should only go up to the bin directory and should not include the .exe itself.
Since you installed perl after dfm/ocum, can you stop and start the dfm service and check if you still get the error?
After trying these two things, if you still hit the issue, please let us know. Regards adai
Hi Mariko-san, This looks like some issue with the encryption keys. I suggest you open a case with NetApp Support. You can also try what Peter said, but that is more with respect to certificates than keys. Also, can you paste the error message from error.log? Regards adai
Hi Mariko-san,

> Hi, adai. Thanks for the info. I have set up SNMP traps in the original DFM, so I must need to modify that part.

Yes, if required, due to a change of IP address or hostname of the DFM server.

> I have an additional question. If I migrate the DB from the original DFM to the new one, will the LDAP info in the original DFM be imported to the new DFM? I use the same LDAP server for user authentication. To set up the LDAP authentication, I have to get an LDAP server admin to set it up in the DFM. If I can import it, there's no need to bother him. He is a nice guy, but if I don't need to ask him, I don't want to.

You don't need to do anything new, as long as there is connectivity between the new DFM server and your LDAP server.

Regards adai
Hi Mariko-san, I suggest you instead install version 5.2RC1 on server B, as it will be the GA candidate in a couple of weeks. You would need to change/update the following if server B's hostname and IP address will differ from server A's, and if you have configured them:
SnapManager integration with Protection Manager
SNMP trap hosts configured in ONTAP/filers
Third-party trap hosts configured to receive SNMP traps from OCUM
Regards adai
Hi Marc, This is more of an ONTAP issue. I suggest you open a case against this bug; support should be able to help you. Sorry that I couldn't help you on this. Regards adai
Hi Nikita, We don't need NDMP for provisioning volumes, but we do need it for all backup- or mirror-related operations, such as snapmirror or snapvault create, initialize, update, etc.
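Setting the NDMP credentials and then verifying them from the DFM server would look roughly like this; the filer name and credentials are placeholders, and the option names are from memory:

dfm host set <filername> hostNdmpLogin=root hostNdmpPassword=secret
dfm host diag <filername>

dfm host diag should then show whether the NDMP login and connectivity check out. Regards adai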
Hi, This is not possible. In the case of the Mirror node of the dataset, it's an entire volume, which is the source of the SV, and here you can't pick and choose qtrees or exclude any. It's all or nothing. Regards adai
Hi Igal, So all you do is VSM, and no QSM or SV, right? We already have a script that generates this report using the SDK; let me see if we can bundle it as a script plugin. Please ping me back in a week or two regarding the same. Regards adai
Hi Chris, I spoke to one of the long-timers on Performance Advisor about this. The XML <charts/> is correct; it is simply the self-closing form of <charts></charts>. Also, the SDK documentation is incorrect: charts is not an optional element. Regards adai
Hi, The new name for DataFabric Manager is OnCommand Unified Manager. The old managers (Operations, Protection and Provisioning) are now called capabilities:
Operations Manager is now called the Operations capability.
Protection Manager is now called the Protection capability.
Provisioning Manager is now called the Provisioning capability.
Performance Advisor is now called the Performance capability.
DFM (DataFabric Manager) is no longer the product name, but it is still used to refer to the server, the DFM server, which is the central piece. Hope this helps. Regards adai