Failing that, can I import or migrate a Linux DFM installation to a Windows DFM installation without losing the data?

Yes. Take a backup of DFM on the Linux server using the command dfm backup create, then restore that backup on the new Windows server where you have installed DFM, using dfm backup restore.

Regards adai
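A minimal sketch of the sequence, assuming both servers run the same DFM version; the backup name is illustrative and the file lands in the configured backup directory:

# On the Linux DFM server
dfm backup create linux_to_windows.ndb

# Copy the backup file to the Windows server, then on the Windows server
# (with DFM already installed and its services running):
dfm backup restore linux_to_windows.ndb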
Moving volumes using the vFiler migration feature (of DFM/Provisioning Manager 3.8), or DataMotion of Data ONTAP 7.3.3 with Provisioning Manager 4.0, you can migrate your volumes seamlessly and at the same time have the volume history transferred, in other words preserved. But this requires the volume to be owned by a vFiler hosted on a filer running Data ONTAP 7.3.1 for offline vFiler migration, or 7.3.3 for DataMotion/online vFiler migration. Another way is to use the secondary storage migration (SSM) feature of Protection Manager 4.0 to migrate the volume; this only requires async SnapMirror between the filers the volume is moved between, and the volume being migrated must not have any client-facing protocols. When SSM is used, volume history data is preserved. Regards adai
Hi Emanuel,

Trying to remember rules of engagement ... - If you remove a destination volume (for whatever reason) and recreate it, can you "reconform" that relationship?

Yes, if you add it to the backup node and the volume satisfies all the conformance checks: it is at least 1.32 times the source volume size, the inode count is below the threshold, the volume used space is below the volume full or nearly-full threshold, and the containing aggregate is below the full or nearly-full threshold.

- Is there a way to repair a broken relationship?

Not as such in Protection Manager. You can use the filer snapvault commands to resync or restart the relationship and then import it into a dataset in PM.

Regards adai
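A hedged sketch of the filer-side repair mentioned above, assuming a 7-mode SnapVault qtree relationship; the filer names and paths are illustrative:

# On the secondary filer: resync the broken relationship against the
# existing destination qtree
snapvault start -r -S primary-filer:/vol/srcvol/qtree1 /vol/dstvol/qtree1

# Verify the relationship is back to snapvaulted state, then import it
# into a Protection Manager dataset
snapvault status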
Getting to the implementation of Protection Manager 4.0 and trying some use cases, different questions came up:

- When doing a restore from a local or secondary backup, a file named "restore_symboltable" is always created. This wouldn't be too bad if the file weren't 4 MB(!) in size even when restoring a 4 KB(!) file. Second: this file grows in 4 MB(!) increments every time you do additional restores to this destination. How can this file be avoided? What is this file good for?

This is the behavior of Data ONTAP.
http://now.netapp.com/NOW/knowledge/docs/ontap/rel705/html/ontap/tapebkup/recove25.htm
http://now.netapp.com/NOW/knowledge/docs/ontap/rel705/html/ontap/cmdref/man1/na_restore.1.html
There have been a few requests to delete this file after the restore, but it is not yet targeted for any release; please ask your sales contact to add your case to the request.

- There is a new space provisioning for the secondary in Protection Manager 4.0 which will automatically resize the secondary storage: does this apply at the aggregate level too? E.g. the primary-site aggregate is enlarged with several disks, and the volume to be vaulted is enlarged by nearly the same amount as the aggregate growth. Will Protection Manager, or maybe Provisioning Manager, enlarge the secondary aggregate if there are spare disks available?

We do not add disks to aggregates to enlarge them. Protection Manager/Provisioning Manager only resizes volumes.

- Or will there be an error message for the enlargement of the secondary volume because there is not enough space left in the aggregate?

Yes, saying there is not enough space to resize the volume because the containing aggregate is full, and there will also be a suggestion to add disks to the aggregate.

- Having a look at the secondary volume, all snapshots related to Protection Manager are busy. Does this have any negative impact on the systems? How big will this impact be? Is this a "normal state" for snapshots managed through PM?

Only the SnapVault or SnapMirror snapshots should be busy. Can you get the output of snap list from the filer?

- Could you please explain the following space allocation and reporting seen in PM: a secondary volume created through PM with several qtrees vaulted to it is created with no space guarantee (very good) but shows about 50% used space (why?). Looking at OM this isn't the case: the volume is only as big as the data moved to this destination, 2% (which is the right behavior). Why is Protection Manager showing nearly 50% used capacity?

Where in Protection Manager are you looking that shows 50% used capacity?

Regards adai
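For the busy-snapshot question, a quick way to check from the filer console; the volume name below is just a placeholder:

# On the secondary filer: list snapshots for the vaulted volume;
# only the SnapVault/SnapMirror baseline snapshots should show "busy"
snap list dstvol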
Yes. There is no single report as such today, but there is an RFE for it. IIRC kjag wrote a script that correlates these two sets of data and gives you a single report; it was posted on NTN. Let me find it and update this thread. Regards adai
Yes. When conformance ran, it detected snapshots older than the retention settings of the policy and marked them for deletion. Regards adai
You do make reference to a backup version table. Where is that located, and is it editable?

It is an internal database. No.

Would that be a way to change the snap name, and maybe to replace a reference that's been deleted?

If that is required, raise a support ticket and engineering will fix it.

On DFM versions, any thought to adding a feature to rename a snapshot via DFM, or being able to do annual backups with multi-year retention times?

Yes.

Regards adai
Can you try using the CLI and capture the error, if any?

dfm host set <hostname-or-id> hostNdmpLogin=<username> hostNdmpPassword=<password>

Then do a dfm host diag <hostname-or-id> to see what the NDMP status is.

Regards adai
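For example, with a hypothetical filer named filer01 (hostname and credentials are placeholders):

# Set the NDMP credentials DFM uses for this host
dfm host set filer01 hostNdmpLogin=ndmpuser hostNdmpPassword=secret

# Re-run diagnostics and check the NDMP section of the output
dfm host diag filer01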
Yes. AFAIR, when you change the retention count or duration it applies to all backups, not only those created after the change. To be sure, I tested it and found that my memory is right. Regards adai
What if I wanted commas in my output without resorting to Excel? That would give a clean, prepped report for automatic delivery. However, the client can easily add commas; this is just being picky right now.

Not possible today. I will bring it up as a request for the next release.

Regards adai
This can also be done with dfm report modify after the report has been created; it does not have to be done at report-creation time. Regards adai
Hi Robinson,

For documentation, please look at the Technical Report "Access to DataFabric Manager and Performance Advisor Data Using Database Access and Data Export" at the link below.
http://media.netapp.com/documents/tr-3690.pdf

In the TR, look at Section 3.5, Case A: a storage administrator wants to access the DataFabric Manager database views directly from third-party reporting tools such as Crystal Reports to generate customized storage capacity reports.

If you are setting up a DSN from a Windows system other than the Operations Manager server, the ODBC driver files need to be copied to that system from the OM server.

On 3.7/3.7.1, copy the following three files from "<dfm-install-dir>\Sybase\ASA\win32":
dbodbc9.dll
dblgen9.dll
dbcon9.dll
Then follow the instructions in these two pages:
ODBC driver required files - http://www.ianywhere.com/developer/product_manuals/sqlanywhere/0902/en/html/dbpgen9/00000657.htm
Configuring the ODBC driver - http://www.ianywhere.com/developer/product_manuals/sqlanywhere/0902/en/html/dbpgen9/00000658.htm

On 3.8/3.8.1, copy the following three files from "<dfm-install-dir>\Sybase\ASA\win32":
dbodbc10.dll
dblgen10.dll
dbcon10.dll
Then follow the instructions in these two pages:
ODBC driver required files - http://www.ianywhere.com/developer/product_manuals/sqlanywhere/1000/en/html/dbpgen10/pg-odbc-driver-deploy.html
Configuring the ODBC driver - http://www.ianywhere.com/developer/product_manuals/sqlanywhere/1001/en/html/dbpgen10/pg-configuring-driver-client-deploy.html

Then configure the DSN using the installed driver. The documentation above gives directions for the ASA driver.

Regards adai
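As a hedged sketch, this is the information the DSN will need; everything except the driver names is a placeholder, and the exact parameter names are in the SQL Anywhere configuration pages linked above and in TR-3690:

ODBC driver : SQL Anywhere 10 (DFM 3.8/3.8.1) or Adaptive Server Anywhere 9.0 (DFM 3.7/3.7.1)
Host/port   : <om-server-name>:<dfm-db-port>
Database    : <dfm-database-name>
User/passwd : <read-only database account configured for DB access, per TR-3690>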
What is the 'dfm report modify' syntax for reporting everything in GB? I also noticed there is no consistency with significant figures. I guess once the report is generated, Excel can be used to round off figures for a clean report.

dfm report modify help lists its options; it also takes the same options as dfm report create, like the format qualifier and precision. A field name in a report takes the following format:

field-name[:format-qualifier[.precision]][=pretty-name]

Example: Volume.overwriterate:MB.4="Vol OWR"

I created a report with report create as follows:

dfm report create -R volume -f Volume.name,Volume.overwriterate:B.2,Volume.used:MB.3 vol_pres

[root@lnx186-118 log]# dfm report view vol_pres | more
Volume Name         Volume Overwrite Rate (B)   Volume Used Capacity (MB)
------------------  -------------------------   -------------------------
aaa_root            352256                      0.316
abhi_vol5_NEW       268288                      0.121
abhi_vol6           268288                      0.125
abhi_vol7           531456                      401.957
abhi_volume         289792                      0.148
Adai_Dont_Delete    3618988032                  224039.789
aggroc              1219584                     1.203

The above was the output. Later I wanted to change the volume name to volume full name, the overwrite rate to MB with a precision of 4, and the used capacity to GB:

dfm report modify -f Volume.fullname,Volume.overwriterate:MB.4,Volume.used:GB.1 -d modify_precision -n changed_vol_pres vol_pres

Below is the report output:

[root@lnx186-118 log]# dfm report view changed_vol_pres | more
Volume Full Name                                            Volume Overwrite Rate (MB)   Volume Used Capacity (GB)
----------------------------------------------------------  ---------------------------  -------------------------
Abhi_filer:/vfiler_rootvol                                  1.6025                       0.0
Avatar:/Avatar                                              2.4111                       0.0
Avatar:/Avatar_root                                         0.8008                       0.0
backup_ds:/backup_ds_3                                      1.2393                       0.3
backup_ds:/backup_ds_root                                   0.8867                       0.0
backupvfiler:/backupvfiler_root                             0.7129                       0.0
custBurtVFiler:/snapvault_srcvolume_burt_memory_fix_SVR1    0.4512                       0.0

Secondly, I've failed to find a comprehensive command reference manual for DFM. Even with the help argument I find a lot of guesswork is needed to work through the syntax. Is there anything out there?

Access the man pages from the CLI, or from the WebUI as follows: Control Center -> Help -> General Help -> Contents -> Man Pages.

Regards adai
I did not see any messages on scripts (my customer is using some scripts in their environment) and I want to make sure that upgrading three of their OM servers (running 3.7 and 3.7.1) will be okay to proceed with.

Are you talking about script plugins? If so, they are taken care of during upgrades. Or is it about scripts that use the CLI? In that case, all dfm CLI changes are backward compatible.

Speaking of upgrades ... I do not know anything about the "Data Source Name (DSN) entry for Adaptive Server Anywhere 9.0" ... this was an upgrade gotcha; what exactly is this? NetApp text: "If you are upgrading from DataFabric Manager 3.7 or earlier to DataFabric Manager 3.8, you must delete the existing Data Source Name (DSN) entry for the Adaptive Server Anywhere 9.0 driver, and create a new DSN entry for SQL Anywhere 10." Is that a DFM application or server host element I need to modify?

This is basically the JDBC/ODBC connection made to the dfm database views to access the dfm history data using third-party reporting engines like Crystal Reports.

Regards adai
No changes to the provisioning of destination volumes for VSM relationships in 4.0. It will always create an aggregate-sized volume in the resource pool used for provisioning the VSM destination. Also note that even though it creates aggregate-sized volumes, after the SnapMirror (VSM) transfer the destination volume takes the size of the source volume: the vol size command shows the actual size of the volume, in your case 8.1 TB, but df on the destination volume shows the size of the source volume. Aggregate overcommitment for the destination aggregate is also calculated on the df size and not the vol size. Another option, as suggested by dmilani, is to use the hidden option to restrict the size. Regards adai
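To illustrate the difference described above; the volume name is hypothetical and the sizes are illustrative:

# On the destination filer
vol size dst_vsm_vol    # shows the provisioned (aggregate-sized) volume, e.g. ~8.1 TB
df -h dst_vsm_vol       # shows the source volume's size, which is what overcommitment uses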
The answer is no. One possible workaround is to access the reports from the reports archive directory and point that location at a web server instead of the default location.

[root@lnx ~]# dfm options list reportsArchiveDir
Option            Value
----------------- ------------------------------
reportsArchiveDir /opt/NTAPdfm/reports/
[root@lnx ~]#

A few customers have complained about this, and there are plans to change it.

Regards adai
1) Does that mean the value set in pmSecondaryMaxVolSizeMb will be the size of the secondary volume it creates, or will it create the secondary mirror volume based on the primary volume?

Yes. The value set in the option will be the size of the secondary volume; it is not based on the primary volume, so the second part of the question does not apply. This option basically sets the maximum size for secondary volumes instead of PM using the size of the aggregate where it is trying to provision the secondary volume.

Regards adai
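A minimal sketch of setting the option from the DFM CLI; the 4 TB value is just an illustration:

# Cap PM-provisioned secondary volumes at 4 TB (value is in MB)
dfm options set pmSecondaryMaxVolSizeMb=4194304

# Confirm the current value
dfm options list pmSecondaryMaxVolSizeMb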
Enable the option for dynamic secondary sizing, which is currently disabled in your case. Also set the max fan-in value to something larger than 1 if you wish to back up more than one primary volume to a single secondary volume. Regards adai
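The exact option names vary by DFM release, so as a hedged sketch you can locate them from the CLI and then set them; the grep patterns and values below are placeholders:

# Find the dynamic-sizing and fan-in options on your release
dfm options list | grep -i dynamic
dfm options list | grep -i fanin

# Then enable/adjust them, e.g.:
dfm options set <dynamic-sizing-option-name>=<enabled-value>
dfm options set <max-fan-in-option-name>=<value-greater-than-1>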
Only Ops-Mgr and PA data are exposed via views; Backup Manager, Disaster Recovery Manager, Protection Manager, and Provisioning Manager data are not exposed through views. What exactly are you looking for in the backup jobs? Regards adai
You can find the Ops-Mgr schema in the Ops-Mgr UI under General Help -> Database Schema. Here is an example link; replace Ops-Mgr with your Ops-Mgr server's IP address or FQDN. http://Ops-Mgr:8080/help/dfm.htm#>>3Ecmd=1>>pan=2 Regards adai