Thank you both for the quick replies.

> In summary, the NMC is used to work with Protection Manager and Provisioning Manager, and the Web Interface is Operations Manager, correct?

Correct, with one more addition: Performance Advisor also uses the NMC.

> Also, all three use the same Sybase database, which is located where?

It is generally under <installation dir>/NTAPdfm/data.

> I checked Computer Management and I don't see a Sybase service running.

As I said, it is an embedded database; you can see the SQL service running using the CLI "dfm service list":

[root@lnx ~]# dfm service list
sql: started
http: started
eventd: not started
monitor: not started
scheduler: not started
server: started
watchdog: not started
[root@lnx ~]#

Regards,
adai
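A minimal sketch of the point above, assuming a Linux install root of /opt/NTAPdfm (substitute your own <installation dir>). It only computes the expected data directory; the dfm command itself is shown as a comment, since it exists only on a DFM server.

```shell
# Assumed install root; override with DFM_ROOT=... if yours differs.
DFM_ROOT="${DFM_ROOT:-/opt/NTAPdfm}"

# The embedded Sybase ASA database files live under the data directory.
DB_DIR="$DFM_ROOT/data"
echo "Embedded Sybase data directory: $DB_DIR"

# On the DFM server itself you would confirm the database engine with:
#   dfm service list
# and look for "sql: started" (there is no separate OS-level Sybase service).
```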
> They also installed Management Console Version 3.0, but I installed NetApp Operations Manager 4.0 and I am a bit confused. The 3.0 Management Console is still installed, and I assume both are pointing to the same database, the console has been replaced by the web interface, and I can remove Console 3.0?

Don't remove the NetApp Management Console (NMC). All operations related to provisioning need the NMC (the same can also be done from the CLI, but not from the Web Interface).

> What type of database does Operations Manager use? I assumed some type of SQL Express?

Operations Manager uses an embedded Sybase ASA 10 database.

> We are licensed for Operations, Protection, and Provisioning; do all three use the same interface, which is either the MMC console in 3.0 or the web page in 4.0?

For everything related to Operations Manager (reporting, configuration management, password management, quota management), you will need the Web Interface or the CLI. For everything related to Provisioning Manager (datasets, provisioning), you will need the NMC or the CLI.

Regards,
adai
Raise a case against Bug 439756 for your database backup problem and large snapshot space. In order for the snapshot-based backup to work, the perfdata, script-plugin, and db directories need to be on either a LUN or local storage, not otherwise. The documentation for setting up snapshot-based backup:

https://now.netapp.com/NOW/knowledge/docs/DFM_win/rel381/html/software/upgrade/install7.htm

Regards,
adai
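As a rough companion check (my own sketch, not from the DFM docs): on a Linux DFM server you can verify that a directory is not on NFS/CIFS before relying on snapshot-based backup. GNU `df -T` is assumed here.

```shell
# Heuristic check that a directory sits on local storage (or a LUN presented
# as a local filesystem) rather than NFS/CIFS. Assumes GNU `df -T`.
is_local_dir() {
    fstype=$(df -T "$1" 2>/dev/null | awk 'NR==2 {print $2}')
    case "$fstype" in
        nfs*|cifs|smbfs) echo "remote:$fstype" ;;
        *)               echo "local:$fstype"  ;;
    esac
}

# You would run this against the DFM db, perfdata and script-plugin directories.
is_local_dir /tmp
```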
Can you give more information?

- What is the DFM version?
- What is the ONTAP version?
- What is the OSSV version?
- Is it only the baseline job that's failing with this error, or every update job?
- Is it being backed up to a vFiler on the destination?

Regards,
adai
Hi Adam,

Operations Manager generates the following events when the license reaches its limit:

[root@lnx]# dfm eventtype list | grep -i license
management-station:license-expired                   Error    dfm.license.expiration
management-station:license-nearly-expired            Warning  dfm.license.expiration
management-station:license-not-expired               Normal   dfm.license.expiration
management-station:node-limit-nearly-reached         Warning  dfm.license.limit
management-station:node-limit-ok                     Normal   dfm.license.limit
management-station:node-limit-reached                Error    dfm.license.limit
management-station:protmgr-node-limit-nearly-reached Warning  dfm.protlicense.limit
management-station:protmgr-node-limit-ok             Normal   dfm.protlicense.limit
management-station:protmgr-node-limit-reached        Error    dfm.protlicense.limit
management-station:provmgr-node-limit-nearly-reached Warning  dfm.provlicense.limit
management-station:provmgr-node-limit-ok             Normal   dfm.provlicense.limit
management-station:provmgr-node-limit-reached        Error    dfm.provlicense.limit
[root@lnx]#

You can't stop the events, but you can change the severity of an event to Information so that you don't keep seeing nagging messages:

[root@lnx]# dfm eventtype modify help
NAME
    modify --
SYNOPSIS
    dfm eventType modify -v <event-severity> <event-name>
[root@lnx]#
[root@lnx]# dfm eventtype modify -v Information management-station:node-limit-reached
Modified event "management-station:node-limit-reached".
[root@lnx]# dfm eventtype list management-station:node-limit-reached
Event Name                                         Severity     Class
-------------------------------------------------- ------------ ------------------
management-station:node-limit-reached              Information  dfm.license.limit
[root@lnx]#

Regards,
adai
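If you want to downgrade all three node-limit-reached events in one go, a small wrapper over the command above can do it. This is a dry-run sketch that only echoes the commands; drop the echo to run them for real on the DFM server.

```shell
# Dry-run: print the `dfm eventtype modify` command for each node-limit event.
downgrade_node_limit_events() {
    for ev in \
        management-station:node-limit-reached \
        management-station:protmgr-node-limit-reached \
        management-station:provmgr-node-limit-reached
    do
        echo dfm eventtype modify -v Information "$ev"
    done
}

downgrade_node_limit_events
```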
Though you found the answers yourself, here they are for the convenience of others.

> Then resync the snapvault: in Protection Manager or on the console of the filer?

On the console of the filer.

> Rewrite the FSID: how can I do this? (ONTAP CLI, i.e. the console of the filer, or a dfm command?)

The ONTAP CLI.

Regards,
adai
Please do the following steps:

1. Suspend the dataset, using the CLI: dfpm dataset suspend <dsid/name>.
2. Note the FSIDs of the volumes being moved.
3. Using the traditional method, VSM the volume from aggr1 to aggr3, then quiesce and break the mirror.
4. Resync the SnapVault relationship (though I think this should not be needed, as they are on the same filer).
5. Rewrite the FSID of the migrated volume using the ONTAP CLI (rewrite fsid).
6. Run dfm host discover <filername> against the filer to which the volumes were moved.
7. Resume the dataset: dfpm dataset resume <dsid>.

Now things should work fine.

Regards,
adai
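The steps above can be sketched as a dry-run script: it only echoes the command sequence. The dataset and filer names are placeholders, and the VSM/FSID steps are left as comments because their exact ONTAP syntax depends on your version.

```shell
# Echo the PM-side command sequence for the migration; nothing is executed.
migrate_steps() {
    ds="$1"; filer="$2"
    echo "dfpm dataset suspend $ds"
    echo "# note the FSIDs, VSM the volume from aggr1 to aggr3, quiesce and break the mirror"
    echo "# resync the SnapVault relationship on the filer console if needed"
    echo "# rewrite the FSID of the migrated volume from the ONTAP CLI"
    echo "dfm host discover $filer"
    echo "dfpm dataset resume $ds"
}

migrate_steps mydataset myfiler
```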
Hi Fletch,

Is this also true?

> - Volumes by same name as 'strdata', 'strweb2' already exist on the storage system 'irt-na03.'(119).

In that case you may have to address this too to successfully complete the migration.

Regards,
adai
Hi Fletch,

I think you got the root cause. Raise a case with NGS to get around this issue. This is because of the bypass-model-check testpoint.

Regards,
adai
I will have to take that back: throttling in PM is not dynamic. It is static, and remains so until the job completes; it does not re-allocate the bandwidth of the finished volumes to the ones still running.

Regards,
adai
The throttle only dictates the speed, or bandwidth, of a job based on the time at which it is triggered. Once a job's bandwidth is set, based on the throttle window and its kb/s value, it stays constant for the entire job until completion; it does not vary even after the job has crossed the throttle window. So with a throttle you can only say: if a job starts between 8 AM and 5 PM, it must be set to 300 kb/s. A job that starts at 3:00 PM and runs until 6:00 PM does not change its throttle value to unlimited after 5 PM, because it was triggered at 3:00 PM, within the throttle window. But another job triggered at 5:05 PM will use unlimited bandwidth.

Regards,
adai
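These semantics can be illustrated with a tiny function (my own sketch, using the 8 AM to 5 PM / 300 kb/s window from the example): the bandwidth is decided once, from the trigger time, and never re-evaluated while the job runs.

```shell
# Decide a job's bandwidth from its trigger time (minutes since midnight).
# The 08:00-17:00 window at 300 kb/s matches the example above.
throttle_for_start() {
    start_min="$1"
    win_start=$((8 * 60)); win_end=$((17 * 60))
    if [ "$start_min" -ge "$win_start" ] && [ "$start_min" -lt "$win_end" ]; then
        echo "300kb/s"      # triggered inside the window: throttled for the whole job
    else
        echo "unlimited"    # triggered outside: unlimited for the whole job
    fi
}

throttle_for_start $((15 * 60))        # 3:00 PM job: 300kb/s, even if it runs past 5 PM
throttle_for_start $((17 * 60 + 5))    # 5:05 PM job: unlimited
```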
What is the hardware model of irt-na03? If it is a FAS20x0 or FAS3050, then yes, a maximum of only 4 volumes, including the root volume of the vFiler, are supported for vFiler migration. Refer to Section 7.2, Table 6 of TR-3814:

http://media.netapp.com/documents/tr-3814.pdf

Regards,
adai
Just wanted to know what version of DFM you are using. Also, were you consistently hitting this issue before you disabled dedupe? You may now try enabling dedupe on the provisioning policy once the volumes are provisioned.

Regards,
adai
> Is this something that is to be scripted outside of DFM, or is it part of DFM?

It's part of DFM.

> Is it a different dataset type?

Yes. The Application, Version and Server fields below make the difference.

[root@oncommand ]# dfpm dataset list -x app_dataset
Id: 433
Name: app_dataset
Policy: Back up
Description: test dataset
Owner: adai
Contact: adai
Volume Qtree Name Prefix:
DR Capable: No
Application: SDU
Version: 3.2
Server: red hat
Requires Non Disruptive Restore: No
Node details:
  Node Name: Primary data
  Resource Pools:
  Provisioning Policy:
  Time Zone:
  DR Capable: No
  vFiler:
  Node Name: Backup
  Resource Pools: RP3, RP4
  Provisioning Policy: Dedup-Secondary Provision
  Time Zone:
  DR Capable: No
  vFiler:
[root@oncommand ]# dfpm dataset list -x ossv_dataset
Id: 415
Name: ossv_dataset
Policy: Vaulting of OSSV
Description:
Owner:
Contact:
Volume Qtree Name Prefix:
DR Capable: No
Requires Non Disruptive Restore: No
Node details:
  Node Name: Primary data
  Resource Pools:
  Provisioning Policy:
  Time Zone:
  DR Capable: No
  vFiler:
  Node Name: Backup
  Resource Pools: RP3
  Provisioning Policy:
  Time Zone:
  DR Capable: No
  vFiler:
[root@oncommand ]#

By the way, were you able to discover the relationships created outside, in the external relationships tab?

Regards,
adai
I don't know how that works. Raise a case for this, let support know about this bug, and they should be able to do it for you.

Regards,
adai
This is because those names are still in the DFM db, but marked deleted, so when you reuse the same name you hit them. Just run the following CLI commands and let me know:

dfm volume list -a | findstr /i <your volume name>
dfm qtree list -a | findstr /i <your qtree name>

As long as these two commands show those names, you will not be able to start from 1. The only way I can think of is to delete those entries from the DFM db, for which you can again raise a case with NGS and ask them to clean up for you. Raise a case and attach your call record to Bug 448480.

Regards,
adai
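The findstr form above is for a Windows DFM server; on Linux the equivalent filter is grep -i. A dry-run sketch (the name is a placeholder):

```shell
# Print the Linux equivalents of the two checks; nothing is run against DFM.
check_deleted_name() {
    name="$1"
    echo "dfm volume list -a | grep -i $name"
    echo "dfm qtree list -a | grep -i $name"
}

check_deleted_name myvol
```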
Destroy the dataset, create it anew, and start the provisioning fresh. As long as the dataset remains, the counter keeps increasing; there is no way to reset it.

Regards,
adai
Can you post the details of the job (dfpm job detail <job id>)? Also, how many concurrent provisioning job requests did you make? How many backup jobs were running, if any?

Regards,
adai
This is because, in your case, you would have used the same base snapshot on the VSM destination as the source snapshot for the SV. But when you do it using a normal dataset, DFM (PM) will try to create a snapshot on this volume, which is read-only. This can be overcome by using an application dataset, where PM doesn't take the snapshot but instead propagates the snapshot that is registered with it. So you will have to create an app dataset using the API or NMSDK, and use the base snapshot of the VSM as the base snapshot for the SV. PM will take care of creating the snapshot on the SnapVault destination.