Is this a global option on the dfm server? Yes, it's a global option. If I set it, does it apply to all OSSV host transfers? No. It applies only to the transfers that hit this failure; it is not used otherwise. Regards adai
Hi, Please check the following public report and rectify the problem if any: http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=365167 Otherwise, please increase the retry count using the following option: dfm option set dpMaxGetStatusRetries=30. Regards adai
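If you want to confirm the change, you can list the option before and after setting it on the DFM server (if your DFM version does not accept a single option name, just grep the full options list instead):
dfm option list dpMaxGetStatusRetries
dfm option set dpMaxGetStatusRetries=30
dfm option list dpMaxGetStatusRetries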
Hi Estella, The master license key for DFM is made $0 and is available at the location below: http://now.netapp.com/NOW/knowledge/docs/olio/guides/dfm/license.shtml Note: This license should only be used in production and not for testing/eval or demo purposes. Regards adai
Check for the re-created relationship in the External Relationships tab and try to import it. If the deleted relationship's qtree is part of the dataset, remove it from the dataset using the dfpm dataset relinquish command. Then do a dfm host discover on the source and destination filers and wait a few minutes for the SnapVault discovery to complete. Then import them back into the dataset. Regards adai
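For reference, a rough command sequence on the DFM server; the filer and qtree names below are placeholders, not from this thread:
dfpm dataset relinquish dst-filer:/vol/backup_vol/backup_qtree
dfm host discover src-filer
dfm host discover dst-filer
Then wait a few minutes for the SnapVault discovery to complete and import the relationship back into the dataset from the External Relationships tab.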
Hi Low, Can you give more details, like the version of the DFM server? OSSV version? As a matter of fact, Data ONTAP doesn't support a snapshot name with a slash (/). Also, DFM does the following: a snapshot name can contain ASCII letters, ASCII digits, underscore '_', hyphen '-', plus sign '+' and dot '.'. All other characters are converted to 'x'. For example, a name such as 'vm/backup 1' would show up as 'vmxbackupx1'. That explains why your snapshot names contain the x characters. Regards adai
This is because your provisioning policy says the aggregate must be raid_dp, whereas the existing aggregate does not meet the policy. Either remove the raid_dp requirement from the provisioning policy or add an aggregate that meets the provisioning policy. Regards adai
Hi Soeren, If pingmon fails after the timeout and retries, it generates a "host down" event. In the case of FS Mon, which does discovery, if it fails it will not pick up any new volumes/qtrees/aggregates existing on the filer. Similarly, if it fails for 4 consecutive cycles it will mark the volume as deleted in the DFM db. Regards adai
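If you want to see what your DFM server is using for the ping timeout/retry and the monitoring intervals, a quick way is to filter the options list (I am not quoting exact option names from memory, hence the grep):
dfm option list | grep -i ping
dfm option list | grep -i interval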
Hi Pete, The default value for all the MaxRel options is 50. Also, there are other situations in which we create a new volume; for the others I am adding what I know based on my experience. So even if you have a fan-in ratio of 4, new secondary volumes are created only when the following conditions apply (see the commands after this note for how to check the inputs):

1. MaxRelsPerSecondaryVolume is not exceeded. If your primary volume has 51 qtrees, PM will create 51 qtree SnapVault relationships, which exceeds the max rels, so two destination volumes will be created.

2. PlatformDedupeLimit is not exceeded. For example, you already have 2 destination volumes because of the scenario above; now you add another volume to the same dataset primary. PM will try to grow the second secondary volume, and if doing so would exceed the maximum size for a dedupe-enabled volume on that platform, it will create a 3rd secondary volume instead of using the existing 2nd volume.

3. The volume language is the same for the existing destination volume and the new primary volumes. If the volume language of the 3rd primary volume is different from the volume language of the other two primary volumes, PM will create a 4th secondary volume and not use the same volume.

By this you can end up with 4 secondary volumes to back up qtrees coming from 4 primary volumes, in spite of the fan-in being 4. Hope this helps. Regards adai
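To check those inputs before adding a new primary volume to the dataset, something like the following works; the filer and volume names are placeholders, and the report name is from memory, so please verify it with dfm report list:
dfm report view qtrees primary-filer:/primary_vol      (number of qtrees in the primary volume)
primary-filer> vol lang primary_vol                    (volume language on the controller)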
Hi Sean, As of today there is no SPoG (single pane of glass) to provide centralized management. Please reach out to the product managers for the roadmap. Regards adai
Hi Michael, Once a dummy LUN and VM are created there is no need for a cron job; OnCommand will take care of updating this relationship. Regards adai
The Provisioning Manager issue is fixed in 4.0.1 if I remember correctly, whereas the DFM issue is still there. Please raise a case and add it to burt # 526035. Regards adai
Hi, There are two types of data stored on the DFM server: Operations Manager data (stored in the Sybase database) and Performance Advisor data (stored in flat files).

Operations Manager data is stored in two kinds of tables in the db, called non-historic and historic tables. The point-in-time data in the non-historic tables is used for generating alerts and is shown in all reports and in the web UI (except the graphs). All the graphs shown in the web UI show consolidated or historic data from the historic tables. By default they cover 1d, 1w, 1m, 3m and 1y. Data beyond 1y is not shown in the web UI, but it can be accessed using the dfm graph CLI. As a matter of fact, none of the data is purged from the database; in particular, the history tables keep growing and are never purged. So if data for a graph is required beyond 1 year, say 2 years, use the dfm graph CLI with the graph name suffixed with 1y and specify the start and end time.

Performance Advisor data is stored in flat files and is purged based on the retention set for each counter group. The collection frequency, retention duration and the counter groups that are enabled for each filer can be seen in NMC under Performance Data -> Setup -> Host -> Data Collection.

Hope this helps and answers your question. Also, below is a link to all TRs and documents related to DFM, one of which talks about capacity management and historic data. OnCommand(DFM) and its related Technical Reports Regards adai
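For example, pulling graph data beyond a year from the CLI would look something like this; the graph name and volume are only placeholders, and I don't remember the exact flags for supplying the start and end time, so check dfm graph help first:
dfm graph help
dfm graph volume-usage-1y filer1:/vol1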
Hi, You don't need to license it on the storage system. The Operations, Protection and Provisioning Manager licenses on the storage systems are not used; it is only licensed on the DFM server. Just add your 14-character license key on the DFM machine. Regards adai
Hi Michael, dfpm dataset relinquish ATLFAS02:/vol/VMDS_TEST1_backup_CDCFAS01_vmds_test1_1/VMDS_TEST1_CDCFAS01_vmds_test1. "And once I have done that, set it to ignored, can I then just do a snapvault stop to delete that relationship?" Yes. Regards adai
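For completeness, the sequence would look roughly like this; the qtree path is the one from your post, and the snapvault stop runs on the secondary controller, so double-check the destination path before running it:
On the DFM server: dfpm dataset relinquish ATLFAS02:/vol/VMDS_TEST1_backup_CDCFAS01_vmds_test1_1/VMDS_TEST1_CDCFAS01_vmds_test1
On ATLFAS02: snapvault stop /vol/VMDS_TEST1_backup_CDCFAS01_vmds_test1_1/VMDS_TEST1_CDCFAS01_vmds_test1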
Hi Michael, We update a relationship only if there is a backup version that contains the source qtree. In this case, the root qtree probably does not have any VMware objects, so it is not part of any backup versions. That's the reason it is not getting updated. The second behaviour of conformance is also expected. What I would suggest is to relinquish the relationship from the dataset using the following CLI:

[root@lnx~]# dfpm dataset relinquish help
NAME
relinquish -- mark a relationship as external
SYNOPSIS
dfpm dataset relinquish { [ <destination-volume-name-or-id> ] | [ <destination-qtree-name-or-id> ] }
DESCRIPTION
The relationship will be marked as external. Source and destination objects are left unchanged.
[root@lnx~]#

After that, to prevent it from creating the relationship again, go to NMC and ignore the particular source qtree. Regards adai
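A usage example with a made-up destination qtree path (substitute your own secondary filer, volume and qtree names):
[root@lnx~]# dfpm dataset relinquish secondary-filer:/vol/backup_vol/root_qtree_backup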
Hi Michael, Is the datastore you are looking for reported under the datastore inventory but not in the Restore wizard, or is it missing in both places? If it is missing in both, then it could be a monitoring problem. Regards adai
Are all the services running? Check using the dfm service list CLI. Also check for any errors in the log folder under <installdir>/NetApp/DFM/Log. Can you also get the job detail output for the jobs: dfpm job detail <jobid> Regards adai
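Roughly the order I would check in; the job id below is a placeholder:
dfm service list
dfpm job list
dfpm job detail 1234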
Hi Rick, There is no way to consolidate two DFM dbs into one. The only way is to add the filers from one instance to the other and retire the former DFM server. Attached is the doc that can help you do this. Regards adai
Hi Chris, There is a report in DFM that does report on qtree usage including the soft quota. Below is the report:

C:\Documents and Settings\Administrator>dfm report view qtrees-available 134
Object ID  Qtree     Storage Server  Volume    Used   Available  Soft Limit  Disk Space Limit  Available (%)
---------  --------  --------------  --------  -----  ---------  ----------  ----------------  -------------
134        cifsonly  sim1            CifsOnly  12320  8160       5120        20480             39.8
Totals                                         12320  8160       5120        20480             39.8
C:\Documents and Settings\Administrator>

Where Soft Limit is the soft quota and Disk Space Limit is the hard quota. A screenshot of SM 2.0 is attached for verification.

The following DFM events are raised only when hard quotas are breached:

[root@lnx186-118 ~]# dfm eventtype list | grep -i qtree.kbytes
qtree-almost-full     Warning  qtree.kbytes
qtree-full            Error    qtree.kbytes
qtree-space-normal    Normal   qtree.kbytes
[root@lnx186-118 ~]#

Whereas there is a way to generate an event, or rather a trap, from the filer when the soft quota is exceeded, as follows. On the filer, set the DFM server as trap host:

sim1*> snmp
contact:
location: adaikkap-lxp
authtrap: 1
init: 1
traphosts: 192.168.98.10 (192.168.98.10) <192.168.98.10>   <----- this is my DFM server IP
community: ro public
sim1*>

Now I got the following events:

C:\Documents and Settings\Administrator>dfm report view snmp-traps-all 90
Event ID  Severity      Trap                     Received      Source ID  Source  Condition
--------  ------------  -----------------------  ------------  ---------  ------  ---------
255       Notification  quotaExceeded            25 Aug 02:18  90         sim1    productSerialNum=987654-32-0 productTrapData=Quota Event: status=exceeded, type=soft, volume=CifsOnly, limit_item=disk, limit_value=5120, treeid=1
254       Notification  softQuotaExceeded        25 Aug 02:18  90         sim1    productSerialNum=987654-32-0 productTrapData=Soft block limit exceeded for tree 1 on volume CifsOnly
253       Notification  quotaExceeded            25 Aug 02:18  90         sim1    productSerialNum=987654-32-0 productTrapData=Quota Event: status=exceeded, type=threshold, volume=CifsOnly, limit_item=disk, limit_value=4096, treeid=1
252       Notification  softQuotaExceeded        25 Aug 02:18  90         sim1    productSerialNum=987654-32-0 productTrapData=Threshold exceeded for tree 1 on volume CifsOnly
175       Critical      shelfFault               25 Aug 01:56  90         sim1    productSerialNum=987654-32-0 productTrapData=Enclosure services has detected an error in access to shelves on channel v0.
173       Warning       globalStatusNonCritical  25 Aug 01:52  90         sim1    productSerialNum=987654-32-0 miscGlobalStatusMessage=/vol/auto_grow_test is full (using or reserving 100% of space and 9% of inodes, using 100% of reserve).
172       Alert         volumeFull               25 Aug 01:52  90         sim1    productSerialNum=987654-32-0 productTrapData=/vol/auto_grow_test is full (using or reserving 100% of space and 9% of inodes, using 100% of reserve).
171       Notification  volumeOnline             25 Aug 01:52  90         sim1    productSerialNum=987654-32-0 productTrapData=Volume aggr1 is online.
170       Notification  volumeOnline             25 Aug 01:52  90         sim1    productSerialNum=987654-32-0 productTrapData=Volume aggr0 is online.
168       Notification  linkUp                   25 Aug 01:52  90         sim1    productSerialNum=987654-32-0 ifIndex.X=3
167       Notification  linkUp                   25 Aug 01:52  90         sim1    productSerialNum=987654-32-0 ifIndex.X=1
166       Information   coldStart                25 Aug 01:52  90         sim1    productSerialNum=987654-32-0
162       Critical      shelfFault               04 Aug 23:56  90         sim1    productSerialNum=987654-32-0 productTrapData=Enclosure services has detected an error in access to shelves on channel v0.
161       Critical      shelfFault               04 Aug 22:56  90         sim1    productSerialNum=987654-32-0 productTrapData=Enclosure services has detected an error in access to shelves on channel v0.
160       Critical      shelfFault               04 Aug 21:56  90         sim1    productSerialNum=987654-32-0 productTrapData=Enclosure services has detected an error in access to shelves on channel v0.
C:\Documents and Settings\Administrator>

Hope this helps. Regards adai
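If the DFM server is not yet a trap host on the filer, adding it is a one-liner on the controller (replace the IP with your own DFM server's address), and make sure SNMP is enabled with options snmp.enable on:
sim1*> snmp traphost add 192.168.98.10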
Hi Richard, Are you getting any errors on the OnCommand Console? Can you get the output of dfm option list | grep -i http from your DFM server? Regards adai