Hi Francesco, What problem is it causing in your environment? This is internal to OnCommand and should not affect any of the functionality; please let us know if it does. Also, as kjag said, it is not used in any reports or calculations. Regards adai
Does your FQDN contain the word opsmgr or dfm? Can you try to access the OnCommand console using the IP address instead? http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=541326 Regards adai
Hi Todd, For old backup versions you should go to the old dataset, as a backup version is metadata attached to the dataset and not to the volumes of the dataset. The old dataset will take care of retiring or expiring the snapshots based on the retention settings. The "CONSEQUENCES OF DELETING A DATASET" section explains this, and that is why we recommend not deleting the old dataset. You will have to retain the old dataset for all restores from previous backup versions, or until they expire. Or, as always, you can use the snapshots that still exist on the volume to restore them. Regards adai
Hi KK, As you said, in order to import the relationship you will have to relinquish both the source and destination. Also, a secondary volume cannot be a member of more than one dataset, but a primary volume can be. Please follow the exact procedure to move the relationships. Regards adai
As Erik mentioned, you are facing bug 479828. Please add an /etc/hosts entry on your filers as a workaround. This bug is also fixed in the upcoming GA release of 5.0, namely 5.0.1. Regards adai
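For reference, a hosts-file workaround of this kind is usually a single line added to /etc/hosts on the storage system. The exact name to map depends on the bug's details; the IP address and hostname below are hypothetical placeholders, not from this thread:

```
# Hypothetical example entry in /etc/hosts on the filer:
# map the DFM server's hostname to a fixed IP address
10.0.0.50   dfmserver.example.com   dfmserver
```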
Hi Austin, Refer to this KB article on "How to run SnapVault over a non-primary interface in the Protection Manager": https://kb.netapp.com/support/index?page=content&id=1010493 And set the following option: dfm option set ndmpDataUseAllInterfaces=1 Regards adai
If you create an application dataset it would do a named snapshot transfer from the SnapMirror destination. If you create a normal dataset it would simply transfer the SnapMirror destination snapshot. Try the "Mirror then Backup" policy of Protection Manager. Regards adai
No, it's not. Volume quota overcommitment is due to the oversubscription of qtree quotas inside the volume. Take the example of a 10 GB volume: if you create 5 qtrees with a 3 GB quota each inside it, the total quota is 15 GB, so the volume is overcommitted by 5 GB, or 50%, since the volume itself is only 10 GB. Aggregate overcommitment, on the other hand, is due to the oversubscription of volumes in that aggregate. Take the example of a 100 GB aggregate: if you create 5 volumes of 10 GB inside it, the aggregate is not overcommitted. But if you create 5 volumes of 25 GB each (which can only be done if the volume guarantee is set to none), the aggregate is overcommitted by 25 GB, or 25%. Hope this helps. Regards adai
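The arithmetic above can be sketched as a small helper; this is my own illustration, not part of any OnCommand tool, and the function name is made up:

```python
def overcommitment(capacity_gb, committed_gb_list):
    """Return (overcommit_gb, overcommit_pct) for a container.

    capacity_gb: size of the volume (or aggregate).
    committed_gb_list: sizes promised out of it, e.g. qtree quotas in a
    volume, or volume sizes in an aggregate (guarantee=none case).
    """
    committed = sum(committed_gb_list)
    over = max(0, committed - capacity_gb)
    return over, 100.0 * over / capacity_gb

# 10 GB volume with five 3 GB qtree quotas -> overcommitted by 5 GB, 50%
print(overcommitment(10, [3] * 5))    # (5, 50.0)

# 100 GB aggregate with five 25 GB volumes -> overcommitted by 25 GB, 25%
print(overcommitment(100, [25] * 5))  # (25, 25.0)
```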
Hi Gireesh, Are you sure that the below CLI will work? "dfm graph -s 21772800 -e -11404800 -D '%a, %d %b %Y %H:%M:%S' volume-usage-vs-total G1" Since the data you are trying to access is in the past, you will have to use the appropriate suffix in the graph name. Since this data is more than 3 months old, one can get it only from the year-history graph. The suffix -1y needs to be added to the graph name, as in volume-usage-vs-total-1y, for the CLI to actually work. The help clearly says which suffix to use depending upon the date range: Use a suffix after the <graph-name> to select a different range of valid dates; use suffix -1d (the default) for data going back 24-48 hours, -1w for data going back 1-2 weeks, -1m for data going back 1-2 months, -3m for data going back 3-6 months, -1y for data going back indefinitely. Regards adai
Hi, Pasting the dfm graph CLI help; hope this helps you construct the command.

[root@ ~]# dfm graph help
NAME
    graph -- create graphs of data over time
SYNOPSIS
    dfm graph [ <options> ] <graph-name> <name-or-id-to-graph>
DESCRIPTION
    The graph command generates data over time for a particular item in the database. Use 'dfm graph' with no arguments to get the list of graphs.
    The options are:
        -s <start-date>
        -e <end-date>
        -D <date-format>
        -F <output-format>
        -h <height>
        -w <width>
    The <date-format> is a time format string as defined by the strftime library routine. The default date format is the one appropriate for your locale.
    The <start-date> is the number of seconds in the past that the graph should start; the <end-date> is the number of seconds in the future that the graph should end. Use a negative value for <end-date> if the graph should stop in the past.
    The <height> and <width> are the height and width of the graph image in pixels. These options are applicable to image formats only, i.e. when <output-format> is specified as png or gif.
    Use a suffix after the <graph-name> to select a different range of valid dates; use suffix
        -1d (the default) for data going back 24-48 hours
        -1w for data going back 1-2 weeks
        -1m for data going back 1-2 months
        -3m for data going back 3-6 months
        -1y for data going back indefinitely
    The values above are ranges because the amount of data still available depends on whether the current time is near the beginning or the end of a day, week, month, or quarter. For example, to see the last 60 days of CPU usage for a particular storage system, use the "-3m" suffix, because "-1m" may not have all the data you request. The command is
        $ dfm graph -s 5184000 -e 0 cpu-3m system1
    The <output-format> is the type of format in which the output will be generated. Supported output formats are text, html, csv, xls, png and gif. The output for html, png and gif is in binary format and needs to be redirected to a file with the proper extension to open it.
    For example, to see the output of the graph 'volume-usage' in gif format, issue a command like this:
        $ dfm graph -F gif volume-usage > graph.gif
[root@ ~]#

Please let me know if you still have difficulty getting the data you want. Regards adai
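Since -s is seconds in the past where the graph starts, and -e needs a negative value to end the graph in the past, a calendar window can be turned into the option values with a small helper. This is my own sketch (the function name is mine, and the day counts are examples I chose to match the commands in this thread):

```python
SECONDS_PER_DAY = 86400

def graph_window(days_back_start, days_back_end=0):
    """Convert 'from N days ago to M days ago' into dfm graph -s/-e values.

    -s is seconds in the past where the graph starts; -e is seconds in
    the future where it ends, so a window that ends in the past needs a
    negative -e.
    """
    start = days_back_start * SECONDS_PER_DAY
    end = -days_back_end * SECONDS_PER_DAY
    return start, end

# A window from 252 days ago to 132 days ago:
print(graph_window(252, 132))  # (21772800, -11404800)

# The last 60 days, ending now (as in the cpu-3m example above):
print(graph_window(60))        # (5184000, 0)
```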
The protection capability of OnCommand 5.0 does not support mirroring between 32-bit and 64-bit volumes, even though ONTAP 8.1 7-Mode does. So you will not be able to migrate a volume from 32-bit to 64-bit using the secondary space management feature of Protection Manager. Support for this cross mirroring is being developed for the next version of OnCommand, which is being built now. As of today, PM does not create, discover, or import a mirror relationship between 32-bit and 64-bit volumes, though homogeneous mirroring of 32 to 32 and 64 to 64 is supported in OnCommand 5.0. Regards adai PS: Please post questions related to OnCommand products in the OnCommandMgmtSoftware community to get earlier responses.
I agree Todd, but the dataset info is tied to a DFM server, and managing the same members from two DFM servers will cause a dueling effect. If you are bringing up a new instance, just importing the members from the external relationship into the dataset would be fine, provided you scrap the older instance. But if you want both to exist, then you will have to do one of two things: 1. Destroy the dataset in the older instance, or 2. Relinquish the relationship and leave the dataset. The 1st would reap the relationship within 2 hours, whereas the 2nd approach is better because it will mark the relationships as external and take care of deleting all the old snapshots as per the retention settings, so that you don't lose any space on the filer. Regards adai
Operations Manager does not report on LUN space usage, as a LUN is a space-reserved file. It's best to get that from the host side, or by having an agent do a walk on the LUN using FSRM or Host Agent. OM only reports the total size of the LUN. Regards adai
You can either upgrade to 5.0D1, or turn on the following option and live with the version you are on: dfm options set ndmpDataUseAllInterfaces=yes
You must have a Protection Manager/BCO license enabled in order to discover SnapVault relationships. Also, OM only discovers qtree-to-qtree SnapVault relationships that are created using the physical filer/vfiler0 IP address. Have the credentials for the source and destination filers been set? Regards adai
Hi Todd, You are right, a dataset is tied to a DFM server. The way to move a dataset from one DFM server to another is to relinquish the relationships on server 1 and import those relationships on server 2. The BPG details moving a relationship out of a dataset, which is the same procedure as moving from one server to another. https://kb.netapp.com/support/index?page=content&id=1013426

5.12 GRACEFULLY RETIRING RELATIONSHIPS FROM ONCOMMAND PROTECTION TOOL
In order to retire relationships (or to move relationships out of a dataset), administrators delete their datasets with the dpReaperCleanupMode option set to Never. This is not a healthy option. You need to follow these steps to retire a relationship gracefully from a dataset:
1. Relinquish the relationship. This will enable PM to mark the relationship as external; use the dfpm relinquish command.
2. Remove the primary volume/qtree from the dataset. Do not remove the secondary volume first, as conformance will trigger a new rebaseline.
3. Remove the secondary volume from the dataset.
4. Delete the dataset (if required).
Note: It is true that you can delete the dataset with dpReaperCleanupMode set to "never" to avoid the deletion of relationships, but it needs to stay that way forever; if re-activated, PM will try to reap the relationships.

CONSEQUENCES OF DELETING A DATASET
One really needs to consider the possible consequences of deleting a dataset, as the operation cannot be undone. Deleting a dataset not only affects the relationships (through dpReaperCleanupMode), but you will also end up losing the backup versions. The backup versions are an index of the snapshots of the members of a dataset. These backup versions are responsible for restore operations and for snapshot expiration based on retention time and retention count. If the backup versions are lost (due to the deletion of a dataset), then you will end up with orphaned snapshots that cannot be restored through the OnCommand Protection tool and will never expire, occupying huge amounts of space over time.

So follow 5.12: don't delete the dataset on server 1. Add the source and destination controllers to server 2 and import the relationships on server 2 into a newly created dataset. Regards adai
Hi, The reports are all point-in-time and show the last sample value. If you would like to see the space used over the last year, use the graphs with the built-in date ranges like 1d, 1w, 1m, etc. If you want a specific date range, use the dfm graph CLI and specify the dates. If you need more help, please take a look at this doc: Storage Capacity Management using OnCommand Operations Manager. For a list of all docs and TRs for OnCommand/DFM, refer to the link below: OnCommand(DFM) and its related Technical Reports. Regards adai
Hi Stephane, Why do you want to delete it manually? Isn't the policy retention setting doing the job of deletion? You can make the snapshots get deleted automatically by reducing the retention settings in the policy of the dataset, so that the next conformance run can delete them.

[root@~]# dfpm policy node get -q 58
nodeId=1
nodeName=Primary data
hourlyRetentionCount=2
hourlyRetentionDuration=86400
dailyRetentionCount=2
dailyRetentionDuration=604800
weeklyRetentionCount=1
weeklyRetentionDuration=1209600
monthlyRetentionCount=0
monthlyRetentionDuration=0
backupScriptPath=
backupScriptRunAs=
failoverScriptPath=
failoverScriptRunAs=
snapshotScheduleId=43
snapshotScheduleName=Sunday at midnight with daily and hourly
lagWarningEnabled=Yes
lagWarningThreshold=129600
lagErrorEnabled=Yes
lagErrorThreshold=172800
nodeId=2
nodeName=Backup
hourlyRetentionCount=0
hourlyRetentionDuration=0
dailyRetentionCount=2
dailyRetentionDuration=1209600
weeklyRetentionCount=2
weeklyRetentionDuration=4838400
monthlyRetentionCount=1
monthlyRetentionDuration=8467200
[root@~]#

Or delete them by running the following CLI:

[root@ ~]# dfpm dataset snapshot delete help
NAME
    delete -- delete snapshots of volumes in a dataset
SYNOPSIS
    dfpm dataset snapshot delete [ -D ] <dataset-name-or-id> <volume-name-or-id> <snapshot-name-or-id> [ <snapshot-name-or-id> ... ]
DESCRIPTION
    Delete snapshots of a volume member of a dataset. If the -D option is specified, only dry-run results will be displayed. No changes will be made to the dataset.
[root@ ~]#

To list the snapshots, use the following:

[root@ ~]# dfpm dataset snapshot list help
NAME
    list -- list snapshots for a particular object
SYNOPSIS
    dfpm dataset snapshot list [ <object-name-or-id> ]
DESCRIPTION
    List snapshots for a particular object. object-name-or-id can be a volume, aggregate, storage system, vFiler unit, or dataset. If object-name-or-id is not specified, then all snapshots are listed.
[root@ ~]#

If you know the volume names on which the snapshots are to be deleted, you can use the dfm run cmd to run commands on the filer. Regards adai
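The retention durations in the policy output above are in seconds, which is easy to misread. A tiny helper (my own sketch; the function name is mine) converts them to days for sanity-checking a policy:

```python
SECONDS_PER_DAY = 86400

def duration_days(seconds):
    """Convert a dfpm retention duration (in seconds) to days."""
    return seconds / SECONDS_PER_DAY

# Values from the 'dfpm policy node get' output above:
print(duration_days(86400))    # hourlyRetentionDuration  -> 1.0 day
print(duration_days(604800))   # dailyRetentionDuration   -> 7.0 days
print(duration_days(1209600))  # weeklyRetentionDuration  -> 14.0 days
print(duration_days(4838400))  # weeklyRetentionDuration  -> 56.0 days
print(duration_days(8467200))  # monthlyRetentionDuration -> 98.0 days
```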
Hi Joshu, The corresponding CLI option for the UI you have shown is below.

[root@]# dfm host set filer1 perfAdvisorTransport=junk
Error: perfAdvisorTransport: junk must be "httpOnly", "httpsOk", or "Disabled".

PerfDataExport is for exporting the performance data to a third-party DB or as CSV. Regards adai
What specific version of Apache does the customer want to use? As Pete said, we don't support any Apache that is not bundled. Regards adai