Hi Guenter, You are right: as of today, DFM/OnCommand does not monitor or report on disk scrubbing. This would definitely be a welcome addition to the monitoring and discovery capabilities for NetApp storage controllers in our monitoring tools. Regards adai
Hi Tim, Can you give us some more info? What version of DFM are you running? What is the platform OS version and flavor? Can you also check your sybase.log under <install dir>/NetApp/DFM/Log/ and see if there are any errors? Regards adai
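For reference, a quick way to gather both pieces of information from the CLI might be something like this (the log path depends on your actual <install dir>, and the grep pattern is only a starting point, not an exhaustive filter):

```shell
# Show the DFM server version
[root@ ~]# dfm about | grep -i version
# Scan the Sybase log for recent problems (substitute your real install dir)
[root@ ~]# grep -iE "error|fatal" "<install dir>/NetApp/DFM/Log/sybase.log" | tail -20
```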
Hi Richard, Are you using Performance Advisor 4.0 or later, which supports client stats? But AFAIK, PA does not collect client stats for more than 90/60 seconds. Also, have you enabled automated client-stats collection in PA/DFM? If so, what is the version of your DFM? Regards adai
Hi Scott, You explained the problem perfectly. We understand what the problem is and also have a workaround. What version of DFM are you using? I recommend you upgrade to 5.0.1 if you aren't on it already; if you still want to stay on 4.0.2X, then at least upgrade to 4.0.2D12. You are hitting bug 591117. Can you please raise a case against this bug? The bug is not fixed in either of the releases I recommended, but there is a workaround: please use the one suggested by Reid to get things going. The only caveat is that if you have both 32-bit and 64-bit aggregates in your 8.1 system, the following may happen: "By implementing this workaround we restrict volume provisioning based on the single <value> set for both 32-bit and 64-bit aggregates. Hence, if you provision a 32-bit dedupe volume using the 64-bit resource limit value, the volume will be provisioned with the 64-bit limit, which may make the volume unsuitable for deduplication." Here is the public report for the same: http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=591117 Regards adai
Hi Scott,

"For the volume name reuse, what I would like to see is a check box or an option in the provisioning policy that will allow re-use of volume names. I can think of a couple of use cases in my environment where I would want to reuse a volume name to keep sequential order. At the same time, I have use cases where I would not want to reuse the name... applying this at the provisioning policy level would provide this level of granularity as an option, and save the effort of crafting a post-provisioning script or renaming through the CLI."

This is a very good suggestion. I have added it to the bug that fixed this in OnCommand 5.1; let's wait and watch whether there is any action on it in a future release. As Shiva said, in 5.1 it is a global option and not a protection policy/dataset-level option.

"Also, on a side note but still in line with ProvMgr, is the 'maxQtreesPerVolume' option... I think it would be fantastic to allow this option to be configurable through the provisioning policy and set up per dataset rather than as a global setting... there are specific use cases where I may want the default 15 qtrees (or fewer) in a vol, but there are other times I may want 50 (or more)... being able to define this per dataset would be fantastic."

This option is just like any other volume-full threshold option: it exists at both the global level and the individual dataset level. The default value for this option is 15, and it is a hidden option. If you would like to tune it at the global level, do the following:

[root@ ~]# dfm options set maxQtreesPerVolume=10
Changed maximum qtrees per volume to 10.
[root@ ~]# dfm options list | grep -i maxQtreesPerVolume
maxQtreesPerVolume 10

If you want a different value than the global one (10 in my case, since I changed it) for a specific dataset, you can set it on that dataset as follows:

[root@ ~]# dfpm dataset set 6410 maxQtreesPerVolume=2
Changed maxQtreesPerVolume for 6410 to 2.
[root@vmlnx208-161 ~]# dfpm dataset get 6410
Maximum qtrees that can be provisioned out of a volume: 2
Allow custom volume settings on provisioned volumes: No
Enable periodic write guarantee checks on SAN datasets: Yes

Regards adai
Hi, I have two suggestions for you. First, please upgrade to OnCommand 5.0.1, which is the GA candidate of the 5.0 FCS release and has some of the regression and other burt fixes for OnCommand 5.0. Second, note that a custom report in the new reporting engine of OnCommand 5.0/5.0.1 is empty when you start. There is a video that gives a detailed example of how to use the new reporting engine and how to create a custom view. In short, you pick a detailed view, apply your filter, do all your customization, and save it as a custom view, which is totally different from the old reporting engine's way of creating custom views. Reporting with OnCommand 5.0 Regards adai
Hi Jerome, Now I remember your environment. We customized it for Protection Manager-only operations, and during that time we disabled cfMonInterval. From the above list, everything except dfm option set cfMonInterval= can be disabled, as the others have nothing to do with cluster status. Regards adai
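If you ever need to check or restore that one setting, something along these lines should work (the interval value is deliberately left as a placeholder; confirm your previous value before setting it):

```shell
# Check the current cluster-monitor interval
[root@ ~]# dfm options list | grep -i cfMonInterval
# Restore it (substitute the interval you had before it was disabled)
[root@ ~]# dfm options set cfMonInterval=<interval>
```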
Hi Aravind, Please check this FAQ, which will help you: https://library.netapp.com/ecmdocs/ECMM1278650/html/faq/index.shtml#_9.10 Regards adai
By default, Protection Manager does exactly what you showed, but the default value for this option is 1. If you would like to fan in multiple primary volumes into one single qtree, please change the following option to the value you like. [root@vmlnx ~]# dfm options list | grep -i fan dpMaxFanInRatio 1 [root@vmlnx ~]# BTW, this option is applicable only to SV and QSM, not to VSM, as VSM is always 1:1. As Chris said, we already do this by default for OSSV, where we accommodate up to 50 primary directories/drives in one secondary volume irrespective of the host from which they originate. This is controlled by an option which can be increased or decreased. Regards adai
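For example, to allow up to four primary volumes to fan in, you could raise the option like this (the value 4 is only an illustration; pick a ratio that matches your layout):

```shell
# Raise the SV/QSM fan-in ratio from the default of 1
[root@vmlnx ~]# dfm options set dpMaxFanInRatio=4
[root@vmlnx ~]# dfm options list | grep -i fan
dpMaxFanInRatio 4
```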
Hi Scott, I confirmed the behavior. What you are expecting is available in OC 5.1, which is currently in BETA and expected soon. Regards adai
Hi, As you guessed, it turns out to be a bug in Sybase. Below is the link for the same: http://search.sybase.com/kbx/changerequests?bug_id=694479 Regards adai
Hi Scott, Good to see you in the community after a long time; you must have been really busy. As of today it does not do this, but we have made a change in the upcoming release, OC 5.1, to accommodate old volume names that are in the DFM db but not on the filer. Let me check the behavior and get back to you on this. Regards adai
Hi Mauro, Only one setting can be applied for each of the following per dataset, irrespective of how many members the dataset contains. You cannot have different settings for members within the same dataset; if that is what you wish, you should group them into different datasets. You can have only one of each of the following in a dataset, for all its members:

1 Local Backup Schedule
1 Remote Backup Schedule
1 Throttle
1 Local Backup Retention
1 Remote Backup Retention
1 Protection Policy

For example, say you have 2 volumes in a dataset. For these two volumes you can have only one retention on primary and a different retention on secondary, but within the primary you cannot have one retention for volume 1 and another for volume 2. In the same way, they will also share the same local and remote backup schedules: the schedule times for local and remote backups can differ, but within the local backup you cannot have different schedules for volume 1 and volume 2. Hope this helps. Regards adai
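As a sketch of the grouping approach (the dataset and volume names here are hypothetical, purely for illustration), you would create a second dataset and add the volume that needs different settings to it:

```shell
# Hypothetical names -- substitute your own dataset and volume
[root@ ~]# dfpm dataset create dataset2
[root@ ~]# dfpm dataset add dataset2 filer1:/vol2
```

Each dataset then carries its own schedules, retentions, throttle, and protection policy.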
Hi Aravind, OnCommand Core Package is the next version of DFM 4.0. Both OnCommand and DFM still use the DFM server, which is the discovery/monitoring/alerting engine. A database backup (the file ending in *.ndb) contains the following directories: Data, Perf Data, and Scripts-Plugin. The Data directory contains the embedded Sybase database, namely the monitordb.db and monitordb.log files. The Perf Data directory contains the Performance Advisor flat file for each host and the trend file. The Scripts-Plugin directory contains the scripts, their outputs, and other associated files. All configuration that you have done to your old DFM is always stored in one of these directories, in most cases inside the db. Hope this helps you upgrade to OnCommand 5.0.1 without any concerns. Regards adai
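If you have not taken the backup yet, it can be created from the CLI; a minimal sketch (argument details vary by DFM version, so check dfm help backup on your server first):

```shell
# Create an *.ndb database backup, then confirm it exists
[root@ ~]# dfm backup create
[root@ ~]# dfm backup list
```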
Hi Scott, Can you give some more info: what flavor of Linux? How much memory and CPU? Is the memory reserved or allocated? Also, can you check the jetty.log under the log folder for any errors? Regards adai
Hi Rick, Thanks for the details. BTW, AFAIK plugins are not required for the PA capability. Only the config management capability for filers and vFilers requires plugins, to manage the options on the controller and some config files. Regards adai
Hi Thomas, Response to your 1st question: it is working as expected. The behavior you are expecting is coming in version 5.1; if you would like to take a sneak peek, please sign up for the BETA. Below is the link to the same: Welcome to OnCommand 5.1 BETA Program. The exact requirement is met there: "Eliminate the requirement to include original directory path when restoring to an alternate location". Response to your 2nd question: yes. As you know, in DFM we only discover NetApp storage objects, not directories, so you will have to enter your location manually if you want to restore into a directory inside a qtree. Regards adai
Hi Rick, Can you try turning on the following options? Both are currently Disabled: perfAdvisorShowAllViews, perfAdvisorShowDiagCounters. Regards adai
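To turn them on, something along these lines should work (I am assuming Enabled is the accepted value, matching how the options are displayed; verify with dfm options list afterwards):

```shell
[root@ ~]# dfm options set perfAdvisorShowAllViews=Enabled
[root@ ~]# dfm options set perfAdvisorShowDiagCounters=Enabled
[root@ ~]# dfm options list | grep -i perfAdvisorShow
```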
Hi Thomas, Did you try using NMC for the restore? Backup Manager is End of Support and is not the place to restore snapshots taken by PM (though it should work). I tried the same and hit the error you mentioned while using the Backup Manager web UI; pasted below is the screenshot. But when I tried to restore the same snapshot using NMC, I did not hit an error. Regards adai