Hi Rajesh, As far as I remember, misaligned LUNs are listed only in PA and not in Operations Manager. The PA data is stored in flat files and not in the database. Regards adai
Hi Nathan, You will get the following event when SNMP is not responding:
[root@vmlnx ~]# dfm eventtype list | grep -i snmp
host-snmp-not-responding    Warning    host.snmp_status
host-snmp-ok                Normal     host.snmp_status
[root@vmlnx ~]#
Regards adai
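PS: If you want to confirm what the DFM server sees for that controller, a quick check (assuming CLI access to the DFM server; the hostname below is a placeholder) is:
[root@vmlnx ~]# dfm host diag <controller-name-or-id>
Among other things, this tests SNMP connectivity from the server to the controller.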
Hi Raj, This is not supported or recommended. Can you give us more details on the list of fields you are looking for, for which you need access to the database? Regards adai
Hi All, I have SC 3.6.0 running on RHEL 5.6 and I am looking for a startup script to get scAgent and scServer started automatically when the RHEL boxes reboot. Today I have to run the commands manually to start the scServer and scAgent. Does anyone have startup scripts that I can readily use? Regards adai
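PS: To show what I mean, here is a minimal sketch of the kind of SysV init script I have in mind for RHEL 5.x. All install paths and start/stop commands below are placeholders and would need to be replaced with the exact commands run manually today:
#!/bin/sh
# chkconfig: 345 99 01
# description: Start/stop Snap Creator scServer and scAgent (sketch only; adjust paths and commands)
SC_SERVER_DIR=/opt/NetApp/scServer   # placeholder install path
SC_AGENT_DIR=/opt/NetApp/scAgent     # placeholder install path
case "$1" in
  start)
    (cd "$SC_SERVER_DIR" && ./start_scServer)   # placeholder: actual scServer start command
    (cd "$SC_AGENT_DIR" && ./start_scAgent)     # placeholder: actual scAgent start command
    ;;
  stop)
    (cd "$SC_SERVER_DIR" && ./stop_scServer)    # placeholder: actual scServer stop command
    (cd "$SC_AGENT_DIR" && ./stop_scAgent)      # placeholder: actual scAgent stop command
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
exit 0
Saved as /etc/init.d/snapcreator, made executable, and registered with chkconfig --add snapcreator, something like this should run at boot in runlevels 3, 4 and 5.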
Hi Rick, As I said earlier, 100000 files is the limit and FSRM can't report more than 100000, though we could walk all the files. Regards adai
The default number of files listed in all SRM reports is 20, but it can be modified by setting the value to a higher number. However, the maximum number of files listed in SRM reports, irrespective of the value, is 100000. The options start with srm: do a dfm options list and grep for srm. Regards Adai
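For example (the exact srm option names vary by release, so check what your own dfm options list shows before setting anything):
[root@vmlnx ~]# dfm options list | grep -i srm
[root@vmlnx ~]# dfm options set <srm-option-name>=<new-maximum>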
DFM can also forward traps from the controllers if you have set the DFM server as the traphost on your controllers, and it can send those traps to HPOV using alarms created for traps. Regards adai
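On a 7-Mode controller that is typically something like the following, run on the controller console (the hostname is a placeholder):
filer> snmp traphost add <dfm-server-hostname>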
Hi, Yes, it can be, but one needs to understand the nuances before changing them: they are global options and affect all secondary volume provisioning. Regards adai
But I don't know if you are aware of this: the volumes provisioned by PM have a space guarantee of none and don't take up space in the aggregate right away. Either way, I would leave it to you. Regards adai.
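PS: If you want to verify this on the controller, listing the volume options of a PM-provisioned volume will show its guarantee setting (the volume name is a placeholder):
filer> vol options <pm_provisioned_volume>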
Hi,
Volume Provisioning Requirements: The second part of this thread is a question about how DFM comes up with the destination volume's size requirement. We intend to keep the same backup sets (retention and frequency) for the primary as the secondary. The volumes should be able to be the same size, but DFM seems to error unless the backup volume provisioned is roughly 1.3x the size of the primary volume to be backed up. Is this a setting that can be changed? What is the calculation for the destination volume size based on?
Yes, by default PM looks for a secondary volume that acts as a destination for an SV or QSM relationship to be 1.32x the source volume size. The rule of thumb is:
If volume used < 60%, then 1.32x the source volume total size.
If volume used > 60%, then 2.2x the source volume used size.
This is done to support long-term retention and, in some LUN cases, to accommodate fractional reserve.
IMHO, assigning a physical resource is not a good idea, for the following reasons:
1. The secondary volume can be dynamically resized (both grown and shrunk) when the source increases or decreases.
2. Also, a lot of checks, like volume language and inode count, need to pass for that volume to be eligible as an SV destination.
I strongly recommend you to use a resource pool in the secondary/SV destination of your dataset and take advantage of PM features.
Regards adai
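PS: To illustrate the rule of thumb with made-up numbers: a 1 TB source volume that is 50% used would call for a secondary of about 1.32 x 1 TB = 1.32 TB, whereas the same volume at 70% used (700 GB used) would call for about 2.2 x 700 GB = 1.54 TB.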
Hi Rick, What do you mean by the db being deleted? In OC 5.0 there are a couple of options which are turned from OFF to 90 days and which will purge the Protection Manager job history. This may shrink the db size, but I am not sure what you mean by the db being deleted. Also, starting with OC 5.0 there is only one core license, which enables all features, like protection, provisioning and disaster recovery, for up to 250 nodes. This will remove all the other licenses that you had with DFM 4.x, but that is something you shouldn't need to worry about. Regards adai
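PS: If you want to see those purge-related options on your own server, something like the following should list them (treat the grep pattern as a starting point, since the exact option names can differ between releases):
[root@vmlnx ~]# dfm options list | grep -i purge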
I think you are being hit by this known issue, which is fixed in OnCommand Unified Manager 5.0.1/5.0.2/5.1: http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=560602 Regards adai
Hi, You can ignore the message. The error message is telling you that the DNS server does not have the informational "SRV" records that are used in part to tell the filer where the DFM server is on the network. It's basically saying that the mechanism that allows DFM to auto-discover filers on the network is not found. BTW, this is OnCommand UM/DFM and not System Manager, AFAIK. Regards adai
Hi, It is very difficult to confirm from the data that you have provided what is causing the OnCommand Console slowness. But you can definitely increase the Jetty server memory using the options below:
[root@vmlnx ~]# dfm options list | grep -i webUIM
webUIMaxHeapSizeMB       1024
webUIMaxPermGenSizeMB    512
webUIMinHeapSizeMB       256
webUIMinPermGenSizeMB    128
[root@vmlnx ~]#
Regards adai
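For example, to raise the maximum heap (the value here is only an illustration; size it according to the RAM available on the DFM server, and note that the web services will likely need a restart for the change to take effect):
[root@vmlnx ~]# dfm options set webUIMaxHeapSizeMB=2048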
Hi Nitish, Unfortunately there is no export/import function for copying the settings of OnCommand Unified Manager to multiple OCUM servers.
1. Can we create dfm reports through the command line? How do we list all counters available in the GUI?
[root@ ~]# dfm report create help
NAME
    create -- Create a custom report
SYNOPSIS
    dfm report create -R <catalog> -f <field> [ -L <long-name> ] [ -d <description> ] [ -D <display-tab> ] <report-name>
[root@ ~]#
Refer to the man pages and look for dfm report create: launch the Operations Manager Console > Control Center > Help > General Help > Man Pages.
2. Is there a way to replicate the same reports on other dfm servers?
Yes, create the same reports using the CLI. You can get the list of columns in each report you created with dfm report view <reportname-or-id> help, then create the same report with all the columns on another server.
Regards adai
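PS: As a purely illustrative example that just fills in the synopsis above (the catalog, field and report name are placeholders, so pick real ones from the catalogs described in the man page):
[root@ ~]# dfm report create -R <catalog> -f <field> -L "My Custom Report" my-custom-report
[root@ ~]# dfm report view my-custom-report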
Hi, Between 4.0.2D5 and 4.0.2D12 there are no functionality changes, only some bug fixes; also, the Apache and OpenSSL versions are different from D5 to D12. BTW, I would strongly recommend you to upgrade to 5.0.2, which is the current GA release, and take advantage of the 64-bit architecture and new naming conventions. Regards adai
So, dedupe is not the issue, as it's not enabled. What is the protection policy that you are using in your storage service? Regards adai
What is the protection policy that your storage service is using? Are SV primary and SV secondary properly licensed on the source and destination controllers? Is dedupe enabled on the secondary volume, and if so, what type of dedupe is it: scheduled, automated or On-Demand? Only On-Demand dedupe will create SV relationships; the others will create QSM. Regards adai
Hi Rick, Let me restate what I said: Provisioning Manager doesn't allow creation of more than one vfiler per dataset. Also, when a provisioning policy is attached to the primary of a dataset, more than one vfiler can't be added to the primary. But more than one vfiler can be part of the primary of a dataset if a primary provisioning policy is not attached. One vfiler on the primary and one vfiler on the secondary, with the dataset having primary and secondary provisioning policies, is allowed and supported. Regards adai
Hi Juerg, When you delete the dataset, the secondary volumes are automatically deleted. Or, if you remove a secondary volume or qtree from a dataset, it gets deleted after 2 hours (that's when the reaper kicks in). Please find attached a detailed doc on how relationships are deleted.
Hi Keith, Provisioning Manager allows only 1 vfiler per dataset for primary provisioning. So, in order to achieve your requirement, why don't you create two datasets, each with one vfiler? If you are only looking at protecting vfiler primaries that are already provisioned, then more than one vfiler is supported in a dataset (the only caveat is that a primary provisioning policy should not be applied to the dataset). Regards adai
Hi Mathew, Please keep in mind that this is not a supported configuration even if you make it work, as support would not honor it if you open a case with them. Regards adai