Ideally there is no limitation. Starting with Ops-Mgr version 3.8.1 we fixed some performance issues due to groups, but sizing was only validated up to 250 groups in DFM 4.0. Check the sizing guide for more info: http://media.netapp.com/documents/tr-3440.pdf. With 3.8.1 or later versions of DFM, 1000 groups shouldn't be a problem. Regards, adai
Hi fletch, It's a nice post showing how PA can be used to diagnose and nail down performance problems. The only comment I have is that thresholds can also be created from the NMC UI, in two ways: by right-clicking on the tree map in the left-hand side view and choosing Add Threshold (see attached pic one), or by clicking the Action button in the graph/chart, which gives options such as trending and baselining to suggest the best threshold value based on the history or the time range specified. Regards, adai
Yes, we are also getting rid of spaces. For the time being you can turn on the option below (set it to Yes):

[root@lnx ~]# dfm options list pmUseSDUCompatibleSnapshotNames
Option                          Value
------------------------------- ------------------------------
pmUseSDUCompatibleSnapshotNames No
[root@lnx ~]#

Regards, adai
NMC shows the time in the local time zone of the server on which it is running, but snapshot names on the filer carry a GMT timestamp. Regards, adai
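As an illustration of this GMT-versus-local difference (this is my own sketch, not NetApp tooling, and it assumes the snapshot name embeds a `%Y-%m-%d %H:%M:%S` stamp as in the `snap list` examples elsewhere in this thread):

```python
from datetime import datetime, timezone

def snapshot_gmt_to_local(stamp: str) -> datetime:
    """Parse a GMT timestamp taken from a snapshot name and convert it
    to the local time zone of the machine running this script."""
    gmt = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return gmt.astimezone()  # render in the local time zone, as NMC would

# Example: the GMT stamp from a name such as "2010-09-13 02:42:14 daily_..."
local = snapshot_gmt_to_local("2010-09-13 02:42:14")
print(local.isoformat())
```

This makes it easy to confirm that a snapshot you see in NMC and one you see on the filer are in fact the same snapshot, just displayed in different time zones.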
Hi, Can you check whether the DFM installation directory has sufficient space? We suspend monitoring if free space drops below 10%, even though dfm service list may still show the services as running.

[root@lnx ~]# dfm about
Version                            4.0 (4.0)
Serial Number                      1-XX-000001
Administrator Name                 root
Host Name                          lnx
Host IP Address                    10.X.X.X
Host Full Name                     lnx186-118.lab.eng.btc.netapp.in
Operations Manager Node limit      999 (currently managing 10)
Provisioning Manager Node Limit    999 (currently managing 7)
Protection Manager Node Limit      999 (currently managing 6)
Operating System                   Red Hat Enterprise Linux AS release 4 (Nahant Update 5) 2.6.9-55.ELsmp x86_64
CPU Count                          2
System Memory                      3016 MB (load excluding cached memory: 61%)
Installation Directory             /opt/NTAPdfm 101 GB free (69.9%)   <== check if you see an error here

You can also check the same using dfm diag | grep -i management. If that is the case, it is better to set up alarms for management-station events:

[root@lnx ~]# dfm eventtype list | grep -i "dfm.free.space"
management-station:enough-free-space                  Normal  dfm.free.space
management-station:filesystem-filesize-limit-reached  Error   dfm.free.space
management-station:not-enough-free-space              Error   dfm.free.space
[root@lnx ~]#

Regards, adai
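As a rough sketch of that 10% check (the threshold logic here is illustrative and mine, not the exact DFM internals):

```python
import shutil

SUSPEND_THRESHOLD_PCT = 10.0  # DFM suspends monitoring below roughly 10% free

def pct_free(total_bytes: int, free_bytes: int) -> float:
    """Percentage of the filesystem that is still free."""
    return 100.0 * free_bytes / total_bytes

def monitoring_suspended(install_dir: str) -> bool:
    """True if the filesystem holding the DFM install dir is below the threshold."""
    usage = shutil.disk_usage(install_dir)
    return pct_free(usage.total, usage.free) < SUSPEND_THRESHOLD_PCT

# Mirroring the `dfm about` output above: ~101 GB free of ~144.5 GB total.
print(round(pct_free(144_500, 101_000), 1))  # -> 69.9, well above the threshold
```

Checking `monitoring_suspended("/opt/NTAPdfm")` on the server gives the same answer as eyeballing the `dfm about` line.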
Theoretically there is no limit, but ONTAP can only support 128 simultaneous OSSV relationships; anything beyond that is queued up. As per the sizing guide (http://media.netapp.com/documents/tr-3440.pdf), up to 400 OSSV relationships have been tested, but there are customers way beyond that. Also, the default number of OSSV relationships per secondary volume is 50, though this is configurable. Finally, it is not a good idea to fan in more than 128 OSSV relationships to a destination volume: if dedupe is configured on the destination, then since only 128 streams are available, anything beyond 128 has to wait for the earlier transfers to complete, thereby delaying the dedupe job, which runs at the volume level. Regards, adai
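To see why a large fan-in delays the volume-level dedupe job, here is a back-of-the-envelope sketch (the 128 concurrent-stream limit is ONTAP's; the wave arithmetic is my own illustration):

```python
import math

MAX_CONCURRENT_STREAMS = 128  # ONTAP's simultaneous OSSV stream limit

def transfer_waves(relationships: int) -> int:
    """Number of sequential 'waves' of transfers needed before every
    relationship has completed, since relationships beyond the first 128
    queue up behind the running streams."""
    return math.ceil(relationships / MAX_CONCURRENT_STREAMS)

# 128 relationships finish in a single wave; 400 (the tested figure from
# TR-3440) need four waves, and the volume-level dedupe job cannot start
# until the last wave completes.
print(transfer_waves(128), transfer_waves(400))  # -> 1 4
```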
vol status -v <volname> will tell you which volume clones are backed by this flexible volume and their dependencies, from which you can identify the snapshot. Regards, adai
Hi, Below is what the man pages say:

This option determines whether a particular snapshot is allowed to be deleted by autodelete. Setting this option to try permits snapshots which are not locked by data protection utilities (e.g. dump, mirroring, NDMPcopy) and data backing functionalities (e.g. volume and LUN clones) to be deleted. SnapVault snapshots are not locked and thus are not protected from autodelete by the try option.

Setting this option to disrupt permits snapshots which are not locked by data backing functionalities to be deleted, in addition to those which the try option allows to be deleted.

Setting this option to destroy in conjunction with the destroy_list option allows autodelete of snapshots that are locked by data backing functionalities (e.g. LUN clone). Since the values for the commitment option are hierarchical, setting it to destroy will allow destruction of the snapshots which the try and disrupt options allow to be deleted.

Regards, adai
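Since the commitment values are hierarchical, they can be summarized as nested sets. This is just my paraphrase of the man page, not ONTAP code:

```python
# What each autodelete `commitment` level permits deleting, paraphrased from
# the man page quoted above. The levels are hierarchical: each one includes
# everything the previous level allows.
DELETABLE_BY_COMMITMENT = {
    "try":     {"unlocked"},                             # incl. SnapVault snapshots
    "disrupt": {"unlocked", "data-protection-locked"},   # dump, mirroring, NDMPcopy
    "destroy": {"unlocked", "data-protection-locked",
                "data-backing-locked"},                  # LUN/vol clones; needs destroy_list
}

for level in ("try", "disrupt", "destroy"):
    print(level, sorted(DELETABLE_BY_COMMITMENT[level]))
```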
Hi Emanuel, When you have a SnapVault relationship, only the destination snapshot is busy, like the one below:

madaan-vsim3> snap list sv_ds_backup
Volume sv_ds_backup
working...
  %/used       %/total  date          name
----------  ----------  ------------  --------
 0% ( 0%)    0% ( 0%)  Sep 13 08:47  madaan-vsim3(987640-32-0)_sv_ds_backup-base.5 (busy,snapvault)
 1% ( 1%)    0% ( 0%)  Sep 13 08:25  2010-09-13 02:42:14 daily_madaan-vsim3_sv_ds_backup.-.sv_ds_madaan-vsim1_sv_ds.test_qt1
madaan-vsim3>

But not on the source. The source snapshots will look like this, but they are not busy:

madaan-vsim1*> snap list sv_ds_1
Volume sv_ds_1
working...
  %/used       %/total  date          name
----------  ----------  ------------  --------
34% (34%)    0% ( 0%)  Sep 13 09:23  dfpm_base(sv_ds.2160)conn1.0 (snapvault,acs)
48% (29%)    0% ( 0%)  Sep 13 09:23  2010-09-13 03:40:06 daily_madaan-vsim1_sv_ds_1.-.second
57% (29%)    0% ( 0%)  Sep 13 09:13  2010-09-13 03:30:06 hourly_madaan-vsim1_sv_ds_1.-.second
madaan-vsim1*>

Regards, adai
QSM/SV-based replication will allow you to replicate from any ONTAP version to any other. But VSM will only allow replication from a lower to a higher ONTAP version, i.e. 7.1 -> 7.2 and not vice versa. Within a release family, however, it is supported, e.g. 7.2.7 -> 7.2. Regards, adai
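A hedged sketch of the VSM rule as stated above (the version parsing and family check are my own simplification; always confirm against the official compatibility matrix):

```python
def parse_ver(v: str) -> tuple:
    """Turn '7.2.7' into (7, 2, 7) for ordered comparison."""
    return tuple(int(x) for x in v.split("."))

def vsm_allowed(src: str, dst: str) -> bool:
    """VSM: the source ONTAP version must not be newer than the destination,
    except that replication within the same release family (same first two
    version components) is supported, e.g. 7.2.7 -> 7.2."""
    s, d = parse_ver(src), parse_ver(dst)
    if s[:2] == d[:2]:      # same release family
        return True
    return s <= d           # otherwise, lower to higher only

print(vsm_allowed("7.1", "7.2"), vsm_allowed("7.2", "7.1"), vsm_allowed("7.2.7", "7.2"))
# -> True False True
```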
When Provisioning Manager sets up autodelete, it applies the following settings:

Information: Change autodelete setting on volume VolToBeProvision:thinlyprov_lun (32): [ commitment=destroy ], [ defer_delete=prefix ], [ delete_order=oldest_first ], [ destroy_list=lun_clone,vol_clone,cifs_share ], [ prefix=dfpm ], [ state=on ], [ target_free_space=5 ], [ trigger=volume ]

Similarly, you can set defer_delete=prefix and prefix=dfpm for your snapshot autodelete. Regards, adai
Hi Emanuel, For QSM and VSM to use a specific interface, set the following options:

[root@lnx ~]# dfm host set
Valid options are
hostLogin           Login
hostPassword        Password
hostLoginProtocol   Login Protocol
hostPreferredAddr1  Preferred IP address 1
hostPreferredAddr2  Preferred IP address 2

Set this for the source filer, and similarly for the destination filer:

dfm host set <filername-or-id> hostPreferredAddr1=x.x.x.x

For SV relationships, use the filer option ndmpd.preferred_interface; change it from disable to an interface name like e0b or e0c. Below is the FAQ on the same: https://now.netapp.com/NOW/knowledge/docs/DFM_win/rel40/html/faq/index.shtml#_15.9 Even though this talks about Backup Manager, it is applicable to PM-created SV relationships too. Regards, adai
Hi Emanuel, PM is not intelligent enough to know about volumes that are moved manually outside of it. For moving a volume from the primary we have two ways, both of which are vFiler migrate: online (Data Motion) and offline migration. For moving SV/QSM/VSM destination volumes we have Secondary Space Management, which will move the volume from one aggregate to another within the same filer or to a different filer. Yes, you will need to create a resource pool out of the aggregate to which you are planning to migrate, and add it to the respective nodes of the dataset. If you move the volume outside PM, you will have to remove the relationships from the dataset, do the manual process, and then import them as external relationships. By doing this you lose the old backup versions, and you will have to manually take care of any downstream relationships. But with vFiler migrate or SSM, PM will take care of all this. Regards, adai
Hi Brian, I think you are hitting this. Can you check whether you have write permission on the reports archive directory? You can modify the location of the destination directory by using the following CLI command:

dfm options set reportsArchiveDir=<destination dir>

When you modify the Report Archival Directory location, DataFabric Manager checks whether the directory is writable so it can archive the reports. On a Windows operating system, if the directory exists on the network, the location must be a UNC path. In addition, the scheduler and server services must run with an account that has write permissions on the directory. To run as a user account that has write permissions on the Report Archival Directory, configure the scheduler service using the Windows Service Configuration Manager. Note: You require the Database Write capability on the Global group to modify the Report Archival Directory option. Regards, adai
Compared to earlier releases we are better. BTW, there is no limitation from the PM side; the concern is more about the number of datasets, and even that affects the performance of the DFM server rather than the relationships themselves. Regards, adai
For a SnapMirror destination volume, yes, it will be a replica of the source. But for an SV/QSM relationship, the destination volume will have the following settings: the volume guarantee is none, snap sched is disabled, and fractional reserve is 0. Regards, adai
Hi Reide, The info in the IMT says so; below is the link to the same. http://now.netapp.com/matrix/configuration/showDetailsPage.do?configVersionId=52509&activateNotesTab=true I am also pasting the contents for your convenience: “DataFabric Manager Server 4.0 and above supports VMware VMotion and VMware High Availability features for - VMware Infrastructure 3 version 3.5 - VMware vSphere 4” All rules of vMotion and VMware HA apply. Regards, adai
Hi, Can you check the value of the following option on the filer where it tried to create the NFS exports: nfs.export.auto-update. Also, can you get us details like dfpm dataset list -x for the error you mentioned? Regards, adai
Operations Manager does not use the RLM cards on the filers to do monitoring or management. It just gives you a way to run commands via the RLM card from Ops-Mgr itself, and it also monitors the RLM and its status. No functionality of OM would be missing if the filer doesn't have an RLM card. Regards, adai
Hi Babar, I am just curious what makes you go for crontab when the same functionality can be achieved better in OM. In the report schedule itself you can specify the format of the report, like Perl, XLS, etc. In fact, the reports generated by scheduled reports are stored in the report archive directory. If you would like to make them available in a web location, you can change the report archive directory location using the dfm option for the same. Regards, adai
You can't pass the snapshot name to the CLI; you will have to use the SDK API itself so that you can pass the snapshot name. Regards, adai