Your -s 3600 here means sample rate. The sample rate (in seconds) is the interval used to consolidate the output data: the available data is split into regions as specified by the sample rate, and the last sample in each of those regions is displayed. It is also used for window calculation for metrics. Use -s 60 and you should be able to achieve what you want. Regards adai
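A minimal sketch of that consolidation rule (the timestamps, values, and the awk bucketing below are illustrations of the described behavior, not the actual dfm implementation):

```shell
# Hypothetical samples as "epoch value" pairs collected every 60s.
# Consolidate to a 180-second sample rate by keeping only the
# last sample in each 180s region, as -s is described to do.
printf '%s\n' \
  '100 1.0' '160 2.0' '220 3.0' \
  '280 4.0' '340 5.0' '400 6.0' |
awk -v rate=180 '
  { region = int($1 / rate) }            # which region this sample falls in
  region != prev && NR > 1 { print last } # region changed: emit last sample of previous region
  { prev = region; last = $0 }
  END { print last }                      # emit last sample of final region
'
```

With a 180s rate, the six samples collapse to one line per region: the samples at 160, 340 and 400 survive.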
Hi Andreas, "So there is no other central scheduling/monitoring/management tool for all SnapManager products? What about the OnCommand Suite?" What do you mean by OnCommand Suite? "Protection Manager will mainly be used to manage SnapVault connections to the secondary storage, because the primary is a MetroCluster. So is this not possible with SMHV?" Protection Manager/OnCommand Unified Manager integrates with all SnapManager products except SMVI/VSC and SMHV. For SMVI/VSC there is something called the host package. Regards adai
Hi Thomas, The first update of a mirror connection after the source has shrunk will generate a false error message. In situations where Dynamic Secondary Sizing is enabled for a mirror relationship and the source volume has shrunk, the next update will generate a secondary resizing error message that can be ignored. The update job might show an error for the resize, but the mirror job will complete fine. The error message is generated because Data ONTAP issues an error stating that secondary resizing cannot happen, as the requested size is smaller than the active file system. This error message can be ignored if it is seen only once, on the first update after the source volume has shrunk. The displayed message reads as follows: "The new volume size would be less than that of the replica file system." This issue fixes itself and no backups are lost, so no workaround is necessary. Error Message: myDfmStation: Could not resize secondary volume myFiler:/myVolumeName (10565) to 10.0 GB. I understand your concern, and I think this should be made a warning rather than an error. BTW I am not going to give you further responses until you open a case for the upgrade issue you raised today. Regards adai
Hi Matthew, When you said many, I mistook it for 100+. 11 is not at all a big number for DFM. In fact, I would suggest you run the purge tool a couple of days after you upgrade to 5.1, for the following reason: starting with 5.0 we turned the jobsPurgeOlderThan option from Off to 90 days, which purges any protection jobs older than 90 days. In order to reclaim/de-fragment the data, a db unload and reload is required, and the purge utility requires the same. So running the purge tool a couple of days after the 5.1 upgrade will give you maximum benefit in minimum downtime. Regards adai
Hi Christophe, Why do you want to restore only the DB and not the perf data? Don't you need your Performance Advisor data? Also, disabling Performance Advisor will only stop further data collection; it will not delete or purge already collected performance data. Regards adai
Hi Christop, Unfortunately this is not possible in the current OCUM 5.x, but it's possible to write a simple script plugin to report and monitor on it. BTW, why do you want to monitor this? What do you plan to do based on this event? I am trying to understand what you are trying to accomplish, given that aggregate scrubbing is an internal ONTAP process. Regards adai
Hi Thomas, OCUM 5.1 introduced what is called Dynamic Secondary Sizing for mirrors. This is not something to worry about and is expected. Take a look at this GSS video for more info: 7 Mode Protection Enhancements. It looks like you successfully upgraded to 5.1; did the workaround help you? Regards adai
Hi Mathew,

"Currently the customer's DFM server is running extremely slowly on Windows 32-bit with only 4 GB of RAM. Many SnapManager products are in use, including SME, SMSQL and SMO."

Can you tell us how many SnapManagers? The SnapManager datasets are a little heavier than normal datasets and put load on the DFM server.

"1) migrate the DFM server to Windows 64-bit with much more memory (I'm thinking 16 GB)."

Given that you already have quite a few datasets, please increase the memory further, to 24 GB or so. Compared to 4.x, which was a 32-bit application, 5.x is a 64-bit application and will give better performance as it scales with the hardware.

"2) upgrade DFM from 4.0.2 to 5.1."

Given that you have a lot of SnapManagers, I am sure you have quite a lot of mark-deleted objects. I recommend you run the dfm purge tool and then upgrade, or upgrade and then run the dfm purge tool. This way you will be able to do both in one maintenance window, as the purge tool requires downtime. ToolChest link: Utility Toolchest. KB article link: link. Video link: DFM Purge Tool: How to Video.

"I think the steps I'll need to follow are: 1) back up the existing 4.0.2 DB. 2) stand up the new server (newdfm.local) 3) remove the old server (olddfm.local) from the domain 4) join the new server to the domain as olddfm.local 5) install 5.1 and restore/recover the DB from the backup I made in step"

I would probably do it this way to reduce the impact or downtime to backups/schedules:

1) Set up the new server (with a temp domain name)
2) Install 5.1 in 7-Mode
3) Back up the db using dfm backup create on the existing 4.0.2 server
4) Remove the 4.0.2 DFM server from the domain
5) Change the new server's DNS name to olddfm.local
6) Restore the backup taken in step 3
7) Run the purge tool

BTW, if you have Performance Advisor enabled you may see some sluggishness initially after the upgrade to 5.1, as we purge all the stale perf data due to clones created by SnapManager; within a week or two you will see a marked performance improvement.

"I'm particularly worried about losing the existing backups/datasets. Is this the right approach?"

Since you are doing a restore of the database, you will not lose any backups or datasets. What you are doing is the correct approach. Regards adai
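The migration sequence above can be sketched roughly as below. This is a sketch only: the backup file name is a placeholder, the domain/DNS steps happen outside dfm, and the exact `dfm backup restore` syntax should be verified against the CLI help for your DFM version.

```shell
# 3) On the existing 4.0.2 server: create a db backup
dfm backup create

# 4) Remove the 4.0.2 DFM server from the domain, and
# 5) change the new server's DNS name to olddfm.local
#    (both done with OS/AD tools, not dfm).

# 6) On the new 5.1 server: restore the backup taken in step 3
#    (file name is a placeholder)
dfm backup restore dfm_backup_from_402.ndb

# 7) Run the purge tool from the ToolChest to remove
#    mark-deleted objects; this requires downtime.
```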
Hi TC, I don't know what is not clear. Can you pls read bug 206891? DFM monitors SnapMirror-related things once every 30 mins, so it can only report on the lag times of a relationship. In order for DFM to generate these events, either DFM should get traps, or the action itself should be initiated from DFM so that it can generate the events. Hope this helps and clarifies.

1) External SnapMirror relation, abort initialize by CLI: will DFM receive an event (warning event)? No, because DFM doesn't get a trap and it only monitors once every 30 mins.
2) Import the SnapMirror relation into a dataset, but still abort it by CLI: will DFM receive an event? No, same as above.
3) Import the SnapMirror relation into a dataset, operation done by PM: will DFM receive an event (any level)? Yes. Since the operation is initiated from DFM/PM, it will receive the response and trigger the event.

Regards adai
You can use Protection Manager for SME/SMSQL/SMO and OSSV, but not for SMHV. In all these cases except OSSV, though, the schedules are managed by the individual SnapManagers and not by Protection Manager. Regards adai
The reason we don't generate the events you are expecting is that these relationships are external. We generate all the events only when dfm manages these relationships, not when it only monitors them. Yes, you will have to import them into a dataset in order for dfm to manage them. Regards adai
Hi TC, That's exactly what I said: if you do these from the filer CLI, and not with the dfdrm or dfpm CLI, you will not receive any SnapMirror events other than the lag error and warning events. Regards adai
Hi Rainer, The event basically means that a relationship which was already known to dfm, and which later got broken or deleted, got rediscovered. To generate this event you can do the following:

1) Create a snapmirror relationship (using either ProtMgr or BCO).
2) Delete the relationship using "dfdrm mirror break" and "dfdrm mirror delete".
3) Re-establish the relationship from the ONTAP CLI using "snapmirror initialize".
4) Re-discover the relationship by refreshing the source and destination host monitors.
5) Now run "dfm report view events-history <dest-obj-name>", where "dest-obj-name" is the destination volume or qtree name.

If you are not doing steps 2 or 3 and are still encountering this event, it could be due to unreliable monitoring caused by network issues or SNMP timeouts between dfm and your controllers. Regards adai
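The steps above could look roughly like this from a shell. The relationship, host, and volume names are placeholders, and "dfm host discover" is assumed here as one way to refresh the host monitors; only the quoted commands from the steps are taken from the post itself.

```shell
# 1) Create the relationship via Protection Manager or BCO (GUI).

# 2) Break and delete the relationship from dfm's point of view
#    ("mirror-name" is a placeholder for the relationship name)
dfdrm mirror break  mirror-name
dfdrm mirror delete mirror-name

# 3) Re-establish it from the ONTAP CLI on the destination filer
snapmirror initialize -S srcfiler:srcvol dstfiler:dstvol

# 4) Refresh the source and destination host monitors
dfm host discover srcfiler
dfm host discover dstfiler

# 5) Look for the rediscovered-relationship event on the
#    destination volume
dfm report view events-history dstfiler:/dstvol
```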
Hi Mark, That's right: in OCUM 5.2, which is currently in BETA (OnCommand Unified Manager 5.2 Beta Program), it's only done during an upgrade to, or a restore in, 5.2. Yes, doing it online should be possible, but I'm not sure which release that is targeted for. Regards adai
Hi Andreas, SnapCreator should help you with all this by integrating with Protection Manager. PM will do all the monitoring and run the backups; SnapCreator will do all the scheduling and act as a single pane of glass for all your apps. Regards adai
Hi TC, When you say you initialized, aborted, etc., where did you do it from? Was it from Protection Manager or Disaster Recovery Manager? If it's done from PM/DRM you will see all these events; otherwise you will only see the two events below.

snapmirror:date-ok Normal sm.lag
snapmirror:nearly-out-of-date Warning sm.lag

The same is the case when you do it from FilerView/System Manager/the filer CLI. You can also check the following thread on the same topic: https://communities.netapp.com/message/93280#93280 Regards adai
Hi Kuber, The group status is based on the worst status of any object associated with the group. In your case there should be some event with severity Error associated with one of the 2 filers, but that event may have been deleted, which may be why you see nothing on the events page. Though the event has been deleted, the condition that triggered it hasn't changed, so the status of the object remains the same; that is why you are seeing what you described. Check the events-history or events-deleted report to see if there are events associated with those 2 controllers whose condition still holds. BTW, the group status is a visualization to tell you that something belonging to the group is being affected. If you are already attending to individual events, you shouldn't care much about the group status. In fact, it's practically impossible to have the group status green unless your group is empty. Regards adai
Hi Francois, As clearly stated in the KB article, this tool only deletes mark-deleted objects. The events pruning that you are looking for is coming in OCUM 5.2, which is currently in BETA. In 5.2 we purge the mark-deleted objects as well as events. This happens both during an upgrade to 5.2 and every time you restore your db.
We have a purge tool today that takes care of cleaning all this up: dfmpurge, which removes all these stale instances but requires downtime. The utility has 2 modes and also gives an estimate of the downtime required. In most cases it shouldn't take more than 30 mins to clean up. Pls take a look at this video (3.43 mins) and read the KB on how this tool works. Video Link: DFM Purge Tool: How to Video KB link: https://kb.netapp.com/support/index?page=content&id=1014077 Link to tool chest: http://support.netapp.com/NOW/download/tools/dfmpurge/ Regards adai
Hi Brano, The product doesn't do this on a periodic basis even in version 5.1. But we have made quite a bit of improvement in OCUM 5.2: during the upgrade process we purge all these stale instances and keep the embedded db in a clean state. For versions up to 5.1 we have a utility called dfmpurge which removes all these stale instances, but it requires downtime. The utility has 2 modes and also gives an estimate of the downtime required. In most cases it shouldn't take more than 30 mins to clean up. Pls take a look at this video (3.43 mins) and read the KB on how this tool works. Video Link: DFM Purge Tool: How to Video KB link: https://kb.netapp.com/support/index?page=content&id=1014077 Link to tool chest: http://support.netapp.com/NOW/download/tools/dfmpurge/ Regards adai