Hi Marcinal, Since you want to clean up the db, and assuming you can afford downtime: stop all DFM services, start only the sql service, and then run dfm event delete -f in a loop from the CLI. Regards adai
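A rough sketch of that delete loop, assuming event IDs are contiguous integers and that dfm event delete -f accepts a list of IDs (verify both against your DFM version; the ID range and batch size below are placeholders). With DRY_RUN=1 it only prints the commands:

```shell
# Batched event-delete loop (sketch, not a supported procedure).
# DRY_RUN=1 only echoes the commands; flip to 0 on the DFM host once all
# services are stopped and only the sql service is running.
DRY_RUN=${DRY_RUN:-1}
START=1; END=100; BATCH=25        # placeholders: adjust to your event-ID range
i=$START
while [ "$i" -le "$END" ]; do
  j=$((i + BATCH - 1))
  [ "$j" -gt "$END" ] && j=$END
  ids=$(seq "$i" "$j" | tr '\n' ' ')
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "dfm event delete -f $ids"
  else
    dfm event delete -f $ids
  fi
  i=$((j + 1))
done
```

Batching keeps each CLI invocation short instead of passing millions of IDs at once.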
Hi Richard, There are two things on which thresholds are not supported in PA: 1) Unmanaged objects, like processors and flash cards (basically those that are available in PA but not in Ops-Mgr). 2) Label counters, which have the format appliance-name:object-name:instance-name:counter-name:label1-name:label2-name. Regards adai
Hi Jan, There are a lot of reports that give the size/capacity of the different objects monitored by DFM. For cache hit rate, refer to the following KB: https://kb.netapp.com/support/index?page=content&id=1012673 Regards adai
Hi Todd, The other possible reason is that growing (resizing) the volume would push the containing aggregate past its overcommitment threshold, which would prevent the volume from resizing. Adding some extra logging would tell exactly why it is creating a new volume. Also, what is the max fan-in ratio? Would a WebEx be possible to find the root cause? Regards adai
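A hypothetical sketch of that overcommitment check (not DFM's actual logic; the 100% threshold and the sizes are purely illustrative):

```shell
# A resize is blocked when it would push the aggregate's committed space
# past the overcommitment threshold, so PM provisions a new volume instead.
aggr_size=1000; committed=900; threshold=100   # GB / percent, illustrative
resize_ok() {  # $1 = GB to grow the volume by
  new_pct=$(( (committed + $1) * 100 / aggr_size ))
  [ "$new_pct" -le "$threshold" ]
}
resize_ok 50  && echo "50 GB grow: allowed (95% committed)"
resize_ok 150 || echo "150 GB grow: blocked (105% committed)"
```

In the blocked case, extra logging on the PM side would be what confirms this is the reason a new volume gets created.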
Hi Marcinal, The dfm database query run command works on read-only views, so you won't be able to delete anything through it. Also, dfm host delete -f does not cascade the delete to the events table, so the events will still be there. As advised by others, you can raise a case and get this cleaned up. BTW, how many millions of events do you have? Is there any specific reason why you want to delete the events? Regards adai
Hi Reuvy, As I said earlier, only the views documented under the following link on your DFM server are exposed, and we don't have a view for domains. http://<dfm server name/IP>:8080/help/dfm.htm#%3E%3Ecmd=1%3E%3Epan=2 Database schema Database schema for DataFabric Manager non-historic data Database schema for DataFabric Manager historic data Relationship among fields of various database views Regards adai
Hi Cecil, The backup directory location can be on an NFS share. BTW, -L is a hidden switch that skips validation of the database. A normal db backup does verification and validation, which fails in your case; the fact that the backup succeeds only when validation is skipped means there is some validation problem in the db. I strongly recommend you open a case with NetApp Support to get your db fixed. Regards adai
Hi Reuvy, I tried the same and logged into the DFM box (running on Windows, as Administrator):

C:\>dfm database user list
There are no database users or the specified database user does not exist.

C:\>dfm database query run "SELECT * FROM objectView"
Error: Database access denied. Enable the database access to one of the database users using 'dfm database access enable' CLI.

C:\>dfm database user create -u db_user -p dbuser123
Created database user 'db_user'.

C:\>dfm database access enable -u db_user
Enabled database access for user 'db_user'

C:\>dfm database query run "SELECT * FROM objectView" | more
"objId","objName","objFullName","objDescription","objType","objStatus","objPerfStatus"
"1","vmwin186-206","vmwin186-206","","Mgmt Station","Error","Unknown"
"2","GlobalRead","GlobalRead","View information in DataFabric Manager","Role","Unknown","Unknown"
"3","GlobalQuota","GlobalQuota","View user quota reports and events","Role","Unknown","Unknown"
"4","GlobalWrite","GlobalWrite","View and modify information in DataFabric Manager","Role","Unknown","Unknown"
"5","GlobalDelete","GlobalDelete","View, modify and delete information in DataFabric Manager","Role","Unknown","Unknown"
"6","GlobalBackup","GlobalBackup","Create and manage backups","Role","Unknown","Unknown"
"7","GlobalRestore","GlobalRestore","Perform restore operations from backups","Role","Unknown","Unknown"
"8","GlobalMirror","GlobalMirror","Manage replication and failover policies","Role","Unknown","Unknown"
"9","GlobalSAN","GlobalSAN","Create, expand and destroy LUNs","Role","Unknown","Unknown"
"10","GlobalSRM","GlobalSRM","View SRM path walk information","Role","Unknown","Unknown"

As you can clearly see, until I create a db user I am not able to access the views. Regards adai
If your dataset is running any protection policy other than Mirror/DR Mirror, you will face this. I suspect your dataset is running a Backup protection policy. Regards adai
Hi Reuvy, You used an unsupported CLI to query the db directly, which is not supposed to be used. The user creation was for accessing the exposed read-only views, whereas you accessed the db directly. That can be done by any user who belongs to the local Administrators group of that Windows box, since by virtue of being in the admin group they get the DFM global full-control capability. Please refrain from using that CLI; customers are not supposed to query the db directly except when asked to by NetApp Support. Hope this helps. Regards adai
Hi Reuvy, We don't have a view for domains. The database views that are exposed are documented, along with their schema, under the following location. http://<dfm server name/IP>:8080/help/dfm.htm#%3E%3Ecmd=1%3E%3Epan=2 Database schema Database schema for DataFabric Manager non-historic data Database schema for DataFabric Manager historic data Relationship among fields of various database views Regards adai
Hi Yannick, As I said earlier, the latest version of Snap Creator lets you register an external snapshot not taken by SC as a backup version in PM. This way you can register the SMHV-created snapshot into PM using SC. Isn't that what you are looking for? Regards adai
Hi Adam, Once you rename your volumes, snapvault modify needs to be run to notify the SnapVault relationship that the volume names have changed. That is the reason you get the mentioned NDMP error. BTW, how would you like to name your snapvault destination volumes? Regards adai
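The fix above, sketched for a 7-Mode secondary; all filer, volume and qtree names below are placeholders for illustration:

```shell
# After renaming the primary volume, update the relationship from the
# secondary so SnapVault (and the NDMP-based backup) sees the new path.
PRIMARY="primary-filer"      # placeholder
NEW_VOL="vol_new"            # placeholder: the volume's new name
QTREE="qtree1"               # placeholder
SEC_PATH="/vol/sv_backup/qtree1"   # placeholder: secondary qtree path
cmd="snapvault modify -S ${PRIMARY}:/vol/${NEW_VOL}/${QTREE} ${SEC_PATH}"
echo "$cmd"   # run this command on the secondary controller's console
```

One snapvault modify per renamed qtree relationship is needed, after which the NDMP error should clear on the next update.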
Can you help us with the sizes of your primary, backup and mirror volumes? Also, is dedupe enabled on the SnapVault destination? Regards adai
Hi Todd, Dynamic Secondary Sizing (DSS) / fan-in happens only when the following conditions are satisfied:
1. maxRelsPerSecondaryVolume is not exceeded (50 by default).
2. Platform dedupe limit: if the volume is dedupe enabled, the secondary volume, even after being resized for multiple source volumes, must still be within the limit.
3. Volume language: if the source volume languages are different, they cannot be fanned in to the same destination volume, as that causes problems during restore.
DSS is calculated as follows. PM uses DSS (disabled when upgraded from 3.7; option name: dpDynamicSecondarySizing):
Projected Secondary Size = 1.1 * max[(2.0 * primary current used size), (1.2 * primary volume total size)]
1.1 is a fixed value; the 2.0 and 1.2 factors are each set by an option. If dpMaxFanInRatio is > 1, the primary volume sizes are replaced by the sum of all volumes fanning into the secondary volume.
Rule of thumb: volume used < 60%, then 1.32x source volume total size; volume used > 60%, then 2.2x source volume used size.
Hope this helps. Regards adai
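The formula above can be checked with a quick sketch (sizes in GB; 2.0 and 1.2 are the option defaults; awk is used only for the floating-point math):

```shell
# Projected Secondary Size = 1.1 * max(2.0 * used, 1.2 * total)
# With fan-in > 1, pass the summed used/total sizes of all source volumes.
dss() {  # $1 = used size, $2 = total size
  awk -v u="$1" -v t="$2" \
    'BEGIN { a = 2.0*u; b = 1.2*t; printf "%.1f", 1.1 * ((a > b) ? a : b) }'
}
dss 50 100; echo   # 132.0 -> matches the <60%-used rule of thumb (1.32x total)
dss 70 100; echo   # 154.0 -> matches the >60%-used rule of thumb (2.2x used)
```

The two sample volumes show why the rule of thumb has a 60% breakpoint: below it the 1.2x-total term dominates, above it the 2.0x-used term does.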
My suggestion is to open a case with NetApp Global Support for this issue, as it will need some data collection and correlation. Regards adai
Hi Chris, As Earls said, it's simple. But make sure none of these filers are in a protection relationship managed through Protection Manager/BCO, as you may otherwise find conformance checks/jobs failing because of this. Regards adai
The easiest way is to get Snap Creator and write a plugin for Hyper-V; one might well exist already. Or you can use Snap Creator to register external snapshots (in this case, the ones taken by SMHV). Regards adai
Hi Shep, Here are the steps to clean up a redundant relationship.
1. Make sure your dpReaperCleanupMode is set to ORPHANS or NEVER.
2. A relationship is marked redundant only if, for a given source qtree, you have more than one destination and all of them are managed in the same dataset. Use the dfpm relationship list -r command as well as dfpm dataset list -m <dataset name>, and you should see the two relationships with the same source names. Note that only one may be marked as redundant, since it is redundant in terms of the other.
3. For the redundant relationship, do a snapvault stop on the storage system to stop the relationship that is marked as redundant.
4. After about 2 hours you should see your dataset go back to a conformant state.
Regards adai
Is the controller being changed the source or the destination? If it's the destination, use secondary storage migration: NMC > Hosts > Aggregates > Manage Space > Migrate Volume. If it's the primary, let me give you a process. Regards adai
Hi Yannick, Nothing has changed. Either you will have to use one of the SnapManagers to do what you want, or use Snap Creator, which can register any snapshot with PM. All in all, a normal dataset takes its own snapshot and does the update, whereas with an application dataset a named snapshot registered with PM is used to update the relationship. Regards adai
Hi Scott, Just to make sure I understood what you are describing, so that we are both on the same page: you are actually trying to migrate the volume using NMC > Hosts > Aggregates > Manage Space > Migrate? If so, you will have to make sure the following conditions are satisfied:
1. No client-facing protocols like NFS/CIFS/iSCSI/FCP.
2. No unmanaged relationships (meaning relationships outside the dataset).
3. No child clones.
4. Not the root volume of a filer/vfiler.
You can use the same old magic CLI, dfm host discover, to discover all the changes and update the DFM db. Once we find that the above four conditions are satisfied, the volume is ready for migration. Regards adai
Hi Richard, AFAIK, these are the things that trigger client stats; I don't know of anything else that triggers them. I suggest you work with support to nail down the issue. Regards adai