Hi, I am trying to use two commands (namely "Refresh Monitor on Array" and "Wait for Monitor Refresh") and running them against an OnCommand 5.0 box. I have set up the OnCommand box as a datasource and that is working correctly. I have also added it under the Credentials section, where the Test button reports success. However, when I run the above commands I receive:

Error: DFM Database Write permission denied on <<filer_name>>

The account has GlobalFullControl. The only thing I can think of is that when I log onto the DFM terminal I must run DFM commands with the keyword sudo. I do not know the root password, as that is part of the security framework this server must abide by to be part of our environment. Could this be the problem, or am I missing something? Cheers, Braiden
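For reference, this is roughly how I confirmed the account holds GlobalFullControl (DFM CLI syntax as I understand it; the account name is a placeholder, and note I have to prefix everything with sudo):

    sudo dfm user role list <account>
    sudo dfm role list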
I have been using version 1.14 of the API with Data ONTAP 7.3+ for a while. Now we are starting to upgrade to ONTAP 8, and I am finding some of our API commands no longer work. Is there any way around this? Or is there any plan to increase the API version that ONTAP 8 will support? Thanks
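In case it helps anyone else, this is the sort of request I have been using to ask a controller which ONTAPI version it supports before sending anything else (hostname and credentials are placeholders; the servlet URL is the standard 7-mode ZAPI endpoint as far as I know):

    curl -s -k -u root:password -H "Content-Type: text/xml" \
      -d '<?xml version="1.0"?><netapp version="1.1" xmlns="http://www.netapp.com/filer/admin"><system-get-ontapi-version/></netapp>' \
      https://filer.example.com/servlet/netapp.servlets.admin.XMLrequest_filer

The reply should contain major-version and minor-version elements, which tells you whether 1.14 calls are safe against that controller.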
Solved this problem by trial and error with some of the options: setting ndmpDataUseAllInterfaces to 1 allowed DFM to use/try more than just the default data interface. It now works correctly.
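For anyone else hitting this, the option is set on the DFM/OnCommand server itself (dfm CLI syntax as I understand it):

    dfm option set ndmpDataUseAllInterfaces=1
    dfm option list ndmpDataUseAllInterfaces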
We still haven't got to the bottom of this with support. I have run up a fresh installation and hit the same problem. It is interesting that OnCommand recognises what interface it should use and then ignores it!

dfpm job detail 30

*data omitted*

Event Id: 415
Event Status: normal
Event Type: job-progress
Job Id: 30
Timestamp: 28 Sep 2011 09:32:43
Message: Retrieved preferred interfaces (10.77.10.28)
Error Message:

*data omitted*

Event Id: 420
Event Status: normal
Event Type: rel-create-progress
Job Id: 30
Timestamp: 28 Sep 2011 09:32:44
Message: Creating backup relationship with vz4aixsas09:/vz4aixsas09_unstr_test01/test01 via the default interface
Error Message:
Source Volume or Qtree Id: 3157
Source Volume or Qtree Name: vz4aixsas09:/vz4aixsas09_unstr_test01/test01
Destination Volume or Qtree Id: 0
Destination Volume or Qtree Name: bbsq01a:/testb_06/vz4aixsas09_unstr_test01_test01
Bytes Transferred: 0
Hi all, We have recently upgraded from Protection Manager 4.0.1 to OnCommand 5.0. Some notes:

- All of our filers have a management port and a backup interface.
- All the filers have the option ndmp.preferred_interface set to the backup interface.
- All of our filers have hostPreferredAddr1 set to the backup interface in OnCommand.
- Most of the relationships that were created by 4.0.1 look like this: host.domain:/vol/volumename/qtree -> nearstore.domain:/vol/volumename_backup/qtree
- host.domain resolves to the management address, NOT the backup interface.

This all worked in 4.0.1, as it just used ndmp.preferred_interface to run the backup. Now that we have upgraded to OnCommand 5.0, it seems to be trying to update the relationship as-is, i.e. run the backup via the management port, which is blocked. How do we force OnCommand 5.0 to use the backup interface? Thanks, Braiden
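For completeness, this is how those two settings were applied (7-mode option syntax, plus dfm host syntax as I understand it; the interface name and addresses are placeholders):

    filer> options ndmp.preferred_interface e0b
    dfm host set <filer-name> hostPreferredAddr1=<backup-interface-ip>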
FAS6080. We have 42 ESX hosts and approx. 1500 VMs. I think the filer is overloaded, yes, but I believe it is because of the number of partial writes that are occurring. If all VMs (especially the highest-IO ones) were aligned, I would think the filer could handle the given workload quite comfortably. We are utilising NFS datastores. I have engaged NetApp support, but I am asking here to try and get some information from people that may have experienced VM alignment problems before.

Another note: I have written a script to poll the filer every 15 minutes and get the pw.over_limit stat from wafl_susp -w (a rough sketch of it is below). I have found at times this number grows by 3,000 counts/s. See attached graph over_limit. These large spikes correspond to when we see massive latency jumps on our filers (4am every day). We are still trying to work out what happens at this time to cause this massive IO spike (and subsequent latency spike), but I still believe the root cause is unaligned VMs. Any comments appreciated.
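The polling script is nothing clever; it is roughly the following (filer name, SSH user and log path are placeholders, and note wafl_susp is an advanced-privilege command, hence the priv set):

    #!/bin/sh
    # Poll pw.over_limit from wafl_susp -w every 15 minutes and log it with a timestamp.
    FILER=filer01
    while true; do
        TS=$(date '+%Y-%m-%d %H:%M:%S')
        VAL=$(ssh root@$FILER 'priv set -q advanced; wafl_susp -w' | grep 'pw.over_limit' | awk '{print $NF}')
        echo "$TS $VAL" >> /var/log/${FILER}_pw_over_limit.log
        sleep 900    # 15 minutes
    done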
Hi all, We are experiencing performance problems in our environment, and it points to partial writes. We are seeing back-to-back CPs that cause latency spikes above 500ms across all volumes on a filer. We have contacted NetApp support and they have said yes, it is partial writes, and it is probably caused by ESX. The filer is almost dedicated to ESX, so it has to be ESX; we know all our VMs are unaligned, but short of aligning 1000s of VMs we want to target the few that are causing the most havoc. How can we accurately narrow it down to the VM level, i.e. which ones are causing us the most pain? Cheers.
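The only per-VM approach I have found so far is scanning individual VMDKs from the ESX host with mbrscan, which ships with the NetApp ESX Host Utilities (the install path and datastore path below are placeholders from memory, so treat this as a sketch):

    # Check whether a given VM's disk is aligned (run on the ESX host)
    /opt/netapp/santools/mbrscan /vmfs/volumes/datastore1/vm01/vm01-flat.vmdk

That still means scanning one flat-vmdk at a time, though, so I would love to hear if there is a filer-side way to see which clients/files the partial writes are coming from.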