Active IQ Unified Manager Discussions

NetApp WFA 3.0(P2) REST temporary timeout internal server error 500


I was testing how long it takes for WFA to notice when an export policy is removed directly from a vserver. I tested this with a Ruby-based REST client, looping a WFA filter that simply returns the export rules of the export policy. Obviously, if the policy doesn't exist, it won't spit out any rules.


Intermittently, my REST client would report Internal Server Error 500. In the backend, it's always a message like:


Aug 25 10:49:37 WFA [com.netapp.wfa.ui.TesterFacadeImpl] Method testFilter("Filter{id=126, name=...","{vserver_name=vsbbcs...",true,1000) threw:
Aug 25 10:49:37 WfaException{Message: Temporary timeout failure, please retry, Cause: null}


Followed by a humongous Java stack trace.


Has anyone seen issues like this? How can I fix it? If it's a timeout issue, maybe we can increase some timeout interval parameter or make the REST service more robust? Any comments and/or suggestions would be highly appreciated. I've got a support case open for this, but it hasn't received much traction, unfortunately...
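For reference, the polling loop is roughly like the sketch below, with a retry bolted on so it survives the transient 500s. This is a simplified illustration, not my actual client: the retry counts, backoff, and the string match on "500" are my own assumptions, and the endpoint URL in the usage comment is a placeholder rather than the real WFA REST path.

```ruby
# Simplified sketch: run a block that calls the WFA filter-test REST
# endpoint, retrying when it fails with a transient HTTP 500. The retry
# policy (3 attempts, linear backoff) is illustrative, not WFA-specific.

MAX_ATTEMPTS = 3

def with_retries(max_attempts = MAX_ATTEMPTS, base_delay = 2)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue RuntimeError => e
    # Only retry the transient 500s; re-raise anything else, or when
    # we've exhausted our attempts.
    raise unless e.message.include?('500') && attempts < max_attempts
    sleep(base_delay * attempts) # linear backoff between attempts
    retry
  end
end

# Usage (placeholder URL, not the real WFA endpoint):
#   require 'net/http'
#   with_retries do
#     response = Net::HTTP.get_response(URI('https://wfa.example.com/rest/filters/126/test'))
#     raise "HTTP #{response.code}" unless response.is_a?(Net::HTTPSuccess)
#     response.body
#   end
```

A client-side retry obviously doesn't fix whatever is timing out server-side, but it would at least keep the test loop running through the intermittent failures.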


I can't speak to the error you are encountering, but I can talk a bit about how long it takes WFA to detect changes.  WFA polls its data source, Unified Manager (OCUM), every 15 minutes by default (I think).  OCUM likewise queries the connected storage controllers on its own schedule (also every 15 minutes, I believe), so it can take some time for changes made outside of OCUM/WFA to surface up through the layers.  The WFA reservation system was created to alleviate some of this, for example when WFA makes a change to the storage system but OCUM hasn't re-polled to detect it.


You can increase the polling frequency by editing the data source in WFA and changing the interval value.




Note that doing this will not increase the frequency at which OCUM polls the storage systems.  You can increase that setting too, but be aware that polling too frequently may put additional load on your OCUM server and on your storage systems.





Yep! I'm fully aware of the data-source polling and how it affects the overall accuracy of what WFA sees. We have it set to 5 minutes, which is a decent compromise between accuracy and (over)loading our OCUM instance. In unrelated news, we most likely have some kind of data issue between OCUM and WFA, as even after 1.5 hours our WFA instance still sees the deleted export policy... but that's another issue, heh.


I'm starting to think the REST service in WFA isn't as robust as it could be. Either that, or we have some data corruption in our WFA instance. Maybe upgrading to OCUM 6.x would help? I'm surprised no one else is getting slammed with these temporary-timeout-failure server error messages!