Active IQ Unified Manager Discussions

how frequently can we update the cache before it starts causing problems?

NetApp Alumni


Since WFA depends on data cached from DFM, and DFM depends on data polled from the real world, what limits should we put on this chain of data, and how frequently can we refresh the environment to take new objects into account?

For example, assume DFM refreshes its volume list every 15 minutes and WFA refreshes its cache from DFM every 10 minutes. This means that if WFA creates a new object, it could be up to 25 minutes before the new object is reflected in the WFA database.
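A quick back-of-the-envelope sketch of that worst case (assuming the two polling loops are independent and unsynchronized, so their delays simply add):

```python
def worst_case_staleness(dfm_poll_min: int, wfa_acquire_min: int) -> int:
    """Worst-case minutes before a new object shows up in the WFA cache.

    If an object is created just after DFM polls, it waits a full DFM
    cycle to be discovered, then (just missing a WFA acquisition) a
    full WFA cycle before it reaches the WFA cache.
    """
    return dfm_poll_min + wfa_acquire_min

print(worst_case_staleness(15, 10))  # the 25-minute scenario above
print(worst_case_staleness(3, 2))    # the tighter test settings: 5 minutes
```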

In testing we're setting the intervals closer to 3 minutes for DFM and 2 minutes for WFA, but in a real environment, where many requests may come in from an orchestration system at once, we need a way to (at least) keep names in sync. One of the most common failures during testing is trying to create an already-existing volume or qtree: because the cache hadn't been refreshed, the name wasn't found when checking for existing volumes or qtrees.
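One way to paper over a stale cache is to treat the cached name check as advisory and treat the controller's "already exists" error as the authoritative answer, retrying with a new name. A minimal sketch of that idea (the cache set, controller set, and error type here are hypothetical stand-ins, not WFA/DFM APIs):

```python
class AlreadyExistsError(Exception):
    """Raised when the requested name is already taken."""

def create_volume(name, cached_names, controller_names):
    """Create a volume, trusting the controller over the stale cache.

    cached_names: names known to the (possibly stale) WFA cache.
    controller_names: names actually on the controller (authoritative).
    """
    if name in cached_names:
        raise AlreadyExistsError(f"{name} found in cache")
    # The cache said the name was free, but the controller has the
    # final word -- it may hold objects the cache hasn't seen yet.
    if name in controller_names:
        raise AlreadyExistsError(f"{name} exists on controller (stale cache)")
    controller_names.add(name)
    return name

def create_volume_unique(base, cached_names, controller_names, tries=5):
    """Retry with a numeric suffix until a free name is found."""
    for i in range(tries):
        candidate = base if i == 0 else f"{base}_{i}"
        try:
            return create_volume(candidate, cached_names, controller_names)
        except AlreadyExistsError:
            continue
    raise RuntimeError(f"no free name after {tries} tries")
```

For example, with a cache that only knows about `vol1` while the controller already holds `vol1` and `vol2`, `create_volume_unique("vol2", ...)` would fall back to `vol2_1` instead of failing outright.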

In the long term, integration of the databases, etc. will probably help with some of this.

In the short term, understanding what settings we can tweak without blowing up DFM or spamming the controllers would be helpful...

Thoughts? Would something like an SNMP-trap system be a better option? For example, the controller raising a trap when a new object is created, and DFM then prompting WFA with the update?



Hi Peter,

You are correct. Currently, the process for refreshing the WFA 1.0 cache is that WFA acquires from DFM at a set interval, and DFM in turn has a polling interval, set to a number of minutes, before it acquires new information from the storage systems. With the scenario you laid out, yes... it would be approximately 25 minutes before WFA would recognize the new information on the storage systems.

For the current WFA 1.0.x versions there are no general recommendations we can provide for how to tune both the DFM and WFA acquisitions. Every customer situation is different and needs to be assessed before doing any tuning of the polling/acquisition intervals.

That said, this is a limitation that the WFA team is aggressively working on addressing. The details of how this is implemented may change; however, the direction is that for the next WFA release (WFA 1.1, targeted for Jan 2012) all filters and finders will be aware of previously run workflows even if DFM hasn't been updated yet. This means that any new workflows will be able to "see" the objects created (and space consumed) by previous workflow runs, in addition to the WFA cache from DFM. In this way the new workflows should have a more complete picture of the customer's environment.
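The 1.1 behavior described above can be pictured as an overlay: objects reserved by previously run workflows are merged with the DFM-derived cache before filters and finders run. A rough sketch of the idea (the data structures are illustrative, not the actual WFA schema):

```python
def effective_view(dfm_cache, workflow_reservations):
    """Merge DFM-cached volumes with volumes created by prior workflow
    runs that DFM hasn't discovered yet.

    Both arguments map volume name -> size in GB; workflow
    reservations win on conflict, since they are newer than the cache.
    """
    merged = dict(dfm_cache)
    merged.update(workflow_reservations)
    return merged

cache = {"vol_a": 100}    # what DFM last reported
pending = {"vol_b": 50}   # created by an earlier workflow run
view = effective_view(cache, pending)
# A finder checking name collisions or consumed space now sees both volumes.
```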

Hope this helps,


NetApp Alumni

Good news. Glad it's being worked on - shame there's nothing for me now, but at least I know to stop looking


Hi Kevin,

It would be nice in the future to have something "like" the finders that just runs commands on the storage and returns values. The current finders are good for some jobs (like finding an aggregate with low overcommitment and a good amount of free space), but it would also be nice to be able to interrogate the storage directly, to check whether a volume or snapshot already exists, or to return the last created snapshot, without going through the cache tables. This could probably be done with some variation on the current command functionality, or with direct API access to the NetApp storage (to query for aggregates, volumes, snapshots, and LUNs) in "real time".

Another nice thing would be the ability to force DFM, through some workflow functionality, to "discover" new data (say, running a single discovery for volumes or snapshots on a specific array).

I don't know how difficult it would be, but it would probably be a nice-to-have feature.


NetApp Alumni

To close this question off: WFA 1.1 introduced the ability to cache info on the objects it creates.

The caveat here is that you need to funnel all create/destroy events of the same kind through WFA, or you can still run into a "race condition" if, for example, an operator manually creates a volume between WFA cache loads.


Also worth noting: we now have the ability to trigger a DFM discovery process and wait for it to complete.

Might come in handy in some situations.

Just my two cents...



Nice, how do we trigger discovery?


There are two commands available in WFA, in the storage scheme, for this:

Refresh monitors on array - Refreshes DataFabric Manager server monitors on an array so that new objects may be discovered by the DataFabric Manager.

Wait for monitor refresh - Waits for the DataFabric Manager server monitors to refresh so that object discovery may complete.
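Outside WFA, the same trigger-then-wait pattern looks roughly like this (the `refresh_monitors` and `monitors_refreshed` callables are placeholders for whatever mechanism triggers and reports the DFM monitor refresh; they are not real DFM APIs):

```python
import time

def refresh_and_wait(refresh_monitors, monitors_refreshed,
                     timeout_s=600, poll_s=5, sleep=time.sleep):
    """Trigger a DFM monitor refresh, then poll until discovery completes.

    refresh_monitors(): kicks off the refresh
                        (like "Refresh monitors on array").
    monitors_refreshed(): returns True once object discovery is done
                          (like "Wait for monitor refresh").
    Returns the number of seconds waited.
    """
    refresh_monitors()
    waited = 0
    while not monitors_refreshed():
        if waited >= timeout_s:
            raise TimeoutError("DFM monitors did not refresh in time")
        sleep(poll_s)
        waited += poll_s
    return waited
```

A workflow step using this pattern would typically bound the wait with a timeout, as above, so a stuck monitor fails the workflow instead of hanging it.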