Active IQ Unified Manager Discussions

Updated Direct 7mode And CDOT Script Data Sources


Here are updated versions of the direct data sources I put together (the communities site doesn't seem to let me edit the existing article). The 20150703 update adds support for WFA 3.0 and provides much more complete cDOT (cm_storage) coverage, including cDOT 8.3 features. The 20150728 update fixes storage pool support. The 20150802 update adds support for WFA 3.1RC1 and fixes failover_group support in WFA 3.0.


Not everyone has a pre-existing OCUM installation, and installing and configuring OCUM for both 7-mode and cDOT is not necessarily a small task. These datasource types populate the standard storage and cm_storage schemas, which can then be used as normal with many workflows.


The WFA 3.0 cDOT direct data source is almost complete. It now also populates disk, disk_aggregate and efficiency_policy, and it includes the cDOT 8.3 information added in WFA 3.0: broadcast_domain, failover_group, ipspace, storage_pool*. There may be some minor gaps in some of the data - please let me know if you identify anything. (Note: the WFA 2.2* cDOT data source is missing all of these items.)


The WFA 3.1RC1 cDOT data source adds support for cluster_peer, vserver_peer and SVM DR, and all new fields added in 3.1RC1 are populated. It also populates CIFS domain information.


The 7-mode source is less complete. It contains: array, vfiler, aggregate, interface, volume, vsm, qtree, lun. It is missing: array_license, cifs_share, cifs_share_acl, dataset, disk, igroup, igroup_initiator, lunmap, nfs_export, object_comment, quota, snapshot, snapvault, user_quota.


Configuration is simple:

  1. Provide credentials for the arrays/clusters in WFA credentials.
  2. Add a datasource - select the version appropriate to your WFA version (the 7-mode source works with 2.2-3.0, while the cDOT source differs between 2.2 and 3.0).
  3. Set the Hostname of the datasource to a comma-separated list of array or cluster admin names or addresses, or set up one datasource per array/cluster.
  4. Configure the interval to something reasonable (say 15 or 30 minutes).
  5. Increase the timeout for larger environments (in my testing, collection takes 15-90s per array/cluster - it may take longer when there are large numbers of objects or if there is significant latency between the WFA system and the array/cluster).
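For step 5, a rough rule of thumb is to size the acquisition timeout from the worst-case per-array collection time quoted above (a hypothetical helper sketched in Python - the data sources themselves are PowerShell, and `suggested_timeout_ms` is an invented name, not anything the data source provides):

```python
def suggested_timeout_ms(num_arrays: int, worst_case_s: int = 90,
                         headroom: float = 2.0) -> int:
    """Suggest a datasource timeout in milliseconds, assuming arrays are
    collected sequentially at up to worst_case_s seconds each, with some
    headroom for object count and WAN latency."""
    return int(num_arrays * worst_case_s * headroom * 1000)
```

For example, a datasource covering four clusters would get 720000 ms (720 s), comfortably above 4 x 90 s.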

There are some flags that can be passed in (comma-separated) through the data source's "User name" field:

  • strict - fail collection if any error occurs (default if there is only one array/cluster)
  • nostrict - don't fail collection if an error occurs (for a single array/cluster datasource)
  • debug - log more information
  • log=<logpath> - where to log (defaults to ..\..\log\direct_<schema>.log - typically C:\Program Files\NetApp\WFA\jboss\standalone\log\)
  • timeout=<timeout_in_milliseconds> - API timeout (defaults to 180000)

The Password, Database and Port fields are unused.
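To make the flag behaviour concrete, here is a minimal sketch of how such a comma-separated "User name" string could be parsed (illustrative Python only - the actual data sources are PowerShell, and `parse_flags` is an invented name):

```python
def parse_flags(user_name: str) -> dict:
    """Parse the comma-separated flags described above into options.
    strict=None means 'unset' (the script defaults to strict when the
    datasource covers only one array/cluster)."""
    opts = {"strict": None, "debug": False, "log": None, "timeout": 180000}
    for token in filter(None, (t.strip() for t in user_name.split(","))):
        if token == "strict":
            opts["strict"] = True
        elif token == "nostrict":
            opts["strict"] = False
        elif token == "debug":
            opts["debug"] = True
        elif token.startswith("log="):
            opts["log"] = token[len("log="):]        # log file path
        elif token.startswith("timeout="):
            opts["timeout"] = int(token[len("timeout="):])  # API timeout, ms
        else:
            raise ValueError(f"unknown flag: {token}")
    return opts
```

So a "User name" of debug,timeout=300000 would enable extra logging and raise the API timeout to 300 s, while leaving strict/nostrict to the single-vs-multiple array default.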


Reservations, etc., should work as normal. This has been developed against WFA 2.2RC1, 2.2, 2.2.1, 3.0RC1 and 3.0, and will likely require updates to work with future versions.


Upgrades from previous versions should be seamless.


Changes from previous version:

  • Added WFA 3.0 schema support
  • Accommodate changed MySQL behavior in WFA 3.0
  • Single array/cluster is now automatically strict
  • Added nostrict parameter
  • 7-mode direct data source handles pre-7.3.3 systems
  • 7-mode direct data source doesn't fail if a vFiler is stopped/inconsistent
  • Fix for storage_pools in 3.0 cDOT data source (20150728)
  • Fix for broadcast_domain (3.0/3.1RC1)
  • Added support for WFA 3.1RC1, populates all new schema tables and fields.
  • No longer imports into WFA 2.2RC1


The attachment doesn't unzip as a .dar file like the prior version did. Is there another way to load this into WFA besides import?


Hi Richard,


The dar file is directly attached, not wrapped in a zip as the previous communities site used to do. You should be able to directly import it into WFA.


Note that some browsers may save the file with a zip extension (I've seen IE do this - dar files are just zip files anyway), so you may need to rename the file to .dar first.
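If in doubt, you can verify the download before renaming it - since dar files are just zip files, they carry the standard zip signature (a hypothetical helper in Python; `ensure_dar` is an invented name):

```python
import os
import zipfile

def ensure_dar(path: str) -> str:
    """Check that the downloaded file is really a zip/dar archive and
    make sure it has a .dar extension, renaming it if needed."""
    if not zipfile.is_zipfile(path):       # checks the zip 'PK' signature
        raise ValueError(f"{path} is not a zip/dar archive")
    base, ext = os.path.splitext(path)
    if ext.lower() == ".dar":
        return path
    new_path = base + ".dar"
    os.rename(path, new_path)              # e.g. foo.zip -> foo.dar
    return new_path
```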





I get a forehead smack for that one. IE did save it as a zip and I just took it at its word. Downloaded with Firefox and it worked fine. Thanks!


After upgrading to WFA 3.0 and importing this Data Source Type, I created a new data source with Data Source Type "Direct Clustered ONTAP - WFA 3.0".

Acquiring the data source fails with error "Row 1 doesn't contain data for all columns".

Any help would be greatly appreciated.




Hi Narendra,


The updated version just posted will address your issue, as well as make the data more complete and include the new cDOT 8.3 data items.





AWESOME !! Thank you so much !!

After importing this dar file, I don't see error now on acquire data source.


Updated version - adds support for WFA 3.1RC1, SVM DR, and fixes some bugs.


Using the directDataSources_20150802.dar version, I discovered two places in the cDOT WFA 3.1RC1 data source where the broadcast_domain_id attribute wasn't being loaded properly (the "failover_group" and "port" tables). The "port" table problem could cause some workflows to not function properly. I discovered the problem trying to use one of the "Create SVM" workflows.


I updated the data source (from 1.0.1 to 1.0.2) and am attaching the updated version, edited and tested under WFA 3.1P2. I've attached the dar file and a txt of the PoSH code.



Added missing service-is-up checks for NFS, iSCSI and FCP (already had CIFS). Bumped version to 1.0.3. Tested on WFA


Hi Jason8,


Our WFA 4.0 has Tim Kleingeld's PS script imported and used as a direct cmode datasource type, but I'm getting the error: Error on clustername [last action 'Collecting fcp port data']: There is an error in XML document (1, 4247). You mention your updated DAR checks FCP now, and I was wondering if it would correct my problem? Do you know how I can import your DAR without overwriting the existing one, because other controllers are using it?






I don't think you can import the two different versions of the dar and run them in parallel. The later version will update/overwrite the previous one. I suggest you spin up another WFA instance and test v1.0.3 to see if it resolves your issue.