
Active IQ Unified Manager Discussions

How to create your own data source

skellner

I want to create my own data source in WFA. It will contain IP addresses, interfaces, IPspaces, and netmasks for new vFilers. I have already created a dictionary object and the data source object. Now I'm looking for a PowerShell code example that imports an Excel file into the cache database. Any hints or code examples are highly appreciated.

Thanks in advance
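Not an official example, but here is a minimal sketch of the kind of script I have in mind, assuming the Excel sheet is first saved as CSV; the file name 'vfiler_network' and all paths are made-up placeholders:

```powershell
# Hypothetical sketch: turn an Excel export (saved as CSV) into the
# tab-separated file that WFA's acquisition step loads into the cache.
# 'vfiler_network' and all paths are example names, not WFA defaults.
$rows = Import-Csv 'C:\temp\vfilers.csv'   # columns: ip,interface,ipspace,netmask,hostname
$lines = foreach ($r in $rows) {
    # A leading \N leaves the hidden auto-increment 'id' column to MySQL
    "\N`t$($r.ip)`t$($r.interface)`t$($r.ipspace)`t$($r.netmask)`t$($r.hostname)"
}
# The output file name must match the dictionary entry name
$lines | Out-File -FilePath '.\vfiler_network.csv' -Encoding ascii
```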

1 ACCEPTED SOLUTION

goodrum

I just published the new How-To video for this.  I ran out of time to get it posted this past Friday.  Please take a look and let me know if you have any other questions.  This should get you off the ground.

https://communities.netapp.com/videos/3351

Jeremy Goodrum, NetApp

The Pirate

Twitter: @virtpirate

Blog: www.virtpirate.com



brycedeutsche

Hi Goodrum

Very useful video thanks. How many lines (entries) does the WFA database support? I have an input of about 80K entries but don't seem to see all the data after I acquire from the data source.

Regards

B

goodrum

I am not sure as I have never tried to test that many entries.  How are you validating that you are not seeing the data?  Are you accessing MySQL directly to do a row count?


brycedeutsche

Hi Jeremy

Not having a lot of luck reading the database – I receive the following error

Can the database be opened in SQLDeveloper?

I did see the line count @ 9999

Regards

Bryce

Bryce Martin

GT Engineering

6/8 Bishopsgate, EC2N 4DA London, United Kingdom

Tel: +44 (0) 20 754-53566

Mob: +44 (0)7914 540882

Email: Bryce.Martin@db.com

goodrum

You will need to enable remote MySQL connections and use the default wfa user - wfa/Wfa123

  1. Stop the WFA Database Service
  2. Edit 'C:\Program Files\NetApp\WFA\mysql\my.ini'
  3. Comment out the last line, changing "bind-address=localhost" to "#bind-address=localhost"
  4. Save the file and close
  5. Start the WFA Database Service

Now you can access the database remotely and verify the row counts.  However, the comment about the 9,999 row count sparked a memory.  The default number of results returned by a filter is limited to 9,999, and this limit is hard-coded.  Take a look at the previous communities post where we discussed this situation.

https://communities.netapp.com/thread/24521
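Once remote access is enabled, a quick way to check the real row count is to query the cache table directly with the bundled mysql client (a sketch; the host, scheme, and table names are placeholders, wfa/Wfa123 is the default user mentioned above):

```powershell
# Count rows in the cache table directly; 'my_scheme.my_table' is a placeholder
& 'C:\Program Files\NetApp\WFA\mysql\bin\mysql.exe' -h wfa-host -u wfa -pWfa123 `
    -e 'SELECT COUNT(*) FROM my_scheme.my_table;'
```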



brycedeutsche

Hi Jeremy

I still receive the same error, which is a pity.

However, thanks for the link – I am experiencing the same issue, so I have inserted a ‘search criteria’ field to narrow down the results and this works. The other alternative would be to have multiple queries based on region which would reduce the number of entries.

Still on the subject of Data Sources: is it possible to run LDAP queries? For example, to search for user email addresses I’d like to run an LDAP query against Active Directory – have you seen something similar done?

Regards

Bryce


goodrum

Remember, WFA filters will only return the most relevant results, and that is it.  The only time this is not 100% the case is when you are using repeat rows.  The limit is 9,999 rows, which is hard-set to provide the most relevant results while not causing a performance hit on collection.

If you are using these values as User Inputs, then it would theoretically take you forever to 'find the right value' if you even got back 9,999 values.  Leverage Filter Values in your queries.  For example:

SELECT username FROM ad.users

Instead try:

SELECT username FROM ad.users WHERE username LIKE '%${filterQuery}%'

Now you have a new user input that acts as a filter, so you can provide any part of the name to get back only the most relevant values.

 

As far as building a Datasource based on AD or LDAP information, this is doable, though nothing is available to date based on what I have heard.  You could write a PoSH-based script to gather the data as CSV and use the How-To video (referenced here) to create your own Datasource.
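As a rough, untested sketch of that approach using the .NET DirectorySearcher (the LDAP filter and output path are examples, not anything WFA-specific):

```powershell
# Hypothetical sketch: pull mail addresses from Active Directory and write
# them in WFA's tab-separated format (\N = auto-generated primary key).
$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter   = '(&(objectCategory=person)(mail=*))'
$searcher.PageSize = 1000        # page past the server-side 1000-entry limit
[void]$searcher.PropertiesToLoad.Add('mail')
$lines = foreach ($result in $searcher.FindAll()) {
    "\N`t$($result.Properties['mail'][0])"
}
$lines | Out-File 'C:\temp\users.csv' -Encoding ascii
```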

brycedeutsche

Hi Jeremy

I managed to get the LDAP Data Source working via script. All seemed ok with small files but when I run the full query I receive a 'The transaction is not active!' error.

On WFA 2.0 it does this after 310 secs; on v2.1 it gets further but still times out with 'ARJUNA016063: The transaction is not active!'

I found a query of 26,000 lines would work, but 44,000 and 86,000 lines timed out.

The content is pretty simple as it's just email addresses...e.g.

\N     joe.bloggs@company.com

..........

This is in tab-delimited ASCII format. Smaller files work but still take a very long time considering their size: 20 lines takes 36 seconds, and the 26,000-line file was taking 877 seconds to complete, which seems very long for a single field.

Is this something you have seen before - would you consider that a large input?

Bryce

sinhaa

Hi Bryce,

      'ARJUNA016063: The transaction is not active!' indicates a timeout by the JBoss transaction manager. Not sure if it will help or not, but can you edit the following line in the file:

  1. Open the file standalone-full.xml located in the WFA\jboss\standalone\configuration folder
  2. Find the line <coordinator-environment default-timeout="300"/>. The default timeout is set to 300 sec (5 min).
  3. Modify this default-timeout to some larger value, like 1000 sec.
  4. Restart the NetApp WFA Server service.
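For reference, after step 3 the line in standalone-full.xml would look something like this (1000 sec is just an example value):

```xml
<coordinator-environment default-timeout="1000"/>
```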

Now try to acquire and see if it works.

warm regards,

sinhaa

If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.

brycedeutsche

Hi sinhaa

Thanks for the reply.

The WFA\jboss\standalone\configuration folder doesn't exist on my WFA v2.0 install - is the setting in another file in WFA v2.0? Or are there any log files which would highlight the issue?

Regards

Bryce

sinhaa

Bryce,

    Send me a mail at the below id. I'll take this forward via email.

My mail id : sinhaa at netapp dot com

warm regards,

Abhishek


skellner

I gave it a new try with a new scheme. Now WFA tries to import the data and I get the error "data truncated for column id at row 1". However, I have no column named id. Do I have to put one in my input file? Or, more generally, what should my input file look like when I have the columns ip, interface, ipspace, netmask, hostname?


skellner

Hi Jeremy,

great video, thanks. I exported the data from an Excel spreadsheet as tab-separated, but it doesn't seem to work. I get the error "data truncated at row 1". Do I have to add the \N at the beginning of each line in the file, or is there any way to get the input file directly from the Excel spreadsheet? And can you post the PPT file from the video as well?

My file looks like this:

1    5.242.234.8    vif3-231    its    255.255.255.128    v998spnvv1013e

2    5.242.234.9    vif3-231    its    255.255.255.128    v998spnvv1020e

3    5.242.234.10    vif3-231    its    255.255.255.128    v998spnvv1029e

4    5.242.234.11    vif3-231    its    255.255.255.128    v998spnvv1032e

5    5.242.234.12    vif3-231    its    255.255.255.128    v998spnvv1049e

6    5.242.234.13    vif3-231    its    255.255.255.128    v998spnvv1056e

7    5.242.234.14    vif3-231    its    255.255.255.128    v998spnvv1058e

Best regards

Stefan Kellner

skellner

Got it. With \N it works.
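For anyone else hitting this: the working file simply replaces the explicit row numbers with \N in the first tab-separated column, e.g.:

```
\N	5.242.234.8	vif3-231	its	255.255.255.128	v998spnvv1013e
\N	5.242.234.9	vif3-231	its	255.255.255.128	v998spnvv1020e
```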

goodrum

Stefan,

Thanks for checking out the video, I am happy to hear that it helped.  Yeah, the \N seems to be very important.  Also, you do not need to add an ID field in your file as it is automatically created with the import.  Please don't forget to mark this question as answered.


bdave

The first column seems to be a hidden column called 'id' for the primary key in the WFA database.  The \N tells mysql it's a null field, so mysql creates its own value.  You could put your own key in there instead of \N if you wanted to. 

View the Data Source script for the vc scheme under "Data Source Types" --> "VMware vCenter" as an example.  This one script updates several dictionary objects at one time.  Note that some of the tab-delimited CSV files it creates have \N as the first column, while others use a unique identifier: for example, the Host object file ($hostFile) starts each row with a $hostId, which it gets from the Id.GetHashCode() call on the object returned by Get-VMHost.  If you look at the Host dictionary object, there is no HostID listed.  The dictionary object starts with the 2nd column in the CSV, the Name parameter.

Add-Content $hostFile ([byte[]][char[]] "$hostId`t$name`t$ip`t$os_version`t$virtualCenterIp`t$cluster`t$datacenterName`n") -Encoding Byte

I looked at the table structure in mysql, and confirmed the first column is the primary key for the table, and it doesn't map back to anything listed in the dictionary object definition.

mysql> use vc;
Database changed
mysql> show tables;
+-----------------+
| Tables_in_vc    |
+-----------------+
| data_store      |
| host            |
| lun             |
| nas_share       |
| virtual_disk    |
| virtual_machine |
+-----------------+
6 rows in set (0.00 sec)

mysql> show columns from host;
+-------------------+--------------+------+-----+---------+----------------+
| Field             | Type         | Null | Key | Default | Extra          |
+-------------------+--------------+------+-----+---------+----------------+
| id                | int(11)      | NO   | PRI | NULL    | auto_increment |
| name              | varchar(255) | NO   | MUL | NULL    |                |
| ip                | varchar(255) | YES  |     | NULL    |                |
| os_version        | varchar(255) | YES  |     | NULL    |                |
| virtual_center_ip | varchar(255) | NO   |     | NULL    |                |
| cluster_name      | varchar(255) | YES  |     | NULL    |                |
| data_center_name  | varchar(255) | YES  |     | NULL    |                |
+-------------------+--------------+------+-----+---------+----------------+
7 rows in set (0.03 sec)

mysql>

Hope this helps,

Dave

MAHESH1111111

Hi Dave,

          I'm getting an 'Error running data acquisition script: prefix string too short' error when I added the CSV file as a data source. I followed all the steps provided in the video, but at the end it failed. Can you please help?

Thanks

Mahesh.P

goodrum

I have not run into this error before.  Have you tried to follow the steps in the video with no customization to ensure that you have them all?  I covered all of the steps (https://communities.netapp.com/message/107420#107420) in an earlier part of this thread.  The script that I used was very simple: it had WFA copy the CSV file (tab-delimited) to the tmp location (%INSTALLDIR%\server\jboss\tmp\wfa) and then import it.  The dictionary and the CSV file need to have the same name, and you need to ensure that there is an equal number of tab-separated fields (including the first \N for the primary key) in every row.
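That copy-and-import pattern amounts to something like this (a sketch, not the exact script from the video; the source path and file name are placeholders):

```powershell
# Hypothetical sketch of the simple data source script: stage a pre-built
# tab-delimited file where WFA's acquisition will pick it up and import it.
# The file name must match the dictionary entry; every row needs the same
# number of tab-separated fields, starting with \N for the primary key.
$source = 'C:\data\my_dictionary.csv'              # placeholder path
$wfaTmp = 'C:\Program Files\NetApp\WFA\server\jboss\tmp\wfa'   # %INSTALLDIR%\server\jboss\tmp\wfa
Copy-Item -Path $source -Destination $wfaTmp -Force
```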

It might help to provide a sample of what you did in an effort to better understand the current issue.


MAHESH1111111

Hello Goodrum.

Now it works. I think I missed some part of the procedure. Thanks

Regards

Mahesh.P

dekel

Hi,

Are you using WFA 2.0.1?

In WFA 2.0 there used to be a bug where the custom schema name had to be at least 3 characters long.
