Failure in data acquisition for a custom data source type (csv)
2015-01-05 07:49 AM - 10,181 Views
Hi All,
I was following the How-To Video for creating a custom data source type using a CSV file, and it was working fine until I manually deleted all the data from the table. Now I'm getting the following error:
Incorrect Integer Value:'??' for column 'id' at row 1
The CSV file is delimited with TABs and starts with "/N", just like the example file in the video.
Any ideas please?
Thanks
Roi Becidan.
15 REPLIES
Can you try resetting the schema? That will remove all previous data.
@ The CSV file is delimited with TABs and starts with "/N", just like the example file in the video.
-------
It should be \N and not /N at the beginning of a row in the CSV file. So make this change, reset the schema if you want, and try to acquire again.
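For example, a row for a table with an auto-generated id followed by two data columns would look like the line below, where <TAB> stands for a literal tab character and the values are just placeholders:
\N<TAB>value1<TAB>value2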
If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.
Hi,
It was a typo; the CSV file starts with "\N".
I've also tried to reset the schema, but it didn't work.
Here is the content of my csv file (called "vsm_issues.csv"):
\N fas3070-1.ps.lab court1 fas3070-2 court1 destination volume too small; it must be equal to or larger than the source volume 144.22:06:56
\N fas3070-1.ps.lab court2 fas3070-2 court2 destination volume too small; it must be equal to or larger than the source volume 144.22:06:56
\N 10.68.65.14 vol4 fas3070-2 vol4 destination volume too small; it must be equal to or larger than the source volume 7.23:55:45
I appreciate your help.
Roi Becidan.
RoiBeci has accepted the solution
Roi,
I was able to use the data contents you provided and acquisition worked for me, so there is nothing wrong with the data in the CSV file. You said you manually deleted/edited the CSV file, right? Which editor did you use, and what is its encoding setting? I think that is what caused this. It looks like your editor uses an encoding such as UCS-2 Big Endian or Little Endian, or something else.
Errors like "Incorrect Integer Value: '??' for column 'id' at row 1" or "Incorrect integer value: 'N' for column 'id' at row 1" indicate that the CSV file has the wrong encoding. The required encoding is UTF-8 without BOM (Byte Order Mark) or ANSI.
To fix it, open this CSV file in a text editor like Notepad++ and choose Encoding -> Convert to UTF-8 without BOM from the menu. Save the file and acquire again. I think this will work. You can use other editors too, but you need to do the same thing, i.e. change the encoding type. Or send me the file by email and I'll return it to you with the fix.
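If you would rather do the conversion from PowerShell than from an editor, a minimal sketch along these lines should also work (it assumes the file is vsm_issues.csv in the current directory and that its current encoding can be detected from a BOM, which is the usual case for files written by PowerShell's default redirection):
$path = (Resolve-Path ".\vsm_issues.csv").Path               # full path to the CSV; adjust the name as needed
$text = [System.IO.File]::ReadAllText($path)                 # read, using the BOM to detect the current encoding
$utf8NoBom = New-Object System.Text.UTF8Encoding($false)     # $false = do not emit a BOM
[System.IO.File]::WriteAllText($path, $text, $utf8NoBom)     # rewrite the same file as UTF-8 without BOM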
warm regards,
sinhaa
If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.
SINHAA,
That was it!
I was creating this CSV file using a PowerShell script. Now I need to convert the output file to UTF-8 without BOM.
Thanks!
Roi,
You can also create the CSV file from PowerShell without having to convert the result file to UTF-8 without BOM afterwards. You need to use the -Encoding Byte option, something like the code example below.
$UserFile = "./Users2.csv"
New-Item -Path $UserFile -type file -Force                                     # create (or overwrite) the output file
# Cast each character to its byte value and append the raw bytes, so no BOM is ever written
Add-Content $UserFile ([Byte[]][Char[]] "\N`tabhi`tsinhaa`n") -Encoding Byte
Add-Content $UserFile ([Byte[]][Char[]] "\N`t$first_name`t$last_name`n") -Encoding Byte   # $first_name / $last_name are set earlier in your script
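A quick way to confirm that the output really has no BOM is to look at the first bytes of the file (Windows PowerShell syntax; in PowerShell 6+ the -Encoding Byte parameter is replaced by -AsByteStream). A UTF-8 BOM would show up as 239 187 191, while here the first byte should just be 92, the backslash of \N:
Get-Content $UserFile -Encoding Byte -TotalCount 3           # first three bytes of the file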
sinhaa
If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.
Nice one.
Thank you, I really appreciate your help 🙂
How can I actually import UTF-8 characters into the database using this technique? I'm trying to build a data source for Active Directory. We have lots of users and groups with Scandinavian characters in their names, and I have yet to find a way to import those.
If I use -Encoding Byte it obviously fails, and if I try to use -Encoding UTF8 I get all sorts of errors like "Incorrect integer value: 'N' for column 'id' at row 1" or "Incorrect integer value: '500' for column 'id' at row 1", depending on whether I have \N or a numeric id in the first field of the file.
It took me many failures like the ones you are facing before I figured out a way to get this done. I've tried with Danish, Swedish and Latin letters, and they all get acquired fine.
Wait till tomorrow; I'll post it in detail along with an example.
It's night time in my country and I've had a long day.
sinhaa
If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.
Can you post some examples of the words you would like to acquire? I'm trying to give sample code based on some real-world names.
sinhaa
If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.
Thanks for looking into this. Some example words: Järvinen, ylläpito, käyttäjähallinta, henkilökunta. I hope the letters display correctly for you?
Check out my post: How to acquire Non-English European characters in WFA using script-based acquisition: Solution
This should address your requirement.
warm regards,
Abhishek Sinha
If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.
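For reference, one way to get non-ASCII characters like these into the CSV from PowerShell is to build the rows in memory and write them with a UTF-8 encoder that does not emit a BOM. This is only a minimal sketch, not necessarily the method in the post linked above; the file name, column layout and person names are illustrative:
$UserFile  = Join-Path $PWD.Path "Users3.csv"                # illustrative output path
$utf8NoBom = New-Object System.Text.UTF8Encoding($false)     # $false = do not emit a BOM
$rows = @(
    "\N`tMatti`tJärvinen`tylläpito",                         # \N in the first field, remaining fields tab-separated
    "\N`tMaija`tVirtanen`thenkilökunta"
)
[System.IO.File]::WriteAllText($UserFile, ($rows -join "`n") + "`n", $utf8NoBom)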
Alternatively, I would like to drop this DB from the WFA MySQL server, but when I try to drop the table I get an "access denied" message.
I am using Toad for MySQL, connected to "localhost" using wfa\Wfa123.
Is there a root user for the WFA MySQL server that can drop my custom-made DB?
Roi.
The user wfa\Wfa123 doesn't have permission to drop a database.
So if you want to drop this database/schema, you need to delete the dictionary from the WFA UI.
If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.
