Importing .CSV from Symantec Q&A v4.0 (legacy flat-file DBMS)

  • Hello all, first-time poster. I wasn't sure if this belongs here or under data corruption but opted for this one...here goes:

    I'm using the Import/Export Wizard in SSMS. In the .CSV file exported from Q&A, I have tried:

    using commas as delimiters with quotation marks as text qualifiers

    using semicolons as delimiters with quotation marks as text qualifiers

    changing the file to plain text (.txt)

    renaming the file to drop its extension ('clients.txt' to 'clients')

    On the SSMS side I have tried many permutations of the following configuration options:

    Data source: Flat File Source

    Locale: English (US)

    Unicode: both selected and unselected

    Code Page: US ASCII and 1252 (ANSI) (the file was exported in what Q&A called 'Standard ASCII')

    Text Qualifier: double- and single-quotes

    DataType (on the comments field, which is never more than about 100 chars): DT_STR and DT_TEXT

    ...and each time I get the following error:

    Error 0xC02020A1: Data conversion for column "col" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page."

    I have gotten a few different errors depending on the flat file, and a few of the flat db files do import as they should. Also of note: on the ones that won't import into SQL, the preview shows the correct values going into the correct fields, and we're talking 3000+ records per db here, so manually hunting down bad fields is out of the question. Is there possibly a character like '#' or '@' throwing the import wizard off that I could find-and-replace? (See the sketch at the end of this post for the kind of scan I have in mind.) Can I do anything differently in T-SQL/BULK INSERT that will override this error, leave a field or two blank, and import the majority of the flat file without error? Thank you all for your thoughts and comments; I look forward to resolving this issue soon.

    PS: SSMS v9.00.1399.00, OS Microsoft v6.0.6001 (WS2K8 Ent.), .NET Framework v2.0.50727.1891, if any of that's useful. Also of note: I'm trying to append to an existing table if possible, though I have also tried creating a new table with the thought of then writing a webpage to port the values over to the existing dbo table. I also suspect that the users of this antiquated Q&A program may have used quotation marks in some of the 'comments' values (e.g. "spoke with Mary's boss, will c/b on 1-3-13"), but again, with 3000+ records there's not a lot I can do about that. It's worth mentioning that I haven't seen any like that as of yet. I KNOW that commas are used, especially in company names with an "LLC" in them, hence the text qualifiers.
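
    For reference, here's the kind of scan I have in mind for the find-and-replace idea: pull each raw line into a one-column staging table and flag anything outside printable ASCII. This is an untested sketch; the path and table name are made up:

    -- Untested sketch: load each raw line of the export into one wide column.
    CREATE TABLE #RawLines (Line varchar(8000));

    BULK INSERT #RawLines
    FROM 'C:\import\clients.txt'      -- made-up path
    WITH (FIELDTERMINATOR = '\0',     -- no field split; keep the whole line in one column
          ROWTERMINATOR   = '\n',
          CODEPAGE        = 'RAW');   -- no code-page translation on the way in

    -- Flag lines containing anything outside printable ASCII (space through tilde).
    SELECT Line
    FROM #RawLines
    WHERE Line LIKE '%[^ -~]%' COLLATE Latin1_General_BIN;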

  • When you are running the import and you get to the 'Select source tables and views' screen have you tried to edit the mappings to match your existing table?

    You could also try opening your .csv in Excel and change the data types there. I've had better luck importing smaller data sets from Excel than text files or Access.

    If you are importing into a production database are you doing a backup first? 🙂 Better yet, create a test copy of your database. It seems every jr. person we've had learns this the hard way...

  • When you are running the import and you get to the 'Select source tables and views' screen have you tried to edit the mappings to match your existing table?

    Not sure if I'm doing it correctly, but what I did was edit the first line of the .CSV/.TXT file and put the column names of the recipient table in double-/single-quotes to match the table I'm appending (example below). There are more columns in the destination table than there are in the source file. I also tried creating a new table, to no avail; it says it fails on data row 6, which has commas within the qualifiers but is otherwise well under the char limit (160 or so, I believe).
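
    For clarity, the edited first line now looks something like this (column names made up to illustrate):

    "ClientID","ClientName","Phone","Comments"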

    You could also try opening your .csv in Excel and change the data types there. I've had better luck importing smaller data sets from Excel than text files or Access.

    ...hadn't occurred to me to try that, good idea.

    If you are importing into a production database are you doing a backup first? 🙂 Better yet, create a test copy of your database. It seems every jr. person we've had learns this the hard way...

    Weekly Fulls, Nightly Diffs, Hourly T-Logs 🙂 we're in the medical IT business, so it's a must. Actually, this one is a test DB while it's in development. I'm still working on the front-end (ColdFusion 8), but I wanted a decent amount of real data in the database instead of the six or so records I'd key in before getting lazy, if I had to do it manually. And when it's ready to go to production, all the Q&A DBs will need to come over. The source DOS systems are backed up, yes. Thanks for checking, though!

  • Okay, so I opened the .CSV with OpenOffice Calc and all the columns looked great. I tried to import it into a new table in SSMS and got:

    Error 0xC020901C: Data Flow Task: There was an error with output column "F1" (15) on output "Excel Source Output" (9). The column status returned was: "Text was truncated or one or more characters had no match in the target code page."

    Of note: when I open the .CSV with OO Calc, the 'Text Import' dialog pops up to ask what the delimiter and text qualifiers are. It had semicolon and space checked, so I unchecked those and changed it to comma. Most intriguing, though, is that the character set indicated is "Western Europe (Windows-1252/WinLatin 1)". Should it be a US charset to import properly on a server that has all its software registered in the US? Not sure if that's relevant, but there it is. Also, the Q&A software may well have come from overseas; it's in English but also offers Deutsch in the config screen. I thought 1252 was a universal standard, but I don't really know much about that sort of thing. Any thoughts on this development?
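
    One thing I'm tempted to try: if the file really is Windows-1252, BULK INSERT lets me name the code page explicitly instead of letting the wizard guess. A rough sketch, with a made-up table and path (and I realize plain BULK INSERT won't strip the quote qualifiers by itself):

    BULK INSERT dbo.Clients_Staging     -- made-up staging table
    FROM 'C:\import\clients.csv'        -- made-up path
    WITH (CODEPAGE        = '1252',     -- treat the file as Windows-1252/WinLatin 1
          FIELDTERMINATOR = ',',
          ROWTERMINATOR   = '\n',
          FIRSTROW        = 2);         -- skip the header line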

  • Further of note: it seems to have worked (somewhat), in that of the 2081 records in the .CSV, 1349 were imported. But I'm still missing about a third, so this isn't the whole solution... it looks like we're getting there, though.

    Thank you again for all your suggestions and I look forward to more, check back soon!

  • Sorry, I don't have much time today to help, but there's one thing... have you looked at the records that were successful and compared them to the ones that were not? Maybe there is something obviously different...
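
    For instance, if you land the raw rows in a staging table, even a quick anti-join will surface the ones that never made it across (table and column names here are hypothetical):

    -- Hypothetical names: raw staging copy vs. the partially imported table
    SELECT s.*
    FROM dbo.Clients_Staging AS s
    LEFT JOIN dbo.Clients AS c
           ON c.ClientName = s.ClientName
    WHERE c.ClientName IS NULL;   -- rows missing from the import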

  • BR.GRIFFITH (1/16/2014)


    Can I do anything differently in T-SQL/Bulk INSERT that will override this error and just leave a field or two blank and import the majority of the flat file without error?

    Yep. Take a look at the following URL. Set MAXERRORS to some ridiculously high number like 2 billion and set up the ERRORFILE parameter. You should read about it, but to summarize: bad rows will be sequestered in one file and the errors will be listed in another (although some of the errors are a bit cryptic), and any good rows will be successfully delivered to your target table (which absolutely should be a staging table and NOT the final table the data will live in, for more reasons than I have room to print in this post).

    http://technet.microsoft.com/en-us/library/ms188365(v=sql.90).aspx
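
    A minimal sketch of what that looks like (path and table names are placeholders, not gospel):

    BULK INSERT dbo.Clients_Staging         -- a staging table, NOT the final table
    FROM 'C:\import\clients.csv'            -- placeholder path
    WITH (FIELDTERMINATOR = ',',
          ROWTERMINATOR   = '\n',
          CODEPAGE        = '1252',
          MAXERRORS       = 2000000000,     -- keep going no matter how many rows fail
          ERRORFILE       = 'C:\import\clients.bad');  -- bad rows land here; a companion .Error.Txt lists the errors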

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Are you trying to load the data into a SQL table that already exists, with slightly different column definitions?

    I usually let the import wizard build a table based on the incoming data. It may create loose columns such as varchar(255). Then, after it's in SQL, I find it easier to do "cleanup" and copy the data into another table with more specific column definitions.
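
    In other words, land it loose, then tighten up on the way across. Something like this pattern, with hypothetical names throughout:

    -- Everything lands as wide varchar first, then gets cast/trimmed on the way to the real table.
    INSERT INTO dbo.Clients (ClientName, Phone, Comments)
    SELECT LTRIM(RTRIM(REPLACE(ClientName, '"', ''))),  -- strip stray text qualifiers
           LTRIM(RTRIM(Phone)),
           LEFT(Comments, 200)                          -- enforce the real column width
    FROM dbo.Clients_Import                             -- the wizard-built table of wide varchars
    WHERE ClientName IS NOT NULL;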

  • Thank you all for the replies. I seem to be having luck importing the .CSV into OpenOffice Calc, then importing into a new SQL table, then using a web application to pull from the new table and insert into the test table with the additional fields. It appears that some of the errors I was getting may just have had to do with the text standard this legacy system uses when it exports the data. As of now I have a healthy amount of data in my testbed SQL instance, so I think I'm good to go and can consider this resolved. Thank you all again for your help and suggestions!
