Row By Row insert

  • Hi guys,

    I have had this issue for years and have never been able to resolve it in a proper way. And I never asked you before, even though I should have...

    Here is my problem:

    I need to insert into 100+ tables values derived from those same tables, but generating new unique primary keys.

    I generated those 100+ scripts to validate them and insert the data into a copy of the production database.

    Basically I have tables like:

    keys = entity_key, sub_entity_key, object_key

    + fields...

    and I need to insert the same fields but for other entities and sub-entities, keeping the same object_key,

    so this will look like:

    insert into dbo.MyMasterDataTable

    select (process to build new entity_key), (process to build new sub_entity_key), same object_key, same fields
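    For illustration, a minimal version of one such script might look like this (the table, the field names, and the +1000 offsets standing in for the real key-building processes are all hypothetical):

    DECLARE @SourceEntityKey int = 42; -- hypothetical source entity

    INSERT INTO dbo.MyMasterDataTable (entity_key, sub_entity_key, object_key, field1, field2)
    SELECT src.entity_key + 1000,     -- stands in for the process building the new entity_key
           src.sub_entity_key + 1000, -- stands in for the process building the new sub_entity_key
           src.object_key,            -- same object_key
           src.field1, src.field2     -- same fields
    FROM dbo.MyMasterDataTable AS src
    WHERE src.entity_key = @SourceEntityKey;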

    Now we have triggers on those tables, and some of them are designed to accept multiple inserted rows while some are not. If it were up to me, all of them would allow multi-row inserts, but I have to deal with what I have.

    Do you know a way to execute those insert statements row by row in order to fire each trigger?

    I know that SSIS can do such things. I set up one case (control flow + data flow with OLE DB Source and OLE DB Destination + mapping of the table + manually changing the FastLoadOptions property to add FIRE_TRIGGERS), but I have 100+ scripts...

    Transforming each "insert ... select ..." into a cursor would also take way too much time.

    So any ideas or tips will be greatly appreciated.

    Thank you for your help and expertise

    Dirk

  • If you want a row-based operation, you need to use a cursor, and obviously it will be slower than a set-based operation. You can either change the triggers as you mentioned, or disable the triggers completely and move the logic of finding the new ids into the procedures where you insert the data.
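    For example, a minimal sketch of such a cursor around one table (the column names and the +1000 key expressions are placeholders for the real processes):

    DECLARE @entity_key int, @sub_entity_key int, @object_key int, @field1 varchar(50);

    -- STATIC snapshot, so the rows we insert do not feed back into the cursor
    DECLARE row_cur CURSOR LOCAL STATIC FOR
        SELECT src.entity_key + 1000,     -- placeholder for the new entity_key process
               src.sub_entity_key + 1000, -- placeholder for the new sub_entity_key process
               src.object_key,
               src.field1
        FROM dbo.MyMasterDataTable AS src;

    OPEN row_cur;
    FETCH NEXT FROM row_cur INTO @entity_key, @sub_entity_key, @object_key, @field1;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- single-row INSERT, so even a trigger written for one row at a time behaves correctly
        INSERT INTO dbo.MyMasterDataTable (entity_key, sub_entity_key, object_key, field1)
        VALUES (@entity_key, @sub_entity_key, @object_key, @field1);

        FETCH NEXT FROM row_cur INTO @entity_key, @sub_entity_key, @object_key, @field1;
    END;

    CLOSE row_cur;
    DEALLOCATE row_cur;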


  • Thank you for your help.

    I'll review the cursor option on specific tables.

  • I actually generated a database to simulate 5 years of production (about 150 GB for the data file).

    I started with only one packet of data (23 items).

    I first generated the amount of data corresponding to 1 day in temp tables (Tmp_MyTable, 99,600 items), and then multiplied that amount for each day through a while loop, with the kind of algorithm described below.

    Here is a short summary of the approach I use (a rough T-SQL sketch follows the loop):

    - add all source identifiers to a table variable (@Srce)

    - generate all needed dates (no weekends, no holidays - like July 4 for the US 😉 )

    while ( my dates table is not empty )

    - move the top n dates into a stage table (@Stage)

    - delete these dates, so the while condition will eventually become false (after some loops, of course)

    - in a temp relational table (@StageRel), cross join @Srce and @Stage to pair each source identifier with a newly generated id

    - insert into the destination tables (with the new identifier generated above) from Tmp_MyTable inner join @StageRel on the source identifier

    - (and why not some computed values based on the "new id")

    - delete any temp data as needed

    end while
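    In T-SQL, a stripped-down sketch of that loop could look like this (all the names are invented, and the new-id generation is deliberately simplistic):

    DECLARE @Srce     TABLE (src_id int PRIMARY KEY);
    DECLARE @Dates    TABLE (dt date PRIMARY KEY);
    DECLARE @Stage    TABLE (dt date PRIMARY KEY);
    DECLARE @StageRel TABLE (src_id int, dt date, new_id int);
    DECLARE @NextId   int = 1;

    -- ... fill @Srce with the source identifiers and @Dates with the working days ...

    WHILE EXISTS (SELECT 1 FROM @Dates)
    BEGIN
        DELETE @Stage;
        DELETE @StageRel;

        -- take the next batch of dates and remove them from the work list,
        -- so the while condition eventually becomes false
        DELETE TOP (5) FROM @Dates
        OUTPUT deleted.dt INTO @Stage (dt);

        -- cross join sources and staged dates, generating a new id per pair
        INSERT INTO @StageRel (src_id, dt, new_id)
        SELECT s.src_id, d.dt,
               @NextId - 1 + ROW_NUMBER() OVER (ORDER BY s.src_id, d.dt)
        FROM @Srce AS s
        CROSS JOIN @Stage AS d;

        SET @NextId += @@ROWCOUNT;

        -- multiply the one-day data once per staged day
        INSERT INTO dbo.MyDestinationTable (new_id, dt, field1)
        SELECT r.new_id, r.dt, t.field1
        FROM Tmp_MyTable AS t
        INNER JOIN @StageRel AS r ON r.src_id = t.src_id;
    END;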

    edit: ahem, sorry, my post is a little bit off topic. :blush:

    I noticed the need to generate new ids/keys, but not the existence of triggers (trigger is bad, trigger is evil, trigger is slow 😉 ) or the SSIS part (which I don't know well).

    If you disable your triggers, you can use the approach I explained above to avoid cursors ( cursor is bad, cur... mmm? okay, I'll shut up 😉 ), but it leaves you the work of re-writing the logic contained in the triggers.

    Could you list all the tables to process in a table, and then, through a while loop, apply the same query to each of them?

    I thought about using synonyms: you set the synonym, you apply the 'generic' query to it, and you loop over the tables to process. Something like the sketch below.
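    A rough sketch of that idea (the table list, the synonym name, and the 'generic' statement are all placeholders):

    DECLARE @tables TABLE (id int IDENTITY(1,1) PRIMARY KEY, table_name sysname);
    INSERT INTO @tables (table_name)
    VALUES (N'dbo.MyMasterDataTable'), (N'dbo.MyOtherTable'); -- ... the 100+ tables

    DECLARE @i int = 1, @max int, @name sysname, @sql nvarchar(max);
    SELECT @max = MAX(id) FROM @tables;

    WHILE @i <= @max
    BEGIN
        SELECT @name = table_name FROM @tables WHERE id = @i;

        -- repoint the synonym at the current table
        IF OBJECT_ID(N'dbo.CurrentTable', N'SN') IS NOT NULL
            DROP SYNONYM dbo.CurrentTable;
        SET @sql = N'CREATE SYNONYM dbo.CurrentTable FOR ' + @name;
        EXEC sys.sp_executesql @sql;

        -- run the 'generic' statement against the synonym; dynamic SQL, so it is
        -- compiled against the current target each time
        SET @sql = N'INSERT INTO dbo.CurrentTable (entity_key, sub_entity_key, object_key)
                     SELECT entity_key + 1000, sub_entity_key + 1000, object_key
                     FROM dbo.CurrentTable;';
        EXEC sys.sp_executesql @sql;

        SET @i += 1;
    END;

    The catch is that the column lists and key-building rules differ from table to table, so in practice the 'generic' statement would itself have to be generated per table.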


