Check Source for Data Prior to Truncation

  • Is there a way to check the source to ensure data is available before truncating the data in the destination table?

  • What's your source?

    If it's a table, you could do a SELECT (Execute SQL Task); if it's a file, you could check the file size (Script Task).
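    For the table case, here is a minimal sketch of the check; the source table name and the SSIS variable are hypothetical, so adjust them to your environment. Run it in an Execute SQL Task and map the result to a package variable:

        -- Returns 1 when the source has rows, 0 otherwise.
        -- Map SourceHasData to an SSIS variable and put an expression
        -- such as @[User::SourceHasData] == 1 on the precedence
        -- constraint ahead of the truncate step.
        SELECT CASE WHEN EXISTS (SELECT 1 FROM dbo.SourceTable)
                    THEN 1 ELSE 0 END AS SourceHasData;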

    Luis C.
  • Thanks for the quick response. Unfortunately, my source is an SAP BW Open Hub. I was hoping there was a way to run the first part of my process that pulls the data from BW, check for data (I already capture a record count), and only then truncate the data in the SQL destination table.

  • There are multiple ways to implement this:

    1. First, a simple record count check.

    2. Implement a benchmarking check between the target and staging tables, such as the % difference in record counts between the two. The threshold should be configurable in your package configuration or in a database table so you can change it whenever you want; see the sketch after this list.

    3. Move the target table's data to an archive table before truncating it. You don't need to keep all historical data in the archive table; keeping one week's worth is enough.
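    A minimal sketch of option 2, assuming hypothetical stg.MyTable and dbo.MyTable tables and a dbo.EtlConfig table that holds the threshold:

        DECLARE @StagingCount INT = (SELECT COUNT(*) FROM stg.MyTable);
        DECLARE @TargetCount  INT = (SELECT COUNT(*) FROM dbo.MyTable);

        -- Threshold kept in a config table so it can be changed
        -- without redeploying the package.
        DECLARE @ThresholdPct DECIMAL(5, 2) =
            (SELECT MaxPctDifference
             FROM dbo.EtlConfig
             WHERE PackageName = 'MyLoad');

        -- Proceed only when the staging count is within the allowed
        -- % difference of the current target count.
        SELECT CASE
                   WHEN @TargetCount = 0 THEN 1  -- first load: nothing to compare
                   WHEN ABS(@StagingCount - @TargetCount) * 100.0 / @TargetCount
                        <= @ThresholdPct THEN 1
                   ELSE 0
               END AS SafeToLoad;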

  • I solved my issue. I created an Execute SQL Task that generates a temp table holding the existing data. In the event the Data Flow process fails, another SQL Task reloads my table with the previous data. The temp table is then dropped.
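    For reference, a rough T-SQL sketch of that pattern; table names are hypothetical, and a global temp table is used so it stays visible across the package's tasks (alternatively, set RetainSameConnection=True on the connection manager and use a local temp table):

        -- Before the truncate: keep a copy of the current data.
        SELECT * INTO ##MyTable_Backup FROM dbo.MyTable;
        TRUNCATE TABLE dbo.MyTable;

        -- Wired to the Data Flow's failure precedence constraint: restore.
        INSERT INTO dbo.MyTable SELECT * FROM ##MyTable_Backup;

        -- Final step on either path: clean up.
        DROP TABLE ##MyTable_Backup;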

  • shannon.proctor (2/27/2014)


    I solved my issue. I created an Execute SQL Task that generates a temp table holding the existing data. In the event the Data Flow process fails, another SQL Task reloads my table with the previous data. The temp table is then dropped.

    Although this might work, it sounds like you are doing some unnecessary work.

    From your description of the problem, it sounds like you want to reload (truncate/insert?) a table only if the data source extract is successful.

    As you are able to truncate your table, I am guessing that there are no foreign keys involved. So one solution is to have two tables, each with exactly the same structure (columns, keys, etc.): t1 and t1_x (or whatever).

    t1 is your main table.

    Your import process does the following:

    1) Truncate t1_x

    2) Load data into t1_x

    3) If the load is successful:

       a) Rename t1 to t1_xx

       b) Rename t1_x to t1

       c) Rename t1_xx to t1_x

    And that's it. You're ready for the next load without having to do a reload in the event of failure.
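    A minimal T-SQL sketch of the rotation in step 3 (note that sp_rename takes the new name unqualified):

        -- Runs only when the load into t1_x succeeded.
        BEGIN TRANSACTION;
            EXEC sp_rename 'dbo.t1',    't1_xx';
            EXEC sp_rename 'dbo.t1_x',  't1';
            EXEC sp_rename 'dbo.t1_xx', 't1_x';
        COMMIT;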

    A different twist on this would be to have users access the table indirectly via a VIEW and to ALTER the view definition depending on which table holds the best data.
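    A sketch of that view variant, with a hypothetical view name; readers always query the view, and a post-load step flips it to whichever table holds the good data:

        -- Assumes dbo.vMyData already exists; point readers at the
        -- freshly loaded table (t1_x in this cycle).
        ALTER VIEW dbo.vMyData
        AS
        SELECT * FROM dbo.t1_x;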


  • Use a set of staging tables, either in the same database (under a different schema for ease of identification and naming; you can use the same table names) or in a different database. If the ETL from Source to Staging is successful, truncate the destination table; if not, don't.
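    A sketch of that layout, with hypothetical names and placeholder columns:

        -- One-time setup: a staging schema holding a table with the
        -- same name and structure as the destination.
        CREATE SCHEMA stg;
        GO
        CREATE TABLE stg.MyTable (Id INT NOT NULL, SomeValue VARCHAR(50));
        GO

        -- Run only after the Source -> Staging load succeeds:
        TRUNCATE TABLE dbo.MyTable;
        INSERT INTO dbo.MyTable (Id, SomeValue)
        SELECT Id, SomeValue FROM stg.MyTable;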

    I think that's basically what Phil is saying in not so many words.

    Regards

    Lempster
