load data and foreign keys

  • Look, I wasn't there, and likely won't ever know what you were really up against, so I won't argue this point with you anymore. I am well aware of multiple ways to delete large numbers of rows from large tables, but suffice it to say that 30K is not a large number of rows and 5MM rows is not a large table. You had something else going on that you were not aware of.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • Z1024 (3/31/2012)


    Whatever

    Finally, we can get back to the OP's question, if you have not scared them off.

    On a side note: there is a table in our database approaching 900 million rows, and it's our history table.


    For faster help in answering any problems, please read How to post data/code on a forum to get the best help - Jeff Moden for the best way to ask your question.

    For performance issues, see how we like them posted here: How to Post Performance Problems - Gail Shaw

    Need to split some strings? Jeff Moden's DelimitedSplit8K
    Jeff Moden's Cross tab and Pivots Part 1
    Jeff Moden's Cross tab and Pivots Part 2

  • Z1024 (3/31/2012)


    As you can see, this is a very common problem on at least SQL Server, Oracle and MySQL and the usually recommended solution worked fine.

    mmmhh... not really, are you aware of the LIMIT clause on MySQL? 😀

    Either way, as other posters have already pointed out, that is not a large table and you are not deleting such a large number of rows.

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
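
    The LIMIT clause mentioned above supports exactly this kind of chunked delete on MySQL. A minimal sketch, assuming a hypothetical history table with a created_at column (names are illustrative, not from the thread):

    ```sql
    -- Delete old rows in chunks of 1,000 so each statement stays a small
    -- transaction. Re-run the statement until it reports 0 rows affected.
    DELETE FROM history
    WHERE created_at < '2011-01-01'
    ORDER BY created_at
    LIMIT 1000;
    ```

    Under autocommit, each execution commits on its own, so locks and log activity stay short-lived between chunks.
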
  • PaulB-TheOneAndOnly (3/31/2012)


    Z1024 (3/31/2012)


    As you can see, this is a very common problem on at least SQL Server, Oracle and MySQL and the usually recommended solution worked fine.

    mmmhh... not really, are you aware of the LIMIT clause on MySQL? 😀

    And your point is?

    PaulB-TheOneAndOnly (3/31/2012)


    Either way, as other posters have already pointed out, that is not a large table and you are not deleting such a large number of rows.

    You can't define "large table" without taking into account the hardware specs of the server. For that particular server it was a large table alright.

    If it were an 8-core, 48GB RAM server running an x64 OS and a 64-bit SQL Server, where the entire DB would fit into RAM several times over - of course it wouldn't be a large table for that config.

  • opc.three (3/30/2012)


    Z1024 (3/30/2012)


    opc.three (3/30/2012)


    Z1024 (3/29/2012)


    For tables with millions of rows you should really be careful with indexes; it can take literally hours (on commodity hardware) to delete, say, 10k rows from a heavily indexed table. In that case I'd go as far as dropping the indexes, doing your deletes/updates, and recreating the indexes. It would still take less time, and you'd get your indexes "defragmented" or rebuilt as a "free" bonus.

    "Literally hours" sounds "literally ridiculous". The hours part of it had to be attributed to SQL Server finding the rows in the first place, i.e. the JOIN or WHERE clause on the DELETE query was what was taking hours, not the actual deletion of the rows.

    Sorry, but I'm talking from my actual experience. On a client's DB, deleting about 30k records from a ~5mil-record history table was taking hours. Maybe more, because they never let it finish. And no, joins had nothing to do with these "literally ridiculous" deletion times. This is a well-known problem affecting not only SQL Server but probably any other RDBMS as well.

    Basically, when SQL Server deletes records from a table it writes them to the transaction log, and this slows down the deletion process dramatically. Indexes are good for selects and joins, but in this case they are more of a burden.

    The solution is to delete records in smaller batches, about 1k rows each, and commit after each delete. Dropping indexes (that were not used in the delete join) further improved the deletion times. In the end it took about 10 or 20 min if memory serves me well.

    Granted, the hardware was very old, a 32bit OS with only 2 or 3GB of RAM, but still, this is a common problem, so even better hardware would choke on a similar task; only the numbers would be bigger - say a 100mil-row table with 500k rows to be deleted.

    Sorry, but you must have been running into something else. The mechanics of deleting 10K rows will not take hours, seriously, no matter how lowly the hardware. You were either fighting against a bad WHERE-clause on your delete, major fragmentation, a trigger, or something else you were not aware of.

    I went through the same thing. Turned out to be a nasty combination of a shedload of indexes and replication. Took nearly an hour for a 10k row delete.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
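
    The batched approach described in the quoted posts - delete in small chunks, commit each one, optionally disabling indexes the delete doesn't use - can be sketched in T-SQL along these lines. Table, column, and index names here are hypothetical, for illustration only; as always, test on a non-production copy first:

    ```sql
    -- Hypothetical sketch: delete in chunks of 1,000 rows so each batch
    -- is its own small transaction and log space can be reused between
    -- batches (SIMPLE recovery truncates on checkpoint; FULL recovery
    -- needs frequent log backups to get the same effect).

    -- Optional: disable nonclustered indexes the WHERE clause doesn't
    -- use; REBUILD afterwards both re-enables and defragments them.
    -- ALTER INDEX IX_History_Customer ON dbo.History DISABLE;

    DECLARE @rows INT = 1;

    WHILE @rows > 0
    BEGIN
        DELETE TOP (1000)
        FROM dbo.History
        WHERE CreatedDate < '20110101';

        SET @rows = @@ROWCOUNT;  -- 0 when no qualifying rows remain
    END;

    -- ALTER INDEX IX_History_Customer ON dbo.History REBUILD;
    ```

    Each iteration commits on its own (there is no outer transaction), which is what keeps log pressure low compared with one giant 30k-row delete.
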

  • Z1024 (4/1/2012)


    PaulB-TheOneAndOnly (3/31/2012)


    Z1024 (3/31/2012)


    As you can see, this is a very common problem on at least SQL Server, Oracle and MySQL and the usually recommended solution worked fine.

    mmmhh... not really, are you aware of the LIMIT clause on MySQL? 😀

    And your point is?

    PaulB-TheOneAndOnly (3/31/2012)


    Either way, as other posters have already pointed out, that is not a large table and you are not deleting such a large number of rows.

    You can't define "large table" without taking into account the hardware specs of the server. For that particular server it was a large table alright.

    If it were an 8-core, 48GB RAM server running an x64 OS and a 64-bit SQL Server, where the entire DB would fit into RAM several times over - of course it wouldn't be a large table for that config.

    I see; you know it all, so there is no reason to bother you. Have a good life.

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
