Problem with log file

  • Hi people from all corners of the world!

    I have a small problem that someone might have an answer to...

    Well, I have to do a big (BIG!!!) delete (about 1500 billion rows or something) with a calculation attached to it, and I would like to do a "set recovery off" in SQL 2005 so I don't get a log file that kills my hard drive. Any suggestions would be very appreciated.

    //Jonas

  • I would offer two potential solutions:

    1) backup database

    set database to simple recovery

    do your deletes

    reset database back to full recovery

    2) if the number of rows to remain in the database AFTER the delete is relatively small,

    select rows to keep into a new table

    drop old table

    rename new table

    Hope this helps.

    Take care,

    Bert

    "Speculations? I know nothing about speculations. I'm resting on certainties. I know that my Redeemer lives, and because He lives, I shall live also." - Michael Faraday
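    For what it's worth, option 1 above might look roughly like this in T-SQL on SQL Server 2005 (the database name and backup paths are placeholders, not from the thread):

    ```sql
    -- Take a full backup first (path is a placeholder)
    BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_before_delete.bak';

    -- Switch to simple recovery so the log can be reused at each checkpoint
    ALTER DATABASE MyDb SET RECOVERY SIMPLE;

    -- ... run the deletes here ...

    -- Switch back to full recovery when done
    ALTER DATABASE MyDb SET RECOVERY FULL;

    -- A full backup is needed afterwards to restart the log backup chain
    BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_after_delete.bak';
    ```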

  • How many rows are in the table now? How many will still be there after the delete? Are there any foreign keys on this table or referencing this table?

    If the number of rows remaining is much less than the number deleting, and there are no foreign keys, or they can be easily removed and replaced without loss of integrity, then your best bet will be to copy out the rows you want to keep to a holding table, truncate the table and then replace the rows that you copied out.

    Beware: if you have an identity column, this will cause havoc with the identity seed.
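    A rough sketch of that copy-out-and-truncate approach (table, column, and predicate are made up for illustration; SET IDENTITY_INSERT addresses the identity-seed issue just mentioned):

    ```sql
    -- Copy the rows to keep into a holding table
    SELECT * INTO dbo.BigTable_Keep
    FROM dbo.BigTable
    WHERE KeepFlag = 1;  -- placeholder predicate for the rows to keep

    -- Drop or disable any foreign keys referencing dbo.BigTable first;
    -- TRUNCATE will fail while they exist
    TRUNCATE TABLE dbo.BigTable;

    -- Put the kept rows back, preserving existing identity values
    SET IDENTITY_INSERT dbo.BigTable ON;
    INSERT INTO dbo.BigTable (Id, Col1, Col2)  -- explicit column list required
    SELECT Id, Col1, Col2 FROM dbo.BigTable_Keep;
    SET IDENTITY_INSERT dbo.BigTable OFF;

    DROP TABLE dbo.BigTable_Keep;
    -- Recreate the foreign keys here
    ```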

    If you have to use delete, you'll probably want to do it in batches (see my post in another thread here) with the db in simple recovery mode and either a delay or a checkpoint in between the deletes (to allow truncation of the log).

    Make sure you do a full db backup afterwards (and before, if possible).

    Even with batching, that volume of deletes will probably take a very long time (many hours).
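    On SQL Server 2005 a batched delete can use DELETE TOP (n); a minimal sketch, with made-up table name and delete condition:

    ```sql
    DECLARE @rows INT;
    SET @rows = 1;

    WHILE @rows > 0
    BEGIN
        -- Delete in small chunks to keep each transaction (and log use) small
        DELETE TOP (10000) FROM dbo.BigTable
        WHERE SomeDate < '20050101';   -- placeholder delete condition

        SET @rows = @@ROWCOUNT;

        -- In simple recovery, a checkpoint lets the log space be reused
        CHECKPOINT;

        -- Optional pause to reduce blocking of other activity
        WAITFOR DELAY '00:00:01';
    END
    ```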

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks all for the quick response. Now to answer some questions: unfortunately there will be plenty of rows left in the table after the delete :doze: and there are foreign keys involved. But I will try your solutions and see what happens.

    Thanks for the help, and I'll keep you updated on what worked 😉

    //Jonas

    May I suggest, once you've got this mess sorted out, considering table partitioning for such a large table? That way you'll have several smaller tables that appear to be a single table. If you get the partition scheme correct (i.e. the column you partition on is part of most WHERE clauses), then your queries will run quite a bit faster, since the optimiser can eliminate entire partitions from consideration.

    Also deleting an entire partition is a meta-data operation, and is very fast.
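    For reference, setting up a partition scheme on a date column in SQL Server 2005 might look like this (all names and boundary values are illustrative, not from the thread):

    ```sql
    -- One partition per year; RANGE RIGHT puts each boundary value
    -- into the partition on its right
    CREATE PARTITION FUNCTION pfByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('20050101', '20060101', '20070101');

    -- Map all partitions to the same filegroup for simplicity
    CREATE PARTITION SCHEME psByYear
    AS PARTITION pfByYear ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Posts (
        PostDate datetime NOT NULL,
        Body     varchar(8000)
    ) ON psByYear (PostDate);

    -- To remove an old partition's data in 2005: SWITCH the partition out
    -- to an empty staging table with the same structure, then drop the
    -- staging table -- a fast, metadata-only operation
    ```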

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • jonas.jillerholm (9/27/2007)


    Thanks all for the quick response. Now to answer some questions: unfortunately there will be plenty of rows left in the table after the delete :doze: and there are foreign keys involved. But I will try your solutions and see what happens.

    Thanks for the help, and I'll keep you updated on what worked 😉

    //Jonas

    heheh ... I would really like to know how many more rows you have left ... 😀


    * Noel

  • Well, it was only 1,370,000-ish rows to keep and 31 billion rows to get rid of, so making a temp table worked pretty well...:D

    // Jonas

  • Great, thanks for the feedback!


    * Noel
