Is there a way to make DELETEs faster?

  • Hi there,

    Is there a faster way to delete 1 million rows? The stored proc I'm currently using is just a simple DELETE FROM statement, but deleting a million rows takes a lot of time. I'm considering other options to make this faster. One suggestion I got was to use indexing. What are your thoughts? And do you have other suggestions?

    Thanks so much!

    Rafferty

  • If you are not using a WHERE clause, you could use TRUNCATE TABLE instead; see BOL (Books Online).

    Otherwise, try performing the delete in many small transactional chunks to avoid blocking:

    SET ROWCOUNT 1000

    DECLARE @Nrows int
    SET @Nrows = 1

    WHILE @Nrows > 0
    BEGIN
        BEGIN TRAN MyDeletes

        DELETE TableName WHERE <Your Condition Here>

        SET @Nrows = @@ROWCOUNT

        COMMIT TRAN MyDeletes
    END

    SET ROWCOUNT 0  -- reset afterwards, or it limits every later statement in the session

    Of course, any index that supports your WHERE condition will help!
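    For what it's worth, SET ROWCOUNT is deprecated for data-modification statements on SQL Server 2005 and later; the same chunking pattern can be written with DELETE TOP (n). A sketch, using a hypothetical table name and condition:

    ```sql
    -- Chunked delete with DELETE TOP (n); the table name and WHERE
    -- condition are placeholders for your own.
    DECLARE @Nrows int
    SET @Nrows = 1

    WHILE @Nrows > 0
    BEGIN
        BEGIN TRAN

        DELETE TOP (1000)
        FROM dbo.BigTable
        WHERE CreatedDate < '20080101'

        SET @Nrows = @@ROWCOUNT

        COMMIT TRAN
    END
    ```

    Committing each chunk keeps individual transactions short, so other sessions can get in between them.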

    Johan

    Learn to play, play to learn!

    Don't drive faster than your guardian angel can fly ...
    but keeping both feet on the ground won't get you anywhere :w00t:


  • TRUNCATE TABLE dbo.YourTable

    However, the user needs to be in the db_ddladmin role (or higher), and you can't GRANT permission for TRUNCATE.

    The small chunks example works well and is good for mass updates as well.

  • wow, now that's a stored proc that I haven't seen before. I also got this suggestion:

    "rename the old table, create a new table with same schema and indexes, then drop the old table"

    so which do you think is better?

    By the way, I'll have a condition on which rows to delete, so simply truncating the table won't work.

  • Not sure what will happen to your existing stored procedures if you follow the renaming route.  I suspect that every stored procedure using your table will recompile so there could be wider performance issues.

    If you have views on your table then you may have to sp_refreshview.

    I could be wrong, but that would be my take on the rename and drop idea.

  • Hm... then I guess my best option is to do what alzdba suggested, because TRUNCATE does not support conditions, correct?

  • TRUNCATE clears the whole table and immediately frees the pages used by the table.

    DELETE is a fully logged operation and can have conditions; the pages used by the table are freed later on by background operations.

    The rename scenario would consume twice the needed space, and stored-procedure recompiles may happen anyway once statistics are gathered again.

    IMO, sp_updatestats and DBCC UPDATEUSAGE (0) WITH COUNT_ROWS are needed from time to time, because the actual statistics may not be what you expect them to be.
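    For reference, those maintenance commands look like this (the 0 passed to DBCC UPDATEUSAGE means the current database):

    ```sql
    -- Refresh out-of-date statistics for all tables in the current database.
    EXEC sp_updatestats

    -- Correct inaccurate page and row counts in the catalog views;
    -- WITH COUNT_ROWS also recounts the rows, which can be slow on big tables.
    DBCC UPDATEUSAGE (0) WITH COUNT_ROWS
    ```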


    Johan


  • Another thing to consider is WHEN you are doing the 1-million-row deletes. If you do it during the timeframe when users are in the system, it will slow down your entire system, and doing it during backups or maintenance could crash those jobs. I would shoot for something after 11 PM and before 3 AM, when the system is nice and quiet, and then BOOM, start the deletes...



    Good Hunting!

    AJ Ahrens


    webmaster@kritter.net

  • You're right, we should consider that. Also, the deletes in our database will only take place about once a month, so the effect on the users will be very minimal.

  • If this could occur during a window when there is no other activity, I would consider switching into 'Simple' (non-logged) mode, deleting, then switching back into logged mode. Depending upon the table size, dropping the indexes first and re-building afterward would help quite a bit.
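    A sketch of that recovery-model switch, with a placeholder database name. Be aware that switching recovery models breaks the transaction-log backup chain, so take a full backup after switching back:

    ```sql
    -- Placeholder database name; back up the database after
    -- switching back to FULL, since the log chain is broken.
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE

    -- ... run the large delete here ...

    ALTER DATABASE MyDatabase SET RECOVERY FULL
    ```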

  • I'm not sure that switching to 'Simple' mode would have the desired effect.

    All activity is logged; all SIMPLE mode does is truncate the log by removing checkpointed transactions. If anything, I would expect 'Simple' mode to be slower, because this truncation has to take place.

  • If you do go down the truncate table route, be aware that identity columns are re-seeded.

    Might there be some merit in dropping any indexes before the delete and rebuilding them afterwards? This would mean that the delete did not need to update the indexes as well, and the rebuilt indexes would be less fragmented... I think.
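    A sketch of the drop-and-rebuild idea, with hypothetical table and index names; you would normally keep the clustered index and any index the DELETE's WHERE clause relies on:

    ```sql
    -- Hypothetical names; DROP INDEX ... ON is SQL Server 2005+ syntax.
    DROP INDEX IX_BigTable_CreatedDate ON dbo.BigTable

    DELETE dbo.BigTable
    WHERE CreatedDate < '20080101'

    -- Rebuilt fresh, so the index comes back unfragmented.
    CREATE INDEX IX_BigTable_CreatedDate ON dbo.BigTable (CreatedDate)
    ```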


    Bill.

  • True; however, if done in a window of no activity, hopefully there are NO open transactions, and the log would be truncated and flushed rather quickly. I do this quite often, and the speed gain is very worthwhile with large deletion sets from very large databases.

  • When you say "no other activity", does that mean that no one is using the entire database? I don't think that will work, as I'm working with real-time data.

    Or does it mean that no one's using the table?

  • Depending on how much data will be left in your table, this may work for you (I did this once to delete 14 million rows and be left with 1 million, and it worked nicely): do an insert into a new table with the exact same structure as the existing one, inserting only the data you want to keep. Then truncate (do not drop and recreate) the old table and select the data back into it. The select is the only fully logged part, so it is slow, but it will be faster if you are selecting back fewer records than you would be deleting.
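    A sketch of that keep-and-reload approach with hypothetical names. Note that SELECT ... INTO copies column definitions but not indexes or constraints, and if the table has an identity column the reload also needs SET IDENTITY_INSERT:

    ```sql
    -- Hypothetical names; run in a quiet window, since the table is
    -- effectively empty between the TRUNCATE and the reload.
    SELECT *
    INTO dbo.BigTable_Keep
    FROM dbo.BigTable
    WHERE CreatedDate >= '20080101'   -- the rows you want to keep

    TRUNCATE TABLE dbo.BigTable       -- fast, minimally logged

    INSERT INTO dbo.BigTable
    SELECT * FROM dbo.BigTable_Keep   -- the only fully logged step

    DROP TABLE dbo.BigTable_Keep
    ```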

    If you're keeping lots of data, why not bcp out the data you want to keep, then truncate, then bcp back in?

    Good Luck
