Truncate Log Files

  • I am trying to run an UPDATE on a table that has millions of records. The update takes a long time, the transaction log grows until it fills the disk, and the update eventually fails. I can't complete the update because the log keeps growing while it runs.

    How do I run the update successfully while keeping my transaction log under control?

  • Sounds like you need to break down your update into smaller batches instead of one big update. Could you post your update query?
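
    Until then, the general pattern looks something like this. It's only a sketch: the table name (dbo.BigTable) and the Processed column are made up, since we haven't seen your actual query:

    ```sql
    -- Sketch: run the update in small batches instead of one huge
    -- transaction. Table and column names are placeholders.
    DECLARE @BatchSize int;
    SET @BatchSize = 10000;

    WHILE 1 = 1
    BEGIN
        UPDATE TOP (@BatchSize) dbo.BigTable
        SET    Processed = 1
        WHERE  Processed = 0;       -- only rows not yet handled

        IF @@ROWCOUNT = 0 BREAK;    -- stop when nothing is left to update
    END;
    ```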

    😎

  • 1. Add HDD space.

    2. Switch your database's recovery model to BULK_LOGGED for the duration of your UPDATE operation (if you are using large data types); the switch itself is one statement, shown after this list.

    3. Follow Lynn's suggestion
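
    The recovery-model switch from point 2 is a single statement; the database name below is a placeholder:

    ```sql
    -- Placeholder database name; remember to switch back to FULL
    -- (and take a backup) when the update is finished.
    ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;
    ```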

  • To add to Lynn's suggestion, batch your updates and do a TX log backup after each batch. This will free up space in the TX log to be re-used and keep the log size under control.
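
    A sketch of that pattern follows; the table, database name, and backup path are all placeholders:

    ```sql
    -- Sketch: batched update with a log backup after each batch so
    -- log space is reused instead of growing. All names and paths
    -- are placeholders for your environment.
    DECLARE @Batch int;
    SET @Batch = 10000;

    WHILE 1 = 1
    BEGIN
        UPDATE TOP (@Batch) dbo.BigTable
        SET    Processed = 1
        WHERE  Processed = 0;

        IF @@ROWCOUNT = 0 BREAK;

        -- Frees internal log space for reuse under the full recovery model.
        BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';
    END;
    ```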

  • That's what I forgot to put in that post on another thread, a BACKUP LOG!

    Yes, you need to do that as well. As I asked earlier, if you post your code, we can see how it may be rewritten to update in batches.

    😎

  • or .... if you're allowed to do so during your update ... put your db in simple recovery ...

    Keep in mind ..... full backup before, simple recovery, your stuff in batches, back to full recovery, and a backup after !!

    also make sure you can recognize the rows you've already handled....
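
    One way to do that, again just a sketch, assuming the table has an indexed integer key column Id:

    ```sql
    -- Sketch: walk the table in key ranges so a restarted run can
    -- pick up where it left off. Id is an assumed indexed key;
    -- table and column names are placeholders.
    DECLARE @LastId int, @MaxId int;
    SET @LastId = 0;
    SELECT @MaxId = MAX(Id) FROM dbo.BigTable;

    WHILE @LastId < @MaxId
    BEGIN
        UPDATE dbo.BigTable
        SET    SomeCol = 'new value'
        WHERE  Id >  @LastId
          AND  Id <= @LastId + 10000;

        SET @LastId = @LastId + 10000;
    END;
    ```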

    Johan

    Learn to play, play to learn !

    Dont drive faster than your guardian angel can fly ...
    but keeping both feet on the ground wont get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

    Need a bit of Powershell? How about this

    Who am I ? Sometimes this is me but most of the time this is me

  • Can't you change the RECOVERY MODEL to BULK_LOGGED and then do the updates? In the FULL recovery model, no matter how many t-log backups you take, they do not shrink the log files to the minimum.

    The procedure would be (a T-SQL sketch follows the list):

    1. Take a log backup

    2. Change the recovery model to BULK_LOGGED

    3. Do the millions of UPDATEs

    4. Take a log backup once you are done

    5. Switch back to the FULL recovery model
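
    A sketch of those steps; the database name and backup paths are placeholders:

    ```sql
    -- 1. Take a log backup (placeholder path).
    BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log1.trn';

    -- 2. Change the recovery model to BULK_LOGGED.
    ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

    -- 3. Do the millions of UPDATEs here (ideally batched, per the
    --    earlier posts).
    -- UPDATE dbo.BigTable SET ... ;

    -- 4. Take a log backup once you are done (placeholder path).
    BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log2.trn';

    -- 5. Switch back to the FULL recovery model.
    ALTER DATABASE MyDatabase SET RECOVERY FULL;
    ```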

    Hope this helps..

    Thanks!!!

    The_SQL_DBA
    MCTS

    "Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."

  • There are two issues here. The first is that the update is taking a long time; the second is that, as a result, the log file is filling up the disk and the update fails.

    If you break up the big update into smaller batches with transaction log backups between updates, you can shorten the time it takes to complete the updates and you keep the transaction log from constantly growing.
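
    If you want to see how much log space each batch consumes, there is a quick check you can run between batches:

    ```sql
    -- Reports log file size and percent used for every database.
    DBCC SQLPERF(LOGSPACE);
    ```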

    😎

  • The bulk-logged recovery model only minimally logs bulk operations (see BOL !)

    So classic UPDATE statements are still written in full to the log !!

    If your log did grow too big, you can shrink it afterward.

    Check recovery model in BOL !
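
    The shrink afterward looks something like this; the logical log file name is a placeholder, so look yours up first:

    ```sql
    -- Find the logical name of the log file (placeholder database name).
    USE MyDatabase;
    SELECT name FROM sys.database_files WHERE type_desc = 'LOG';

    -- Shrink the log to a target size in MB (placeholder name and size).
    DBCC SHRINKFILE (MyDatabase_log, 1024);
    ```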

    Johan

