Transaction Log Growth

  • Wondering if anyone out there can come up with a solution to this.

    I have a table with 38 columns and a row size of 3939 bytes, containing 118000 records. All the records need to be scanned by a 3rd party application, and any records marked for delete are then deleted. During these initial stages, testing is taking place in the DEV environment, and it requires deleting roughly 115000 of those records. The DB log was initially sized at 500MB as the vendor instructed, but this added-on process would fail when trying to delete the records, stating that the transaction log was full. Through trial and error I increased the log to a sufficient size to allow the delete.

    Question: knowing the row size and the number of records, is there any way to work out how much transaction log is required before starting the process?

  • Enable the auto growth property on the transaction log, and make sure you have sufficient space on the drive. Hope this will help you.
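
    For example, something like this would do it (a minimal sketch; MyDatabase and MyDatabase_log are placeholder names, so check sys.database_files for your actual logical log file name):

        -- Set the log file to grow in fixed 256MB increments rather than
        -- a percentage, which keeps growth events predictable.
        ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 256MB);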

    Nag
    ------------------------------------------------
    How to get the best help on a forum

    http://www.sqlservercentral.com/articles/Best+Practices/61537

  • Thanks for that. I did that to get the correct size of the log, but in general I don't leave the auto grow option on due to the other detrimental impacts it can cause. I prefer to size the log appropriately up front, taking account of the likely workload. I was wondering if there was a more scientific method to determine the size a transaction log would require.

  • If it is a production environment and you need disaster recovery, it is always better practice to keep the auto grow property enabled on the log file.

    If it is Dev or test, you can keep the database in simple recovery mode, do whatever operations you want, and then shrink the log file.
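
    A minimal sketch of that approach (the database and log file names are placeholders; DBCC SHRINKFILE takes a target size in MB):

        -- Switch to simple recovery so the log truncates on checkpoint.
        ALTER DATABASE DevDB SET RECOVERY SIMPLE;

        -- ... run the delete operation here ...

        -- Shrink the log file back down once the work is done.
        DBCC SHRINKFILE (DevDB_log, 500);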

    Nag
    ------------------------------------------------
    How to get the best help on a forum

    http://www.sqlservercentral.com/articles/Best+Practices/61537

  • You are better off working out a decent size for your transaction log; AutoGrowth should only really be used as a fail-safe for unexpected log growth.

    The size you set it to depends on a number of factors, such as the activity and size of the database. I usually start at 20% of the database size and then monitor the growth from there.
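
    To watch the log while the real workload runs, DBCC SQLPERF(LOGSPACE) reports the current size and percentage used for every log file:

        -- Log file size (MB) and percent used, per database.
        DBCC SQLPERF(LOGSPACE);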

    There is a lot of good information about the transaction log here:

    http://sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx

  • Imran Ashraf-452633 (6/14/2010)


    I have a table with 38 columns and a row size of 3939 bytes, containing 118000 records. All the records need to be scanned by a 3rd party application, and any records marked for delete are then deleted. During these initial stages, testing is taking place in the DEV environment, and it requires deleting roughly 115000 of those records. The DB log was initially sized at 500MB as the vendor instructed, but this added-on process would fail when trying to delete the records, stating that the transaction log was full. Through trial and error I increased the log to a sufficient size to allow the delete.

    Question: knowing the row size and the number of records, is there any way to work out how much transaction log is required before starting the process?

    Not with any great degree of accuracy, no. The amount of transaction log required depends on many factors, not just the size of the rows. As a very, very rough guess you could multiply the number of rows by the average row size, and then at least double it (logged operations reserve extra space to ensure that a rollback would succeed in almost all circumstances).

    If I had to provide some sort of estimate for the operation you describe, I would take 115,000 rows, and multiply by 3939 bytes to give 432MB. Doubling that gives 864MB, which I would round up to 1GB...but as I say it's all highly dubious really.
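
    For what it's worth, here is that back-of-the-envelope arithmetic as a query (the doubling factor is only the rough rollback-reserve guess above, not an exact figure):

        -- Rough log estimate: rows * row size, then doubled for
        -- rollback reservation. Ballpark only.
        DECLARE @rows bigint = 115000, @rowbytes bigint = 3939;
        SELECT (@rows * @rowbytes) / (1024 * 1024) AS base_mb,        -- ~432
               (@rows * @rowbytes * 2) / (1024 * 1024) AS doubled_mb; -- ~864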

    One thing you might consider, when deleting 115,000 of 118,000 rows, is to copy the rows you want to keep somewhere temporarily, truncate the table, and then copy the saved rows back. This technique will not work in all circumstances, but it's something to bear in mind.
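
    A sketch of that technique (the table and flag column names are invented for illustration; it assumes no foreign keys reference the table, since TRUNCATE would fail otherwise, and an identity column would also need SET IDENTITY_INSERT handling):

        -- Save the ~3,000 rows to keep, truncate (minimally logged),
        -- then reload them.
        SELECT * INTO #keep
        FROM dbo.BigTable
        WHERE MarkedForDelete = 0;

        TRUNCATE TABLE dbo.BigTable;

        INSERT INTO dbo.BigTable
        SELECT * FROM #keep;

        DROP TABLE #keep;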

    Paul

  • And on top of what Paul says, are there other transactions occurring at the same time? This will further add to the overall size of the log during this operation.

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software
