Transaction Log

  • Hello All,

    I am working with 20 tables, inserting into and deleting from the database.

    The size is almost 25-30 GB.

    Now I have checked the size of the database... it has grown from 30 GB to 60 GB.

    Of that, the data files are 29 GB and the log file is 31 GB.

    How can I decrease that log file size? There will be no future use of that file.

    Is it a good idea to remove the log file????

    I am using the simple recovery model.

    Thanks

    L

  • The log file must be large enough to fully contain the largest transactions between checkpoints. Once your system has reached its normal throughput, you can shrink the transaction log and remove the unused space.

    If it grows again back to the larger size then just live with it - that is the size it needs to be to handle the ebb and flow of your transaction load.

    If you are concerned about disk space you will need to look at limiting the size of your transactions or breaking them up into smaller batches in between checkpoints.
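    The batching idea above can be sketched as a simple loop. This is a hypothetical example: the table `dbo.StagingData` and the column `DateLoaded` are illustrative names, not anything from the original post.

    ```sql
    -- Delete in small batches so each transaction stays short and the
    -- log space can be reused at the next checkpoint (simple recovery).
    -- Table and column names below are placeholders.
    DECLARE @rows INT = 1;

    WHILE @rows > 0
    BEGIN
        DELETE TOP (10000)
        FROM dbo.StagingData
        WHERE DateLoaded < DATEADD(MONTH, -1, GETDATE());

        SET @rows = @@ROWCOUNT;  -- loop until nothing is left to delete
    END;
    ```

    Each iteration is its own implicit transaction, so no single delete ever needs 30 GB of log space.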

    The probability of survival is inversely proportional to the angle of arrival.

  • Thanks for replying,

    Right now I am using my database only for reporting purposes,

    and right now nobody is using this database.

    How can I shrink the database?

    Will it have any effect on my database?

    Thanks

    L

  • I thought you were referring to the transaction log. You can compact the database file also if it is larger than you need to hold the data currently in there.

    Look at: DBCC SHRINKFILE and DBCC SHRINKDATABASE in BOL
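    For reference, those commands look roughly like this; the database and file names are placeholders, and the target sizes are examples only.

    ```sql
    -- Shrink a single file (data or log) to a target size in MB.
    USE MyDatabase;
    DBCC SHRINKFILE (MyDatabase_Log, 1024);   -- target roughly 1 GB

    -- Or shrink the whole database, leaving 10% free space.
    DBCC SHRINKDATABASE (MyDatabase, 10);
    ```

    SHRINKFILE is usually the better choice for the log, since it targets one file instead of touching every file in the database.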

    The probability of survival is inversely proportional to the angle of arrival.

    You can use this DBCC command: DBCC SHRINKFILE(<TransactionLogName>, 1)

    or

    you can do the same with management studio.

    http://msdn.microsoft.com/en-us/library/ms190757.aspx

    Make sure the log file is not active while you perform the shrink.
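    Before shrinking, it can help to confirm how much of the log is actually in use and whether anything is holding it. A quick sketch (the database name is a placeholder):

    ```sql
    -- Show log size and percentage used for every database.
    DBCC SQLPERF (LOGSPACE);

    -- Check whether anything is preventing log truncation.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'MyDatabase';
    ```

    If `log_reuse_wait_desc` shows anything other than NOTHING, the shrink will not reclaim much until that wait clears.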

  • sturner (8/29/2011)


    I thought you were referring to the transaction log. You can compact the database file also if it is larger than you need to hold the data currently in there.

    Look at: DBCC SHRINKFILE and DBCC SHRINKDATABASE in BOL

    Are you referring to shrinking the data files here? What about the risk of fragmentation, and the file bloating back to a larger size?

  • patla4u (8/29/2011)


    Hello All,

    I am working with 20 tables, inserting into and deleting from the database.

    The size is almost 25-30 GB.

    Now I have checked the size of the database... it has grown from 30 GB to 60 GB.

    Of that, the data files are 29 GB and the log file is 31 GB.

    How can I decrease that log file size? There will be no future use of that file.

    Is it a good idea to remove the log file????

    I am using the simple recovery model.

    Thanks

    L

    When you say that the log file has no future use, remember that it is like a rough notepad for SQL Server, where it takes notes of every operation it performs. Without that record it could not roll back an operation after a human or mechanical interruption (such as a power failure or a killed transaction). It needs the log file to maintain consistency and the ACID properties.

    You say that your database is in the simple recovery model. The first question here is whether this is a production database. If the answer is yes, then you are not taking log backups, so you could potentially lose the data between two full backups when a situation demands a restore.

    You are using some fully logged commands, which leads to the growth of your log file. If the simple recovery model is not able to keep the size in check, use transaction log backups by putting the db in the full recovery model, and then decide how long to keep the transaction log backups as per your retention policy. If you are happy with just a full backup, you can set the retention policy to delete transaction log backups every hour.
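    The full-recovery approach described above would look something like this; the database name and backup paths are placeholders.

    ```sql
    -- Switch to full recovery so the log can be backed up.
    ALTER DATABASE MyDatabase SET RECOVERY FULL;

    -- A full backup starts the log backup chain...
    BACKUP DATABASE MyDatabase
        TO DISK = N'D:\Backups\MyDatabase_full.bak';

    -- ...then periodic log backups mark log space as reusable.
    BACKUP LOG MyDatabase
        TO DISK = N'D:\Backups\MyDatabase_log.trn';
    ```

    Scheduling the log backup frequently (say, every 15 minutes) is what keeps the log file from growing between full backups.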

    I hope this helps you!

  • Thanks For your Reply 🙂

    This is a production database, but I am deleting the previous month's records and inserting the recent month's records, then taking a copy of the recent month's records and moving it to another (copy of the same) database.

    So this delete and insert of records is only a one-time process.

    Also, I am taking two backups... the previous month's backup, then, once the data loads, the recent month's backup.

    Please let me know which backup is suitable for this database.

    Thanks

    L

  • For your problem I can offer the following solutions:

    Recovery Model: Simple

    1. Try to run the inserts/deletes on the tables in batches of 1000/10000 (whichever is applicable in your case) and make sure that you commit soon after each batch. As soon as SQL Server sees the commit, it truncates the t-log and frees up the t-log space for reuse.

    2. Use an SSIS import/export solution to push data into SQL Server, and please make sure that you enable the "commit in batches of" option.

    3. While performing bulk inserts/updates/deletes, change the recovery model to BULK_LOGGED and change it back to simple after completing your task.

    Regarding backups:

    Before you start the operation take a FULL backup; after your inserts/deletes you can opt to take a differential backup, which contains the changes made to the DB after the full backup.
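    The backup sequence suggested above might look like this; the database name and paths are placeholders.

    ```sql
    -- Full backup before the monthly load...
    BACKUP DATABASE MyDatabase
        TO DISK = N'D:\Backups\MyDatabase_full.bak';

    -- ...run the monthly delete/insert here...

    -- ...then a differential capturing only the changes since the full.
    BACKUP DATABASE MyDatabase
        TO DISK = N'D:\Backups\MyDatabase_diff.bak'
        WITH DIFFERENTIAL;
    ```

    A restore would then apply the full backup first, followed by the most recent differential.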

    SQLforU
    info@sqlforu.com
    For online training on SQL Server and Sybase, please contact
    contact@sqlforu.com
    www.sqlforu.com

  • SQLforU (9/2/2011)


    1. Try to run the inserts/deletes on the tables in batches of 1000/10000 (whichever is applicable in your case) and make sure that you commit soon after each batch. As soon as SQL Server sees the commit, it truncates the t-log and frees up the t-log space for reuse.

    Nope. Truncation happens on checkpoint, not on commit. See yesterday's headline article.

    3. While performing bulk inserts/updates/deletes, change the recovery model to BULK_LOGGED and change it back to simple after completing your task.

    Why would you want to force log retention and the need for log backups just for the time you're doing bulk loads? That makes the log grow more, not less (again, see yesterday's headline article).
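    The distinction being corrected here can be shown directly: in simple recovery the log is marked reusable at a checkpoint, and a checkpoint can also be issued manually.

    ```sql
    -- In SIMPLE recovery, committed log records become reusable at the
    -- next checkpoint, not at COMMIT. A checkpoint can be forced:
    CHECKPOINT;

    -- Afterwards, DBCC SQLPERF (LOGSPACE) should show a lower
    -- "Log Space Used (%)" for the database, space permitting.
    DBCC SQLPERF (LOGSPACE);
    ```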

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
