General question about transaction log backups.

  • Backing up the transaction log does not shrink the file. Now that you have a regular schedule in place, you can perform a one-time shrink down to an acceptable size. I would recommend shrinking the log file to no smaller than 10 or 20 percent of the size of the database.

    To shrink the file, issue the command:

    DBCC SHRINKFILE(logical_name_of_log_file, 100);

    To get the logical name of the log file:

    USE {your db};

    SELECT * FROM sys.sysfiles;

    The above will shrink the file down to 100MB. You want the log to be as large as it needs to be - so you probably want to monitor the sizes of the transaction log backups over the next week/month to find out the largest size (see the sketch below). Then size it a bit larger (I just round up to the nearest 100/500/1000MB size).

    Or, you could shrink it down to 100MB - and let it grow to the size it needs to be. Review the size and grow it as needed.
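
    If it helps with the monitoring, here is a sketch of how to pull those log backup sizes from the msdb history tables - the database name is a placeholder:

    -- Largest transaction log backups over the last 30 days.
    SELECT TOP (10)
        bs.database_name,
        bs.backup_start_date,
        bs.backup_size / 1048576.0 AS backup_size_mb
    FROM msdb.dbo.backupset AS bs
    WHERE bs.type = 'L'                                   -- 'L' = log backup
      AND bs.database_name = 'YourDatabase'               -- placeholder name
      AND bs.backup_start_date >= DATEADD(DAY, -30, GETDATE())
    ORDER BY bs.backup_size DESC;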

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Thanks Jeffrey, good to know that's the correct way of doing things.

    In terms of the database files themselves now, what are the best settings here given I'm now doing transaction log backups?

  • For those settings, I use a fixed size and not a percentage. What I try to do is pick a size that will allow for several weeks/months of activity. If I don't know what that is - then I'll start with a value that is large enough to make sense.

    For example, if the data file is 1000MB and I don't know how much the database is being used, I would set the growth increment to 100MB and monitor. You can use the disk usage report to see how much space is actually available in the data file and try to keep at least 20% free (a query version of that check is sketched below). Monitor for several months to get a general idea of growth and grow the data file to accommodate that growth. Then, schedule a regular review to validate your assumptions on how much growth there really is.
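
    If you'd rather check that free space with a query than the report, a quick sketch (run it in the database in question):

    -- Allocated vs. used space per file in the current database.
    -- size and SpaceUsed are in 8KB pages, hence / 128.0 to get MB.
    SELECT
        name,
        size / 128.0 AS size_mb,
        FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb,
        (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
    FROM sys.database_files;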

    The one thing to keep in mind is that you don't want to hit an autogrowth event - you just want to have it set in case you miss the above review. And, you want it set large enough so that, if it does fire, it grows the data file by a substantial amount rather than in many small increments.

    For the transaction log - the same principle applies, except you are not going to monitor for general growth but for max size. The transaction log will grow out to a maximum size that is needed to support the work and should stabilize. Use an autogrowth setting that is large enough to handle at least a full day's transactions - but again, monitor the usage. Once you know the max size - grow the log file a bit larger (round up to nearest 100/500/1000MB) and leave it alone.
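
    Once you have the numbers, setting the size and growth increment explicitly is straightforward - a sketch, with placeholder database and logical file names:

    -- Pre-size the log and use a fixed growth increment, so any
    -- autogrowth that does fire is a single, meaningful step.
    -- Note: SIZE can only be used to grow a file, not shrink it.
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_Log, SIZE = 2000MB, FILEGROWTH = 500MB);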

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Sorry for the late response, Jeffrey.

    Thanks for that, learnt a lot in this thread. Thanks again.

  • You are welcome - and thanks for the feedback. It is appreciated.

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Jeffrey Williams (3/26/2009)


    The one thing to keep in mind is that you don't want to hit an autogrowth event - you just want to have it set in case you miss the above review. And, you want it set large enough so that, if it does fire, it grows the data file by a substantial amount rather than in many small increments.

    Why don't you want to hit an autogrowth event?

  • ppcx (4/6/2009)


    Jeffrey Williams (3/26/2009)


    The one thing to keep in mind is that you don't want to hit an autogrowth event - you just want to have it set in case you miss the above review. And, you want it set large enough so that, if it does fire, it grows the data file by a substantial amount rather than in many small increments.

    Why don't you want to hit an autogrowth event?

    Because growing a data file is an expensive operation and can cause your application to experience performance issues while waiting for the file to grow.

    This can be somewhat mitigated on SQL Server 2005 and Windows Server 2003 (and greater) using instant file initialization. But, the operation is still expensive and uncontrolled growing of the data file can cause additional issues.
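
    If you want to confirm whether instant file initialization is actually enabled, much newer builds of SQL Server (2016 SP1 and later, as I recall) expose it through a DMV - a sketch, assuming a build that has the column:

    -- 'Y' means data files grow without being zero-initialized first;
    -- log files are always zero-initialized regardless.
    SELECT servicename, instant_file_initialization_enabled
    FROM sys.dm_server_services;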

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

  • You say the disk backups are copied to tape at 11PM every night.

    Are you 100% certain this happens every night, and 100% certain it always happens at 11PM?

    Or do you mean the process of copying disk backups starts at 11PM every night and takes as long as it takes? And, tape being tape, maybe one day in 10 the offload fails and nobody thinks to tell the DBA this has happened. And does the tape get sent offsite at 11PM, or does it maybe go sometime the next day?

    Have you ever tried getting a backup back from tape? Many sites have a process where someone is tasked to get backup files for a randomly chosen day and confirm that everything expected has been restored from tape to disk. The first few times this is done the results are often embarrassing, but then people start getting the tape backup to be reliable.

    A good rule is there should always be two routes to a recovery. If you need to restore to a given day and the tape gets stretched when it is read, it is good to have kept your disk backups for 2 days so you have another copy to try. Alternatively, each night two backup tapes should be made, which should be stored in different places and sent offsite at different times.
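
    On the restore-testing point, once a file is back on disk a quick readability check costs very little - a sketch, with a placeholder path:

    -- Confirms the backup file is complete and readable; restores nothing.
    RESTORE VERIFYONLY
    FROM DISK = N'D:\Restores\YourDatabase_Full.bak';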

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • Back in here with a problem :(.

    A database log file has grown to 26GB, and although the transaction log backups are running every 15 minutes, and differential backups every 3 hours, it isn't reducing in size.

    So each differential backup is becoming 24GB every 3 hrs and the server is running out of disk space fast.

    I've tried running the command you told me about earlier:

    DBCC SHRINKFILE(Database_Log, 100);

    but it still stays at 26GB.

    Any idea what the problem could be here, and what it could be caused by? How can I get a reduction in the file?

  • Any ideas anyone?

    This site tells you what transactions are not allowing log shrinkage:

    http://www.tek-tips.com/faqs.cfm?fid=345

    This site tells you how to forcefully shrink a log file:

    http://www.broad-lea.com/sql_server/sql_reduce_log_size.html
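
    For what it's worth, a minimal sketch of a typical log-shrink sequence - the database name, logical file name, and path are placeholders, and it assumes nothing else is holding the log records active:

    -- Back up the log so the inactive portion can be truncated...
    BACKUP LOG YourDatabase TO DISK = N'D:\Backups\YourDatabase_Log.trn';

    -- ...then shrink the physical file down to 100MB.
    USE YourDatabase;
    DBCC SHRINKFILE(YourDatabase_Log, 100);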



    Pradeep Singh

  • Thanks for that. I've attached what I see from the command you gave me. I see a heap of Status=2 rows.

    The thing is, there are no users using the database right now, so what could possibly be "active" and using all this space? It doesn't make any sense to me at all.

  • What's the value of log_reuse_wait_desc in sys.databases? That'll tell you what's preventing log truncation (i.e. keeping the log records active).
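
    Something along these lines - the database name is a placeholder:

    -- Shows why SQL Server cannot currently reuse (truncate) the log.
    SELECT name, log_reuse_wait, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'YourDatabase';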

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hi Gail, thanks for responding.

    The value is 6 for log_reuse_wait. What does that mean?

  • JamesNZ (6/20/2009)


    The value is 6 for log_reuse_wait. What does that mean?

    Check the column log_reuse_wait_desc. It gives the text description. log_reuse_wait just gives the numeric code.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
