What could be the reason for database log file growth?

  • Dear Experts,

    We have configured log shipping for a database that is 78 GB in size. On that database we have scheduled a

    DBCC DBREINDEX job every night. The backup schedule used to run every 2 hours.

    Now the issue is the backup size: after we changed the backup schedule to every 12 hours, the backup was 22 GB.

    What could be the reason? Will DBCC DBREINDEX increase the size of the database log file?

    Thanks,

    CH&HU.

  • Yes, it's an expensive operation. Switch the recovery model to BULK_LOGGED while you are rebuilding your indexes, so that the rebuild is minimally logged.

  • Thanks for the reply.

    But this operation is scheduled every night at 12 AM, when nobody is available to make the change manually. And the reindexing is not allowed to run during the morning hours.

    Can you please help me work out how to set the recovery model to BULK_LOGGED before indexing and back to FULL afterwards? Is there any way to change it before the scheduled run?

    Help me out...

  • Hiya,

    You can add steps to the job that change the recovery model to BULK_LOGGED before the rebuild, and change it back to FULL after the rebuild completes. You got it?

    Or

    you can also consider rebuilding your indexes in chunks, a few tables at a time.
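
    The job-step approach above could be sketched as follows. This is only an illustration: YourDB, the table name, and the backup paths are placeholders, not names from this thread, and you would take log backups around the switch as Books Online recommends.

    ```sql
    -- Step 1: back up the log, then switch to BULK_LOGGED
    BACKUP LOG YourDB TO DISK = N'X:\Backups\YourDB_pre_reindex.trn';
    ALTER DATABASE YourDB SET RECOVERY BULK_LOGGED;

    -- Step 2: rebuild the indexes (example for one table)
    ALTER INDEX ALL ON dbo.YourLargeTable REBUILD;

    -- Step 3: switch back to FULL and immediately back up the log,
    -- so the log chain is usable for point-in-time restores again
    ALTER DATABASE YourDB SET RECOVERY FULL;
    BACKUP LOG YourDB TO DISK = N'X:\Backups\YourDB_post_reindex.trn';
    ```

    Each step can be a separate step of the same SQL Agent job, so no one has to be present at 12 AM.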

  • Hi,

    Keep in mind that switching to the BULK_LOGGED recovery model will minimally log the changes in the transaction log file; however, your transaction log backups will still contain the full details of the operations, i.e. they will not shrink.

    Take a look at the following Books Online reference for details on how to switch from the FULL recovery model to BULK_LOGGED. Pay particular attention to the recommendations regarding transaction log backups both before and after you make the switch.

    http://msdn.microsoft.com/en-us/library/ms190203.aspx

    You should probably also consider implementing a custom script to manage your index optimisation, as you are most likely rebuilding indexes each night that do not need it.

    Take a look at the following excellent custom index optimisation script.

    Index Optimisation Script

    Cheers,

  • CH&HU (4/7/2009)


    Dear Experts,

    We have configured log shipping for a database that is 78 GB in size. On that database we have scheduled a

    DBCC DBREINDEX job every night. The backup schedule used to run every 2 hours.

    Schedule DBCC DBREINDEX every weekend, or whenever there isn't much activity.

    What is the necessity of reindexing every day?

    Do you check fragmentation before you reindex?

    To keep some control over transaction log file growth while reindexing, always do the reindexing within a transaction and then commit.

    Now the issue is the backup size: after we changed the backup schedule to every 12 hours, the backup was 22 GB.

    What could be the reason? Will DBCC DBREINDEX increase the size of the database log file?

    DBCC DBREINDEX is a logged operation and will increase the size of the database log file while reindexing happens.

    What is your transaction log backup schedule?

    Thanks,

    CH&HU.

  • The backup schedule is every 12 hrs, so it should run at 12 AM, hopefully.

    To keep queries fast we run DBREINDEX every night, so as to reduce application errors.

  • CH&HU (4/7/2009)


    The backup schedule is every 12 hrs, so it should run at 12 AM, hopefully.

    I really didn't get you when you say the backup schedule is every 12 hrs. Could you please explain this to me?

    To keep queries fast we run DBREINDEX every night, so as to reduce application errors.

    More insight into this, please? Are you checking your fragmentation levels before reindexing? And also, are you not using the ALTER INDEX...REORGANIZE statement to defragment your tables?

  • CH&HU (4/7/2009)


    The backup schedule is every 12 hrs, so it should run at 12 AM, hopefully.

    I am assuming this is the full backup schedule. What about the transaction log backups?

    To keep queries fast we run DBREINDEX every night, so as to reduce application errors.

    By that, do you mean you are trying to avoid the application errors caused by table data being unavailable during reindexing? If so, that is good thinking. But, as already mentioned above, do you check fragmentation before you reindex?

    Thanks!!

    The_SQL_DBA
    MCTS

    "Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."

  • Thanks for your replies.

    Sorry for the late, and somewhat disorganised, response.

    1. Actually I was talking about the log shipping backup schedule. We have scheduled LSBACKUP every 12 hours because of our requirements.

    2. I have not checked the fragmentation levels. Can anyone please suggest how to proceed?

    3. Because of application errors, we are reindexing each and every table every night. I don't know why daily reindexing was planned; I only came to know about it a couple of days ago, as I am new to these servers.

    Will this indexing reduce the errors and deadlock issues? Can anyone please explain?

    Thanks,

    CH&HU.

  • CH&HU (4/7/2009)


    2. I have not checked the fragmentation levels. Can anyone please suggest how to proceed ?

    You should look at avg_fragmentation_in_percent in the sys.dm_db_index_physical_stats DMV. As a rule of thumb, above 30% fragmentation you should rebuild, but it depends on your business model and how much fragmentation you can accept.

    Check out the forum for scripts; you should be able to find some.
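
    As a starting point, a fragmentation check against that DMV could look like this. The database name, the 30% threshold, and the 1000-page floor are illustrative choices, not values from this thread.

    ```sql
    -- List indexes in YourDB (placeholder name) worth rebuilding
    SELECT OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(N'YourDB'), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30   -- rule-of-thumb rebuild threshold
      AND ips.page_count > 1000                   -- ignore tiny indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```

    Running this before the nightly job would let you rebuild only the indexes that actually need it.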

    3. Because of application errors, we are reindexing each and every table every night. I don't know why daily reindexing was planned; I only came to know about it a couple of days ago, as I am new to these servers.

    Probably it's time for you to check out the scripts or documentation that you have.

    Will this indexing reduce the errors and deadlock issues? Can anyone please explain?

    Not really, but indexes are one of the major factors in server performance. Yes, they complement your search queries, but if you don't maintain them they become a huge drag on your server.

    What kind of server is it? How busy is your server, i.e. what are the usual operations that run on it?

    What is your index strategy?

  • Use the ALTER INDEX command; this is also minimally logged under the bulk-logged recovery model, and I think under simple recovery too.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

    What kind of server is it? How busy is your server, i.e. what are the usual operations that run on it?

    What is your index strategy?

    ---------------------------------------------------------------------------------------------

    We are on SQL Server 2005 / Windows Server 2003, 64-bit. Operations in the sense that everything runs on it. The server is busy from 2 PM to 10 PM IST every day.

    We are reindexing all the tables...

    We are facing DEADLOCK issues here. Can you please suggest how to overcome them?

  • How did you confirm that you have deadlock issues? Did you turn on trace flag 1222? Are your front-end users experiencing timeout errors?
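
    To capture deadlock details in the SQL Server error log, the trace flag can be enabled instance-wide like this:

    ```sql
    -- Write an XML-style deadlock report to the error log (SQL Server 2005+)
    DBCC TRACEON (1222, -1);

    -- Older-style node/resource deadlock report, if preferred
    -- DBCC TRACEON (1204, -1);

    -- Confirm which trace flags are currently active
    DBCC TRACESTATUS (-1);
    ```

    Note that trace flags enabled this way are lost on restart; adding them as startup parameters (-T1222) makes them permanent.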

  • We have an audit table which stores details of the application errors. As you said, the users are facing timeout errors, as well as save-profile errors, update errors, and so on.

    In that audit table I can see nearly 40 - 70 deadlocks and 50+ timeout errors daily, and the total error count goes above 500 a day when I check at EOD.

    Please advise...

Viewing 15 posts - 1 through 15 (of 21 total)
