DBCC CHECKDB - how to use it in a maintenance plan?

  • 2. Rebuild Index [Except System Databases]

    3. Shrink Database [Except System Databases]

    That's a combination I generally refer to as a waste of time. The shrink will cause massive fragmentation of your indexes, easily up to 90%. You're spending time, CPU and IOs to put the indexes in order (possibly growing the database), and then you're spending more time, CPU and IOs shrinking the file and shuffling the indexes. Next time any data is added, SQL will spend even more time, CPU and IOs growing the file so it has space to work.

    Also, repeated shrinks and grows can easily cause fragmentation at a file system level, requiring a disk defragment to fix.

    See - http://sqlinthewild.co.za/index.php/2007/09/08/shrinking-databases/
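
    If the goal is healthy indexes, a better approach is to check fragmentation first and only rebuild what actually needs it, rather than blindly rebuilding everything and then shrinking. A minimal sketch using the 2005+ DMV; the database name YourDB is a placeholder:

    ```sql
    -- Check index fragmentation instead of blindly rebuilding and shrinking.
    -- 'YourDB' is a placeholder; substitute your own database name.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(N'YourDB'), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30   -- common rule of thumb for rebuilds
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```

    Run it in the database in question; the 30% threshold is only a widely used guideline, not a hard rule.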

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hi Jef and Gila,

    Now I understand why I shouldn't shrink the data file; I will take 'Shrink Database' off the Maintenance Plan. All the information you guys posted is very useful. Thanks.

    I have another question. On my production server, all the transaction log files are very big, about 45GB, but the initial size is 4GB. My question is, shall I shrink the transaction log ONLY, so that the log doesn't grow too much? Shall I do that every night?

    I never back up the transaction log even though the database recovery model is FULL. Transaction log backups are not required for my environment at the moment.

    If you suggest doing that, should it go 'Before' or 'After' my Maintenance Plan?

    *****

    BACKUP LOG t_log_name WITH TRUNCATE_ONLY

    DBCC SHRINKFILE (t_log_name,amount,retain )

    *****

    Sat

    <<=== Here ?

    1. Check DB Integrity [All Databases]

    2. Rebuild Index [Except System Databases]

    3. Backup Database [All Databases]

    4. Clean Up Maintenance Plan

    5. Clean Up History

    <<== Here?

    Thanks

  • Leo (10/18/2008)


    I never back up the transaction log even though the database recovery model is FULL. Transaction log backups are not required for my environment at the moment.

    You need transaction log backups in full recovery because if you don't, your transaction log will grow without bound, as you're seeing.

    If you don't need point-in-time recovery, then switch the database to simple and you won't have to worry about the transaction logs. If you do need point-in-time recovery, then set up regular log backups.

    See - http://sqlinthewild.co.za/index.php/2008/07/23/recovery-model-and-transaction-logs/
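
    The two options above can be sketched in T-SQL. The database name and backup path are placeholders, not anything from this thread:

    ```sql
    -- Option 1: no point-in-time recovery needed -> switch to SIMPLE recovery.
    -- The log then truncates itself at checkpoints and stops growing without bound.
    ALTER DATABASE YourDB SET RECOVERY SIMPLE;

    -- Option 2: point-in-time recovery needed -> stay in FULL recovery and
    -- schedule frequent log backups (e.g. every 15-30 minutes via an Agent job).
    BACKUP LOG YourDB TO DISK = N'D:\Backups\YourDB_log.trn';
    ```

    Either option keeps the log under control; what they trade off is whether you can restore to a point in time between full backups.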

    If you suggest doing that, should it go 'Before' or 'After' my Maintenance Plan?

    *****

    BACKUP LOG t_log_name WITH TRUNCATE_ONLY

    DBCC SHRINKFILE (t_log_name,amount,retain )

    *****

    Neither. Switch to simple recovery if you have no need of log backups. Then you won't have to worry about doing that at all.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hi Gila,

    Sorry, I am getting back to you a bit late. I don't fully understand the transaction log yet. Let's say I switch my recovery model to 'SIMPLE'. I don't need to worry about the transaction log anymore, as you said. But what do I do if the transaction log keeps getting bigger? For example -

    The transaction log's initial size is 4GB [that is okay and I am happy with this size]. During the day, or when I check the database at night, the transaction log is about 40GB. What do I do?

    I know you will hate me, but I really want to understand this, which means I want to learn from your experience. Last question...

    Inserting or reading records from the database is sometimes very slow. Does that have anything to do with the transaction log? How do I solve it?

    Thanks again.

  • Leo (10/19/2008)


    Hi Gila,

    Sorry, I am getting back to you a bit late. I don't fully understand the transaction log yet. Let's say I switch my recovery model to 'SIMPLE'. I don't need to worry about the transaction log anymore, as you said. But what do I do if the transaction log keeps getting bigger? For example -

    The transaction log's initial size is 4GB [that is okay and I am happy with this size]. During the day, or when I check the database at night, the transaction log is about 40GB. What do I do?

    I know you will hate me, but I really want to understand this, which means I want to learn from your experience. Last question...

    Inserting or reading records from the database is sometimes very slow. Does that have anything to do with the transaction log? How do I solve it?

    Thanks again.

    On your first question, if your database is in SIMPLE recovery model you don't have to worry about BACKING UP the transaction log. The transaction log is still required and will grow to the size needed to manage the largest transaction that will process on your system.

    If your transaction log is growing to 40GB (in SIMPLE model), then your transaction log needs to be 40GB to handle the workload. If that is not acceptable, then you need to identify the process(es) that are using that much space in the transaction log and fix those processes.

    One example would be an ETL process that is loading millions of rows of data in a single transaction. A process like this could easily cause the transaction log to grow that large.
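
    One way to see which databases are consuming log space, and a common pattern for breaking one huge transaction into small batches. The table name, filter and batch size below are hypothetical placeholders, not from this thread:

    ```sql
    -- See how full each database's transaction log currently is
    DBCC SQLPERF(LOGSPACE);

    -- Batch a large modification so each transaction (and its log usage) stays small.
    -- dbo.StagingTable and the Processed flag are illustrative only.
    WHILE 1 = 1
    BEGIN
        DELETE TOP (10000)
        FROM dbo.StagingTable
        WHERE Processed = 1;

        IF @@ROWCOUNT = 0 BREAK;
    END
    ```

    In SIMPLE recovery, each small transaction's log records become reusable after a checkpoint, so the log file no longer has to be sized for the whole operation at once.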

    Second question - I would not suspect the transaction log has anything to do with your performance issues. The only way the transaction log would really affect your performance is if the file is on the same disks as the data files and you are seeing I/O contention.

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Leo (10/19/2008)


    But what do I do if the transaction log keeps getting bigger? For example -

    The transaction log's initial size is 4GB [that is okay and I am happy with this size]. During the day, or when I check the database at night, the transaction log is about 40GB. What do I do?

    In simple recovery it shouldn't grow that much. It's growing in full recovery because it has to retain all of the log records until the log is backed up or truncated. In simple, it just has to retain the log records until an automatic checkpoint occurs.

    If the log grows (and it may if there are long running transactions) then, as Jeffrey said, it needs to be that size.

    People tend to panic when the log gets large, but it's not necessary to worry about a large log file if the size is constant. The last place I worked had a 220GB log file for the main DB. Most of the time it was nearly empty, but we could use up to 75% easily during the overnight processing.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks Gila and Jeff.....

    Leo

  • Your re-index script will also cause your transaction logs to grow.

  • Leo (10/16/2008)


    Hi Gila,

    Regarding 'dbcc checkdb (dbname) with no_infomsgs', you said to run

    that command before the backup runs. But my question is, what do we do if I have already set up a job like:

    21:00 - dbcc checkdb (dbname) with no_infomsgs

    22:00 - DBCC BackupDB(DBname)

    23:00 - DBCC RebuildIndex

    1. How do we know the database is corrupt before the backup at 22:00? The job is already set up and will back up the corrupt database anyway. Is there any way to stop that, i.e. NOT to back up?

    2. Which command can fix and repair the database if the 'dbcc checkdb' result is a failure?

    3. Do you have any sample code to check the database, run a repair if needed, then back up and rebuild the DB?

    Thanks.

    You don't, and as far as I am aware the backup will happen regardless of whether any error/corruption was found in the database. I'm not sure there is an easy way to catch this (others may have ideas on this). In a wider context, I don't see any real need to do so. If set up properly, I would recommend having the maintenance task results (success or fail) confirmed by email to the administrator. Failing that, screening through the error logs may be sufficient, but that can be a real pain if you have dozens of databases to administer.

    As far as the fix and repair command goes, here is the SQL Server 2005 Books Online recommendation:

    "Use the REPAIR options only as a last resort. To repair errors, we recommend restoring from a backup. Repair operations do not consider any of the constraints that may exist on or between tables. If the specified table is involved in one or more constraints, we recommend running DBCC CHECKCONSTRAINTS after a repair operation. If you must use REPAIR, run DBCC CHECKDB to find the repair level to use. If you are going to use the REPAIR_ALLOW_DATA_LOSS level, we recommend that you back up the database before you run DBCC CHECKDB."

    Note: The command here refers to DBCC CHECKTABLE with the REPAIR_ALLOW_DATA_LOSS level.
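
    For reference, the last-resort repair pattern for a whole database usually looks something like this. YourDB is a placeholder, and - as the Books Online quote above stresses - take a backup first and expect possible data loss:

    ```sql
    -- LAST RESORT ONLY - REPAIR_ALLOW_DATA_LOSS can discard data.
    -- Back up the database before running this.
    ALTER DATABASE YourDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DBCC CHECKDB (N'YourDB', REPAIR_ALLOW_DATA_LOSS);
    ALTER DATABASE YourDB SET MULTI_USER;
    ```

    Single-user mode is required for the repair options; restoring from a clean backup remains the recommended fix.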

  • herbiechin (11/28/2008)


    You don't, and as far as I am aware the backup will happen regardless of whether any error/corruption was found in the database. I'm not sure there is an easy way to catch this (others may have ideas on this).

    In 2005 and higher, if the backup is done with the checksum option then SQL will recalculate the page checksums (providing page checksum is turned on, which it should be) as it takes the backup. If any page fails a checksum check, the backup is aborted.
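
    A sketch of what that looks like; the database name and backup path are placeholders:

    ```sql
    -- Ensure new page writes carry checksums (the 2005+ default for new databases)
    ALTER DATABASE YourDB SET PAGE_VERIFY CHECKSUM;

    -- Verify page checksums while taking the backup;
    -- the backup is aborted if any page fails its check
    BACKUP DATABASE YourDB
    TO DISK = N'D:\Backups\YourDB.bak'
    WITH CHECKSUM;
    ```

    This gives the backup step itself a basic corruption check, though it does not replace a regular DBCC CHECKDB, which inspects far more than page checksums.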

    "Use the REPAIR options only as a last resort."

    Absolutely!

    I'd go as far as saying that if you're not 100% sure what to do when you get corruption, ask someone who does know. Maybe ask here, maybe ask on another forum, maybe mail a more experienced DBA, maybe call CSS.

    Better yet, research now when you don't have corruption and practice various recovery options so that if you do get it, you know exactly what to do.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I have a question about corruption. How often does it really happen? The previous DBA said it never happened to us in the 7 years she was here, and we have a big database. Not as big as a bank's, but it's big; we store information on raw metals from many warehouses, all the orders for them, and much more.

  • Oksana March (11/29/2008)


    I have a question about corruption. How often does it really happen? The previous DBA said it never happened to us in the 7 years she was here, and we have a big database. Not as big as a bank's, but it's big; we store information on raw metals from many warehouses, all the orders for them, and much more.

    It depends on your hardware. Corruption's typically a hardware problem, so if you have a rock-solid IO subsystem, you shouldn't see much, if any corruption. If, however, the disks are near failure, the SAN controller's buggy or there's a misbehaving filter driver (like anti-virus) you may see a great deal of corruption.

    For what it's worth, I saw corruption twice in the four years I worked at the bank. Once was a corrupted log record which required a full restore of the DB. The other time wasn't my system (fortunately). For reasons I won't go into, there was no offsite backup. The SAN controller glitched and spewed garbage across two LUNs, corrupting the only backup and the file header of one of the files in the primary filegroup.

    The database was a complete and total loss. They never got it back.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hi Gail,

    "Stop SQL" - do you mean stop SQL Server Agent in Control Panel--Services? And I just want to confirm: if I create a database to test this *corrupting on purpose*, this will not touch any other database on the same server? I will try it 🙂

    Then, when I accomplish corrupting it, what are the best exercises to try? Thanks!

    GilaMonster (10/16/2008)


    Do you know how I can go about deliberately corrupting a database. I'd like to test some different things, but I'm not sure how to go about it.

    Sure. First though, a disclaimer for anyone else who wants to try this. Do not do this to a DB you care about. Do not do this to a database that you're not willing to discard. Do not ever consider doing it to a production database.

    Stop SQL. Open the mdf file in a hex editor. Go at least 800kb into the database (to avoid corrupting the system tables) and then randomly change values. Save the file and exit, then restart SQL.

    Or, if you're feeling adventurous, you can use SQL Server as a hex editor and get it to corrupt one of its own databases.

    http://sqlblogcasts.com/blogs/tonyrogerson/archive/2007/03/10/how-to-create-a-corrupt-database-using-bulk-insert-update-and-bcp-sql-server-as-a-hex-editor.aspx

  • Thank you for answering this one, by the way!

    GilaMonster (11/29/2008)


    Oksana March (11/29/2008)


    I have a question about corruption. How often does it really happen? The previous DBA said it never happened to us in the 7 years she was here, and we have a big database. Not as big as a bank's, but it's big; we store information on raw metals from many warehouses, all the orders for them, and much more.

    It depends on your hardware. Corruption's typically a hardware problem, so if you have a rock-solid IO subsystem, you shouldn't see much, if any corruption. If, however, the disks are near failure, the SAN controller's buggy or there's a misbehaving filter driver (like anti-virus) you may see a great deal of corruption.

    For what it's worth, I saw corruption twice in the four years I worked at the bank. Once was a corrupted log record which required a full restore of the DB. The other time wasn't my system (fortunately). For reasons I won't go into, there was no offsite backup. The SAN controller glitched and spewed garbage across two LUNs, corrupting the only backup and the file header of one of the files in the primary filegroup.

    The database was a complete and total loss. They never got it back.

  • Oksana March (11/29/2008)


    Hi Gail,

    "Stop SQL" - do you mean stop SQL Server Agent in Control Panel--Services?

    No. I mean stop SQL Server. The SQL service must be stopped before you can access the data files.

    And I just want to confirm: if I create a database to test this *corrupting on purpose*, this will not touch any other database on the same server? I will try it 🙂

    It won't. However... Do not do this on a production server. Do not do this anywhere except on a local server on your local machine.

    This is not something you should be playing with on a SQL instance that's of any importance to anyone but you

    Then, when I accomplish corrupting it, what are the best exercises to try? Thanks!

    See if you can recover it...

    See these posts for some things to do and try and techniques to recover:

    http://www.sqlskills.com/BLOGS/PAUL/category/Corruption.aspx

    Again, just for emphasis (and for anyone else reading): do not do this on any server of any importance. Preferably install SQL on your local machine (Express, or Developer Edition if you have it) and play there.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
