Switching Recovery Models when reindexing

    I am wondering if anyone has any real-world experience of switching recovery models when reindexing a database, specifically when running the optimisation job of a maintenance plan.

    The database I am now looking after is currently in Simple mode and I want to change it to Full mode for point-in-time recovery, obviously setting up a job to back up the transaction logs on a periodic basis, maybe every half hour.

    I have done this before, but disk space is a bit of an issue here, so this time I want to try to reduce the amount the log grows by. I was thinking of changing the database to Bulk-Logged mode before the optimisations start and then changing back to Full after they have completed. I believe this would also reduce the size of the transaction log backup after reindexing - is this true?
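    In other words, something like this wrapped around the optimisations (MyDatabase is just a placeholder name):

    -- the switch I am considering - MyDatabase is a placeholder
    ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED
    -- ... the maintenance plan optimisations would run here ...
    ALTER DATABASE MyDatabase SET RECOVERY FULL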

    I am using SQL Server 2000 and do not have any backup compression tools to help me out as yet - they are on my wish list!

    Any advice on this would be greatly appreciated... thanks.

  • Eddy...

    Assuming the database is in full recovery and a T-log backup job is running every 30 minutes, the steps below can be followed while re-indexing:

    1. Take a full backup.

    2. Disable T-log backup job.

    3. Change the database to simple mode.

    4. Run the re-indexing jobs.

    5. Change the database to full recovery.

    6. Take a full backup again.

    7. Enable T-log backup job.

    Hope this is helpful.

    Shree

  • Shree_h (4/30/2010)


    Eddy...

    Assuming the database is in full recovery and a T-log backup job is running every 30 minutes, the steps below can be followed while re-indexing:

    1. Take a full backup.

    2. Disable T-log backup job.

    3. Change the database to simple mode.

    4. Run the re-indexing jobs.

    5. Change the database to full recovery.

    6. Take a full backup again.

    7. Enable T-log backup job.

    Hope this is helpful.

    Shree

    Please do not follow this advice - this will break the log chain and prevent you from restoring to a point in time past the change to simple recovery model. If you ever have a situation where the current backup is corrupted, you want to be able to restore from the previous backup and apply all of the transaction logs up to the point in time of the issue - this will prevent that from happening.

    Eddy, I have set up what you are talking about on several systems. There are a few issues you need to understand about switching to bulk-logged and back to full before you implement it.

    1) You will not be able to restore to a point in time when the database was in the bulk-logged recovery model. In other words, you cannot use the STOPAT feature to stop at any time after the database was switched to bulk-logged or before the switch back to full.

    2) The transaction log backups will not be affected - they will still back up all of the changes, even though the reindexing will be minimally logged.

    3) Only REBUILD operations are minimally logged - REORGANIZE is always a fully logged operation.

    This will help keep the transaction log smaller - but that is all it will do. It will not reduce the size of your transaction log backups. If you have the disk space available, I would just let the log file grow as large as is needed to support the rebuilds. I would also implement a smart indexing scheme that only reorganizes/rebuilds indexes that are fragmented. Google/Bing SQL Fool and check out her procedure for this.

    And finally, you could also increase the frequency of your log backups during the reindexing job. That would allow the VLFs to cycle through and be reused during the process.
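    For example (database name and path are placeholders), a job step you could schedule every few minutes during the rebuild window, plus a quick check on log use:

    -- run frequently during the reindexing so the inactive log can be reused
    BACKUP LOG MyDatabase TO DISK = 'E:\Backup\MyDatabase_log.trn'
    -- report how full each database's log currently is
    DBCC SQLPERF (LOGSPACE)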

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

    Not completely true regarding point-in-time recovery when using the BULK_LOGGED recovery model. You can't use a t-log backup for point-in-time recovery if it contains a minimally logged operation and the time you want to stop at is after that operation. What I would do is run a t-log backup immediately after the minimally logged operation.

    If no minimally logged operation is captured in a t-log backup, it is possible to use it for point-in-time recovery.
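    To illustrate (file names and the timestamp are placeholders) - a STOPAT restore works through any log backup that contains no minimally logged work, even under BULK_LOGGED:

    RESTORE DATABASE MyDatabase FROM DISK = 'E:\Backup\MyDatabase_full.bak' WITH NORECOVERY
    -- this log backup contains no minimally logged operations, so STOPAT is allowed
    RESTORE LOG MyDatabase FROM DISK = 'E:\Backup\MyDatabase_log1.trn'
        WITH STOPAT = '2010-04-30 14:30', RECOVERY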

  • Lynn Pettis (4/30/2010)


    Not completely true regarding point-in-time recovery when using the BULK_LOGGED recovery model. You can't use a t-log backup for point-in-time recovery if it contains a minimally logged operation and the time you want to stop at is after that operation. What I would do is run a t-log backup immediately after the minimally logged operation.

    If no minimally logged operation is captured in a t-log backup, it is possible to use it for point-in-time recovery.

    Lynn - thanks for the additional information. I would not want to bet the farm on being able to perform point in time recovery during that time window. Yes, it is possible that there were no minimally logged operations - but you can't be sure until you try to restore.

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Jeffrey Williams-493691 (4/30/2010)


    Lynn Pettis (4/30/2010)


    Not completely true regarding point-in-time recovery when using the BULK_LOGGED recovery model. You can't use a t-log backup for point-in-time recovery if it contains a minimally logged operation and the time you want to stop at is after that operation. What I would do is run a t-log backup immediately after the minimally logged operation.

    If no minimally logged operation is captured in a t-log backup, it is possible to use it for point-in-time recovery.

    Lynn - thanks for the additional information. I would not want to bet the farm on being able to perform point in time recovery during that time window. Yes, it is possible that there were no minimally logged operations - but you can't be sure until you try to restore.

    That I will agree with, seeing as one of my databases here WAS using BULK_LOGGED recovery model. I changed that to FULL recovery model.

    A good plan would be to switch from FULL to BULK_LOGGED when running a minimally logged operation, switch back to FULL when done, and run a t-log backup immediately. You may even want to run a t-log backup just before the minimally logged operation as well, depending on how long it may run. Something else I changed, on the same database where I changed the recovery model, was to stop rebuilding every index on every table every night; that process ran over an hour. It was a good candidate for a t-log backup before and after - you would not be able to do a point-in-time recovery during that particular time frame, but only then.

    Thanks for all the replies everyone, I am going to test switching to the Bulk-Logged model and running the reindexing this week.

    The steps I am thinking of taking now are (a rough T-SQL sketch follows the list):

    1) Ensure full backup complete

    2) Run T Log backup

    3) Switch db to Bulk-Logged model

    4) Reindex database

    5) Run T Log backup

    6) Switch db to Full model

    Longer term I will have a job that only reindexes tables that are fragmented, but at the moment that is further down the list 🙁
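    As a sketch of those steps (database name, backup path and table name are placeholders; step 4 would really be the full reindex run):

    -- 2) t-log backup on the unbroken chain
    BACKUP LOG MyDatabase TO DISK = 'E:\Backup\MyDatabase_log.trn'
    -- 3) switch so the rebuilds are minimally logged
    ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED
    -- 4) reindex (SQL 2000 syntax; '' rebuilds all indexes, 0 keeps the existing fill factor)
    DBCC DBREINDEX ('dbo.SomeLargeTable', '', 0)
    -- 5) capture the minimally logged work straight away
    BACKUP LOG MyDatabase TO DISK = 'E:\Backup\MyDatabase_log.trn'
    -- 6) back to full recovery
    ALTER DATABASE MyDatabase SET RECOVERY FULL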

    We have a lot of servers with this issue and we did try the bulk-logged model. It saves you the pain of doing a full backup again, but it does not guarantee your log will not grow. It worked for a couple of databases for us and did not on some larger ones - the log grew just like it did under full, and we went back to the simple model for reindexing again. We do a full backup just after the recovery model is set back to full and resume t-log backups later. Paul Randal did a post on why bulk-logged may not always control log growth; I think he said it is because the extents affected by the operation still have to be logged.

    We have Michelle Ufford's selective reindexing on some of our servers and we really like it. The issue with it (again, this is specific to our environment and may not apply to all) is that we have a very strict window to stick to; sometimes the process finds big tables it needs to work on and runs outside the window, which we cannot afford. So we have this on some servers and on others we do it the traditional way.

  • Eddy, it's well worth a little time to set up selective reorganization for indexes. I use that for several large SAP databases. To address the issue of a too-large run, I cap the number of pages that will be reorganized. If an index is over the cap, I can set up a special run for just that index, but skip it on the normal runs.
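    A rough sketch of the cap, assuming the candidate list comes out of DBCC SHOWCONTIG WITH TABLERESULTS (the @maxfrag and @maxpages variables are placeholders):

    -- pick only fragmented indexes under the page cap;
    -- anything over the cap is skipped and handled in a special run
    SELECT ObjectName, IndexName, CountPages, LogicalFrag
    FROM #fraglist
    WHERE LogicalFrag >= @maxfrag
      AND CountPages <= @maxpages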

    Beth Richards
    Sybase, Oracle and MSSQL DBA

  • Thanks to both dma-669038 and bethrich for your comments as well.

    I really want to stay away from switching back to the Simple model, as I don't want to break the tran log chain and the full backup takes so long, so I'll see how my tests go this week with switching to the Bulk-Logged model and how big the tran log backups are.

    I would also like to go ahead with testing the selective reindexing as soon as I can, but Michelle Ufford's scripts are for SQL 2005 onwards and the instance I'm concerned about at the moment is SQL 2000. I have seen some scripts for 2000 in the past, but do you have any recommendations as a starter that I can build on for my environment?

    Thanks again in advance.

  • Eddy! (5/5/2010)


    Thanks to both dma-669038 and bethrich for your comments as well.

    I really want to stay away from switching back to the Simple model, as I don't want to break the tran log chain and the full backup takes so long, so I'll see how my tests go this week with switching to the Bulk-Logged model and how big the tran log backups are.

    I would also like to go ahead with testing the selective reindexing as soon as I can, but Michelle Ufford's scripts are for SQL 2005 onwards and the instance I'm concerned about at the moment is SQL 2000. I have seen some scripts for 2000 in the past, but do you have any recommendations as a starter that I can build on for my environment?

    Thanks again in advance.

    Look in Books Online at the DBCC DBREINDEX topic - I think there is an example there that is designed to work with 2000.
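    For reference, the SQL 2000 syntax looks like this (table name, index name and fill factor are placeholders):

    -- rebuild one index; pass '' as the index name to rebuild them all
    DBCC DBREINDEX ('dbo.MyTable', 'IX_MyTable_Col1', 90)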

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Eddy, I'm not sure where I got this reorganization code. This one doesn't cap pages, but works for SQL 2000. I think it came from the SAP world. The actual defrag is commented out in this version.

    USE QAS
    go

    -- Declare variables
    SET NOCOUNT ON
    DECLARE @tablename VARCHAR (128)
    DECLARE @execstr VARCHAR (255)
    DECLARE @objectid INT
    DECLARE @indexid INT
    DECLARE @frag DECIMAL
    DECLARE @maxfrag DECIMAL
    DECLARE @SAPdbName VARCHAR(6)
    DECLARE @SAPuserName VARCHAR(6)

    SELECT @SAPdbName = 'QAS'
    SELECT @SAPuserName = 'qas'

    -- Decide on the maximum fragmentation to allow
    SELECT @maxfrag = 30.0

    -- Declare a cursor over the schema-qualified table names
    DECLARE tables CURSOR FOR
    SELECT @SAPuserName + '.' + TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'
    AND TABLE_NAME NOT LIKE '/%'
    AND TABLE_NAME <> 'dtproperties'

    -- Create the table that will hold the DBCC SHOWCONTIG output
    CREATE TABLE #fraglist (
    ObjectName CHAR (255),
    ObjectId INT,
    IndexName CHAR (255),
    IndexId INT,
    Lvl INT,
    CountPages INT,
    CountRows INT,
    MinRecSize INT,
    MaxRecSize INT,
    AvgRecSize INT,
    ForRecCount INT,
    Extents INT,
    ExtentSwitches INT,
    AvgFreeBytes INT,
    AvgPageDensity INT,
    ScanDensity DECIMAL,
    BestCount INT,
    ActualCount INT,
    LogicalFrag DECIMAL,
    ExtentFrag DECIMAL)

    -- Open the cursor
    OPEN tables

    -- Loop through all the tables in the database
    FETCH NEXT FROM tables INTO @tablename

    WHILE @@FETCH_STATUS = 0
    BEGIN
    -- Do the showcontig of all indexes of the table
    INSERT INTO #fraglist
    EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
    WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
    FETCH NEXT FROM tables INTO @tablename
    END

    -- Close and deallocate the cursor
    CLOSE tables
    DEALLOCATE tables

    -- Summary counts, then the full detail
    SELECT 'Total Tables', COUNT(*)
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'
    AND TABLE_NAME <> 'dtproperties'

    SELECT 'Fragmented Indexes', COUNT(*)
    FROM #fraglist
    WHERE LogicalFrag >= @maxfrag
    AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0

    SELECT * FROM #fraglist

    /*
    -- Declare a cursor for the list of indexes to be defragged
    DECLARE indexes CURSOR FOR
    SELECT ObjectName, ObjectId, IndexId, LogicalFrag
    FROM #fraglist
    WHERE LogicalFrag >= @maxfrag
    AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0

    -- Open the cursor
    OPEN indexes

    -- Loop through the indexes
    FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag

    WHILE @@FETCH_STATUS = 0
    BEGIN
    PRINT 'Executing DBCC INDEXDEFRAG (0, ' + RTRIM(@tablename) + ', '
    + RTRIM(@indexid) + ') - fragmentation currently '
    + RTRIM(CONVERT(varchar(15), @frag)) + '%'
    SELECT @execstr = 'DBCC INDEXDEFRAG (0, ' + RTRIM(@objectid) + ', '
    + RTRIM(@indexid) + ')'
    EXEC (@execstr)
    FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag
    END

    -- Close and deallocate the cursor
    CLOSE indexes
    DEALLOCATE indexes
    */

    -- Delete the temporary table
    DROP TABLE #fraglist

    Beth Richards
    Sybase, Oracle and MSSQL DBA

  • Beth, thanks for taking the time out to post this script.

    I shall look at adapting it for my environment.
