Database indexes

  • Hey DBAs,

    This weekend I performed a much-needed archive and purge on one of our largest databases.  Once it completed, I scheduled a job to do this weekly.

    My question for you is: would you recommend rebuilding the indexes on the tables every week after I do the archive and purge?  I only ask because after I ran the archive script the first time in production, things seemed to slow down until I rebuilt the indexes on some of the tables.  Is rebuilding the indexes on all the tables a good habit to get into?

    Another question: after I did the archive and purge, the database I was working on seemed larger than expected.  After I delete all the rows that I insert into my archive database, is there something else I need to do?  Does the database still keep records of the deleted rows?

    Thanks for all the help

    Shane

  • I'm not an expert in this, but I would expect that deleting a lot of data from your tables could cause excessive internal fragmentation, which is when free space is left inside your index pages (the indexes are not making the most efficient use of space).  The end result is that your indexes are bigger than they should be, and this can affect performance.

    I can't comment on when to defrag or rebuild your indexes, but you could run DBCC SHOWCONTIG on all your indexes, which will show you which ones are fragmented.
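
    If you just want a quick look at one table first, a single SHOWCONTIG call is enough.  A minimal sketch (the table name here is only a placeholder, not from this thread):

    ```sql
    -- Check fragmentation of a single table ('dbo.Orders' is hypothetical);
    -- watch 'Scan Density' and 'Logical Scan Fragmentation' in the output
    DBCC SHOWCONTIG ('dbo.Orders')
    ```

    High logical fragmentation and low scan density are the usual signs an index needs attention.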

    Unless you have the 'autoshrink' database option switched on, try shrinking your database using EM.  This will free up any unused space.  Note that the database will not be shrunk smaller than its original size.  You can also use DBCC SHRINKDATABASE.
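
    For the DBCC route, a minimal sketch (the database name is a placeholder of mine, and the 10 percent target is just an example):

    ```sql
    -- See how much reserved vs. unused space there is before shrinking
    EXEC sp_spaceused

    -- Shrink the database, leaving 10 percent free space for future growth
    -- ('MyBigDatabase' is a placeholder for your own database name)
    DBCC SHRINKDATABASE ('MyBigDatabase', 10)
    ```

    Leaving some free space is deliberate: shrinking to zero free space just forces the files to grow again on the next insert.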


    When in doubt - test, test, test!

    Wayne

  • Thanks for the quick response.  My next question is how to rebuild the indexes.  Is there a system stored proc?  I apologize for my ignorance.  I'm new at this.

    Thanks for all the help.

    Shane

  • You can use the maintenance planner in EM to rebuild your indexes on a weekly basis, or whatever is required.

    Remember that rebuilding the indexes will lock them, preventing access to the tables, so you may want to only defrag them if other users require access to the tables while you are doing this.

    Here are a couple of scripts I use.

    /* Used to create DBCC SHOWCONTIG statements for all clustered indexes in the
       current database, which can be used to check the fragmentation of the
       indexes and decide whether they need to be defragmented or rebuilt.
       Saves you having to remember the names of all the indexes. */
    SELECT 'DBCC SHOWCONTIG (' +
           CONVERT(varchar(20), i.id) + ', ' +       -- table id
           CONVERT(varchar(20), i.indid) + ') -- ' + -- index id
           object_name(i.id) + '.' +                 -- table name
           i.name                                    -- index name
    FROM sysobjects o
    INNER JOIN sysindexes i ON o.id = i.id
    WHERE o.type = 'U'
      AND i.indid < 2
      AND i.id = object_id(o.name)
    ORDER BY object_name(i.id), i.indid

    -- Or report on every index in the database in one statement:
    DBCC SHOWCONTIG WITH FAST, TABLERESULTS, ALL_INDEXES

    Second script, which will defrag all your indexes:

    /* Perform a 'USE <database>' to select the database in which to run the script. */
    -- Declare variables
    SET NOCOUNT ON
    DECLARE @tablename VARCHAR (128)
    DECLARE @execstr VARCHAR (255)
    DECLARE @objectid INT
    DECLARE @indexid INT
    DECLARE @frag DECIMAL
    DECLARE @maxfrag DECIMAL

    -- Decide on the maximum fragmentation to allow
    SELECT @maxfrag = 5.0

    -- Declare a cursor over all user tables
    DECLARE tables CURSOR FOR
    SELECT TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'

    -- Create a temporary table to hold the DBCC SHOWCONTIG results
    CREATE TABLE #fraglist (
        ObjectName CHAR (255),
        ObjectId INT,
        IndexName CHAR (255),
        IndexId INT,
        Lvl INT,
        CountPages INT,
        CountRows INT,
        MinRecSize INT,
        MaxRecSize INT,
        AvgRecSize INT,
        ForRecCount INT,
        Extents INT,
        ExtentSwitches INT,
        AvgFreeBytes INT,
        AvgPageDensity INT,
        ScanDensity DECIMAL,
        BestCount INT,
        ActualCount INT,
        LogicalFrag DECIMAL,
        ExtentFrag DECIMAL)

    -- Open the cursor and loop through all the tables in the database
    OPEN tables
    FETCH NEXT FROM tables INTO @tablename
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Record the SHOWCONTIG results for all indexes of the table
        INSERT INTO #fraglist
        EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
               WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
        FETCH NEXT FROM tables INTO @tablename
    END

    -- Close and deallocate the cursor
    CLOSE tables
    DEALLOCATE tables

    -- Declare a cursor for the list of indexes to be defragged
    DECLARE indexes CURSOR FOR
    SELECT ObjectName, ObjectId, IndexId, LogicalFrag
    FROM #fraglist
    WHERE LogicalFrag >= @maxfrag
      AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0

    -- Open the cursor and loop through the indexes
    OPEN indexes
    FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag
    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT 'Executing DBCC INDEXDEFRAG (0, ' + RTRIM(@tablename) + ', '
            + RTRIM(CONVERT(varchar(15), @indexid)) + ') - fragmentation currently '
            + RTRIM(CONVERT(varchar(15), @frag)) + '%'
        SELECT @execstr = 'DBCC INDEXDEFRAG (0, ' + RTRIM(CONVERT(varchar(15), @objectid))
            + ', ' + RTRIM(CONVERT(varchar(15), @indexid)) + ')'
        EXEC (@execstr)
        FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag
    END

    -- Close and deallocate the cursor
    CLOSE indexes
    DEALLOCATE indexes

    -- Drop the temporary table
    DROP TABLE #fraglist
    GO

  • I have a utility stored procedure but I tend to use DBCC DBREINDEX rather than the defrag.

    The procedure also updates the statistics for all tables.

    If you exec usp_UpdateStatistics then it does all tables.

    If you exec usp_UpdateStatistics 'mytable' then it simply does that table.

    I am fortunate that I can run this overnight without too much impact on my systems.

    CREATE PROC usp_UpdateStatistics @TableName NVARCHAR(50) = '' AS
    SET NOCOUNT ON
    DECLARE @NextTable NVARCHAR(50)
    SELECT @NextTable = @TableName
    IF @TableName = ''
    BEGIN
        -- No table named: walk every user table in name order
        WHILE @NextTable IS NOT NULL
        BEGIN
            SELECT @NextTable = MIN(Name)
            FROM dbo.SysObjects
            WHERE Type = 'U'
              AND Name > @NextTable
            PRINT @NextTable
            IF @NextTable IS NOT NULL
            BEGIN
                EXEC ('UPDATE STATISTICS ' + @NextTable)
                EXEC ('DBCC DBREINDEX(''' + @NextTable + ''')')
            END
        END
    END
    ELSE
        -- A table was named: only process it if it really is a user table
        IF EXISTS (SELECT Name FROM SysObjects WHERE Name = @NextTable AND Type = 'U')
        BEGIN
            EXEC ('UPDATE STATISTICS ' + @NextTable)
            EXEC ('DBCC DBREINDEX(''' + @NextTable + ''')')
        END
    GO

  • Useful scripts above!!!

    DBCC INDEXDEFRAG does not lock the indexes, but it does take a lot of CPU and potentially a lot of I/O (depending on the size of your indexes).

    DBCC DBREINDEX locks the index.

    Which one is faster depends on the amount of fragmentation.  If fragmentation is high, a rebuild will be faster than a defrag, but you'll have to do the rebuild when your users aren't online.


    When in doubt - test, test, test!

    Wayne

  • Wow!!

    Thanks for all the help guys.  My next question is: Which way is better?  A rebuild or a defrag?  Do they both accomplish the same thing?  Should I use both?  Or do I need to decide which is best for our databases?

    Once again, thanks for all the help

    Shane

  • It depends...

    Index defrag and rebuild will have the same result.  Which one is faster depends on how fragmented the index is.

    Reindex rebuilds the entire index from scratch, which can be faster than defrag if the index is heavily fragmented.

    If the index is not very fragmented, then defrag should be faster.

    A suggestion: if you are running the maintenance job when users are not working, then why not just use reindex?  If it does not take too long, stick with it.

    If you have a problem when your users are working, then run defrag.
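
    The choice can even be automated from the SHOWCONTIG numbers.  A hedged sketch, reusing the variable names from the cursor loop in the defrag script posted earlier (the 30% cutoff is a common rule of thumb, not something from this thread):

    ```sql
    -- Sketch: pick rebuild vs. defrag per index based on measured fragmentation.
    -- @frag, @tablename, @objectid and @indexid are as in the earlier cursor loop.
    IF @frag > 30.0
        -- heavy fragmentation: rebuild from scratch (locks the table)
        EXEC ('DBCC DBREINDEX (''' + @tablename + ''')')
    ELSE
        -- light fragmentation: defrag in place (leaves the index online)
        EXEC ('DBCC INDEXDEFRAG (0, ' + RTRIM(CONVERT(varchar(15), @objectid))
            + ', ' + RTRIM(CONVERT(varchar(15), @indexid)) + ')')
    ```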



    When in doubt - test, test, test!

    Wayne

  • See the locking comment above.

    If you have some down-time in your schedule then do the rebuild.  If you don't then you are stuck with defrag.

    I would do the update-statistics bit regularly too.  Simply comment out the lines in the script that do the rebuild, or you could add a parameter to the procedure and make the rebuild dependent on that parameter.
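
    One way that parameter idea might look (the @Rebuild flag and this revised signature are my own invention, not the posted procedure):

    ```sql
    -- Sketch only: @Rebuild is an invented optional parameter (default on).
    -- The procedure header would become:
    --     CREATE PROC usp_UpdateStatistics
    --         @TableName NVARCHAR(50) = '', @Rebuild BIT = 1 AS ...
    -- and each rebuild call in the body becomes conditional:
    EXEC ('UPDATE STATISTICS ' + @NextTable)
    IF @Rebuild = 1
        EXEC ('DBCC DBREINDEX(''' + @NextTable + ''')')
    ```

    Calling it with @Rebuild = 0 would then update statistics only, which is cheap enough to schedule nightly.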


  • Excellent!

    Thanks to both of you for all your help, as well as the scripts.  It's good to have a place for DBAs to help each other.

    Shane
