Persistent index fragmentation

  • I’m writing a script that will be run periodically for general index maintenance. While testing my script, I’ve run across a few problematic indexes. Regardless of what I do, the fragmentation stays the same. I’ve tried rebuilding the index (via a script and SQL Studio), reorganizing, and using DROP/CREATE. I’m a web-based programmer, not a DBA, and was floored when I created the index under a new name and the fragmentation level persisted. I’m hoping someone here has had a similar problem and found some way to address the issue. Thank you for your assistance.
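
    In case it helps, the rebuild and reorganize attempts look roughly like this (a sketch; the table and index names here are placeholders, not my real schema):

    -- Placeholder names; the real table and index names differ.
    -- REBUILD drops and recreates the index structure in one pass.
    ALTER INDEX ix_Example ON dbo.tblExample REBUILD WITH (FILLFACTOR = 90);
    GO
    -- REORGANIZE compacts and reorders the existing leaf pages in place.
    ALTER INDEX ix_Example ON dbo.tblExample REORGANIZE;
    GO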

  • How big is the index in question? (number of pages)
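
    If you're not sure, something along these lines will show it (substitute your own database and table names):

    -- Placeholder names; LIMITED mode is enough to get page_count.
    SELECT index_id, index_level, page_count, avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
             DB_ID(N'YourDatabase'), OBJECT_ID(N'YourTable'),
             NULL, NULL, 'LIMITED');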

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Do these tables have a clustered index, or are they heaps?

    Fragmentation in a clustered table is easy to resolve: just reorganize or rebuild your clustered index. In a heap table it is not so easy. For a heap table you can:

    Create a clustered index, or

    Create a new table and insert the rows of the heap table into it using some sort order, or

    Save the data in another table or in a text file (export the data), truncate the heap, and re-import the data.
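
    For the first option, something like this would do it (placeholder table and column names):

    -- Placeholder names; creating the clustered index rewrites the heap's
    -- rows in key order, which removes the heap fragmentation.
    CREATE CLUSTERED INDEX cix_tblExample ON dbo.tblExample (ID);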

    $hell your Experience !!!

  • Page count: 4

    This is just on my development database. I assumed that if this problem occurred on my dev db, it would also occur in production. As for being a heap table, I'm not sure what you mean. The more I learn about SQL, the more I figure out how little I know about SQL. My involvement has generally been simply what it takes to get the job done for web development; not for lack of effort, but a time issue. In an effort to answer your question, here is some of my code.

    *The tblIndex_ table prefix has nothing to do with SQL indexes; it's just a part of the database schema.

    -- Recreate the index under a temporary name, drop the old one, and rename.
    CREATE NONCLUSTERED INDEX [temp] ON tblIndex_Address (strCity) WITH (FILLFACTOR = 90);
    DROP INDEX ix3_tblIndex_Address ON tblIndex_Address;
    EXEC sp_rename 'tblIndex_Address.temp', 'ix3_tblIndex_Address', 'INDEX';
    GO

    -- Rebuild, then defragment for good measure.
    ALTER INDEX ix3_tblIndex_Address ON tblIndex_Address REBUILD;
    GO
    DBCC INDEXDEFRAG (dbNT_SPMS_devj, tblIndex_Address, ix3_tblIndex_Address);
    GO

    -- Check the resulting fragmentation.
    DECLARE @indexId int;
    SET @indexId = (SELECT index_id FROM sys.indexes WHERE [name] = 'ix3_tblIndex_Address');

    SELECT * FROM sys.dm_db_index_physical_stats (DB_ID(N'dbNT_SPMS_devj'), OBJECT_ID(N'tblIndex_Address'), @indexId, NULL, 'DETAILED');

    After running the above code, the avg_fragmentation_in_percent column remains unaffected in the row where index_level = 0. The above select statement returns two rows; the second row has an index_level value of 1. As this is where my skill set really starts to fade, I believe 0 is the leaf level? In any case, the '1' row has an avg_fragmentation_in_percent of 0. Still, the SQL Studio UI shows 75% fragmentation, so I assume the '0' row is the one to use.
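
    For reference, this is the number I'm looking at, filtered to just the leaf level (a sketch; same database, table and index names as in my script above):

    -- index_level = 0 is the leaf level of the index; that is the row
    -- the SSMS fragmentation display corresponds to.
    DECLARE @indexId int;
    SET @indexId = (SELECT index_id FROM sys.indexes
                    WHERE object_id = OBJECT_ID(N'tblIndex_Address')
                      AND [name] = 'ix3_tblIndex_Address');

    SELECT index_level, page_count, avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
             DB_ID(N'dbNT_SPMS_devj'), OBJECT_ID(N'tblIndex_Address'),
             @indexId, NULL, 'DETAILED')
    WHERE index_level = 0;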

  • sqlsc (4/7/2009)


    Page count: 4

    That's a very small table, so the fragmentation will remain; you cannot defrag it, and trying is a waste of resources.

  • So the reason it can't be defragmented is that it's so small? In other words, this problem is basically only an issue because it's a development database? The production tables have many, many more records and the index page counts are much higher.

  • Because of the way page allocations are done for very small indexes, they will almost always be fragmented. It's nothing to worry about; fragmentation is only a concern when you're doing scans of larger tables or indexes. By larger, I mean more than around 1000 pages.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • sqlsc (4/7/2009)


    So the reason it can't be defragmented is that it's so small? In other words, this problem is basically only an issue because it's a development database? The production tables have many, many more records and the index page counts are much higher.

    Yes, true, there's not much we can do if the tables are that small. As a rule of thumb, only when an index has more than 1000 pages does fragmentation affect the performance of your server.

    In your production databases your scripts should count the number of pages. As per MS you could go as far as 5000 pages before you experience a performance hit (I haven't tested that, though). If the number of pages is comparatively small, don't do anything to those tables; if the number of pages is large, then start defragging your indexes.
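
    Something along these lines would give you the candidate list (a sketch only; the 1000-page figure is the rule of thumb above, so adjust it to your environment):

    -- Survey the current database for indexes big enough for fragmentation
    -- to matter. LIMITED mode keeps the scan itself cheap.
    SELECT OBJECT_NAME(ips.object_id)       AS table_name,
           i.[name]                         AS index_name,
           ips.page_count,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.page_count > 1000             -- skip small indexes entirely
    ORDER BY ips.avg_fragmentation_in_percent DESC;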

  • A heap table is a table without a clustered index.

    This statement lists all the heap tables in a database:

    USE YourDB
    GO

    -- A heap has index_id = 0 in sys.indexes.
    SELECT DISTINCT
        TableName = OBJECT_NAME(object_id)
    FROM sys.indexes
    WHERE index_id = 0
      AND OBJECTPROPERTY(object_id, 'IsUserTable') = 1
    ORDER BY TableName;

    $hell your Experience !!!

  • Thank you for the information. I will add a page_count check to my script so it won't waste resources.
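
    Something like this is what I have in mind (a sketch only; the 1000-page cut-off is the rule of thumb from this thread):

    -- Only rebuild when the index is big enough for fragmentation to matter.
    DECLARE @pages bigint;

    SELECT @pages = ips.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'tblIndex_Address'), NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE i.[name] = 'ix3_tblIndex_Address';

    IF @pages > 1000
        ALTER INDEX ix3_tblIndex_Address ON tblIndex_Address REBUILD;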
