Index fragmentation.

  • Hi All,

    I have a table "A" with regular activity that involves inserting millions of records every day. We have not defragmented or rebuilt the indexes in six months.

    And now the indexes (five indexes on this table) are almost 90% fragmented. The table holds a very large amount of data; I need to archive most of it and keep only the latest three months in the table.
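
    (For anyone wanting to reproduce the numbers, fragmentation like this can be read from sys.dm_db_index_physical_stats; nothing below goes beyond the table name from this post:)

    SELECT i.name AS index_name,
           ps.avg_fragmentation_in_percent,
           ps.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.A'), NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
        ON i.object_id = ps.object_id
       AND i.index_id = ps.index_id;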

    I need a suggestion... whether to rebuild the indexes and then archive, or vice versa.

    Which would perform better, or do you have any other best practices?

    Note: archiving is currently taking too long and blocking other queries...

    What comes to mind is that the re-index should occur AFTER the archive.

    What is your archive process?

    I was thinking copy and then delete, but due to the index fragmentation (in particular) you'll need to do the delete in smaller batches than the copy. This should limit blocking and locking and give you better control over your transaction log.
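
    A minimal sketch of the copy half (the archive table dbo.A_Archive and the column names are assumptions):

    -- Copy the rows to be archived; the delete then runs separately in small batches
    INSERT INTO dbo.A_Archive (Id, SomeDate, OtherCol)
    SELECT Id, SomeDate, OtherCol
    FROM dbo.A
    WHERE SomeDate < DATEADD(MONTH, -3, GETDATE());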

    I would probably use SSIS for the copy even if it is on the same server. You'll probably get better insert performance than with an INSERT...SELECT by using the Fast Load option.

    CEWII

    I am just trying to copy the data and delete it in 5-day chunks.

    But one of my friends suggested a 'Sliding Window Partition'.

    I don't know much about it or how it suits my situation... I've been Googling it but not getting a clear idea...

    any ideas guys??
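
    (For context: a 'sliding window' partition scheme splits the table by date so that the oldest partition can be switched out to an archive/staging table as a near-instant, metadata-only operation instead of deleting rows. A rough sketch, with all object names and boundary dates made up:)

    -- Monthly partitions on the date column
    CREATE PARTITION FUNCTION pfMonthly (datetime)
        AS RANGE RIGHT FOR VALUES ('20110101', '20110201', '20110301');

    CREATE PARTITION SCHEME psMonthly
        AS PARTITION pfMonthly ALL TO ([PRIMARY]);

    -- With dbo.A built on psMonthly and dbo.A_Staging an empty table of
    -- identical structure on the same filegroup, the oldest month moves out
    -- without touching the data pages:
    ALTER TABLE dbo.A SWITCH PARTITION 1 TO dbo.A_Staging;

    -- Then drop the now-empty boundary to "slide" the window forward
    ALTER PARTITION FUNCTION pfMonthly() MERGE RANGE ('20110101');

    Note that table partitioning requires Enterprise edition on SQL Server of this era.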

  • Not sure if it makes sense or not, perhaps a guru could comment...

    Possibly drop the indexes, do the inserts, then recreate the indexes.
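
    A short sketch of that pattern (the index name and column are made up):

    -- Drop a nonclustered index before the bulk operation
    DROP INDEX IX_A_SomeDate ON dbo.A;

    -- ... bulk copy/delete runs here, unencumbered by index maintenance ...

    -- Recreate it afterwards; the rebuilt index is also fully defragmented
    CREATE NONCLUSTERED INDEX IX_A_SomeDate ON dbo.A (SomeDate);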

  • Are you using Standard or Enterprise edition?

    With Enterprise (if you have the space in tempdb) you could rebuild the indexes WITH (ONLINE = ON). Having the indexes defragmented would improve your delete performance.
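
    For example (table name from the thread):

    -- Enterprise-only: rebuild without taking the indexes offline
    ALTER INDEX ALL ON dbo.A REBUILD WITH (ONLINE = ON);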

    Whatever you do, you will need to delete in small batches, preferably driven by the clustered index or primary key (if clustered). I'd find the primary key value of the last row you want to archive/delete (i.e. a single lookup) and then use that to control the delete, rather than using the date column. IF THIS IS POSSIBLE.

    Since the data being archived can probably be extracted with a NOLOCK hint, the extract itself will at worst be slow, but no worse than reading through the table.
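
    Putting the single lookup and the NOLOCK extract together, a sketch (the key and date column names are assumptions):

    -- One-time lookup: the highest key value inside the archive range
    DECLARE @MaxArchiveId INT;
    SELECT @MaxArchiveId = MAX(Id)
    FROM dbo.A
    WHERE SomeDate < DATEADD(MONTH, -3, GETDATE());

    -- Extract with NOLOCK so the read doesn't block writers
    -- (accepting the usual dirty-read caveats)
    INSERT INTO dbo.A_Archive (Id, SomeDate, OtherCol)
    SELECT Id, SomeDate, OtherCol
    FROM dbo.A WITH (NOLOCK)
    WHERE Id <= @MaxArchiveId;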

    For the delete, you can write code that loops, along these lines (the batch size is illustrative; @MaxArchiveId comes from the lookup above):

    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        BEGIN TRAN;
        DELETE TOP (5000) FROM dbo.A WHERE Id <= @MaxArchiveId;
        SET @rows = @@ROWCOUNT;
        COMMIT TRAN;
        WAITFOR DELAY '00:00:05';  -- give other sessions room between batches
    END

    This way you don't hold locks on the table too long, but can just let the delete process run. 'Small' is relative and you'll need to test what works.

    Make sure you have regular log backups running!
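
    e.g. something along these lines, scheduled frequently while the deletes run (the database name and path are placeholders):

    BACKUP LOG [YourDb] TO DISK = N'X:\Backups\YourDb_log.trn';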

    Leo

    Is it Friday yet?
    Nothing in life is ever so complicated that with a little work it can't be made more complicated.

    Why is archiving taking so long? Are you trying to move everything at once, or are the queries themselves that slow? If the queries are running slowly because of the heavily fragmented indexes, then you may want to rebuild the indexes first, archive the data, and then rebuild the indexes again.

    Also, you should create a maintenance routine to keep the indexes defragmented over time rather than leave them alone for six months and then have to go through a crash course in database management.
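
    A minimal sketch of the decision such a routine would make, using the common 5%/30% reorganize-vs-rebuild rule of thumb (the thresholds and page-count floor are conventional assumptions, not a full maintenance solution):

    SELECT OBJECT_NAME(ps.object_id) AS table_name,
           i.name AS index_name,
           ps.avg_fragmentation_in_percent,
           CASE WHEN ps.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
                ELSE 'REORGANIZE' END AS suggested_action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
        ON i.object_id = ps.object_id
       AND i.index_id = ps.index_id
    WHERE ps.avg_fragmentation_in_percent >= 5
      AND ps.page_count > 100;  -- skip tiny indexes where fragmentation is noise

    Feed the output into ALTER INDEX ... REORGANIZE / REBUILD statements on a schedule, or use an established maintenance script.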

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software
