Shrink Database

  • simon-hulse (12/29/2015)


    hmm..

    I have just run a shrink file, and a shrink file with reorganise, on a 30 GB database. The DB has lots of indexes. Just checking a couple (both clustered and non-clustered), fragmentation didn't get over 5% on any index; typically it was around 1%.

    It will depend on how organised the data is in your DB before the shrink...

    I'm not sure that "checking a couple" will do. I'd run a full pass of sys.dm_db_index_physical_stats and make sure. If you can say the same after that, then I'd have to say that you got really, really lucky, with everything that needed to be moved being at the end of the file... unless you didn't actually use the reorganize option of the shrink.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • This is why I don't post on forums... I'm an experienced DBA of 20 years, having worked for some blue-chip companies along the way.

    Apologies for not being clear... I "eyeballed" some stats. I have written a small T-SQL query to get me what I need:

    SELECT
        ips.database_id,
        ips.object_id,
        OBJECT_NAME(ips.object_id)                  AS object_name,
        OBJECT_SCHEMA_NAME(ips.object_id)           AS schema_name,
        ips.index_id,
        i.name                                      AS IndexName,
        ips.index_type_desc,
        SUM(ips.fragment_count)                     AS fragment_count,
        AVG(ips.avg_fragmentation_in_percent)       AS avg_fragmentation_in_percent,
        SUM(ips.page_count)                         AS page_count,
        SUM(ps.row_count)                           AS row_count,
        p.data_compression,
        STATS_DATE(ips.object_id, ips.index_id)     AS lastupdated
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) ips
    LEFT OUTER JOIN sys.dm_db_partition_stats ps
        ON  ips.object_id = ps.object_id
        AND ips.index_id  = ps.index_id
    LEFT OUTER JOIN sys.indexes i
        ON  ips.object_id = i.object_id
        AND ips.index_id  = i.index_id
    LEFT OUTER JOIN sys.partitions p
        ON  i.object_id = p.object_id
        AND i.index_id  = p.index_id
    WHERE ips.alloc_unit_type_desc = 'IN_ROW_DATA'
      AND ips.index_id <> 0
    GROUP BY
        ips.database_id,
        ips.object_id,
        OBJECT_NAME(ips.object_id),
        OBJECT_SCHEMA_NAME(ips.object_id),
        ips.index_id,
        i.name,
        ips.index_type_desc,
        p.data_compression,
        STATS_DATE(ips.object_id, ips.index_id)

    The poor guy only asked whether to shrink his DB because his logging was aggressive and he wanted to tidy it up. It just amuses me that DBAs dive straight into "don't shrink" when the answer is actually "it depends"...

  • Before I posted, I googled around and found a good many articles/posts that basically said to 'Never, never, never, never' shrink the file because of what it does to the indexes. However, none of them seemed to mention that you could always rebuild the indexes (manually or via automation). (I'm assuming that that is a possibility if the decision is made to shrink the file.)
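    As a sketch of that shrink-then-rebuild sequence (the database, file, and table names below are hypothetical, not from this thread):

    ```sql
    -- 1. Shrink the data file after the big delete. Note that a plain
    --    SHRINKFILE with a target size moves pages and scrambles index
    --    order; TRUNCATEONLY instead releases only trailing free space
    --    without moving pages (and so without fragmenting).
    USE MyDb;
    DBCC SHRINKFILE (MyDb_Data, 10240);   -- target size in MB

    -- 2. Rebuild the indexes the shrink just fragmented. The rebuild
    --    itself needs working space, so the file may regrow somewhat.
    ALTER INDEX ALL ON dbo.MyBigTable REBUILD;
    ```

    Whether the end result is actually smaller than the starting point depends on how much working space the rebuilds need, which is the trade-off debated below.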

  • The option is there because sometimes you do need to shrink a DB. Like in your circumstances where you are deleting 6 months (I think it was) of logs that you don't need.

    You just shouldn't shrink for normal day-to-day operations.

    Do a shrink and a rebuild of indexes - then you will be on to the next topic of "don't rebuild indexes - it grows your database".

  • simon-hulse (12/29/2015)


    The option is there because sometimes you do need to shrink a DB. Like in your circumstances where you are deleting 6 months (I think it was) of logs that you don't need.

    You just shouldn't shrink for normal day-to-day operations.

    Do a shrink and a rebuild of indexes - then you will be on to the next topic of "don't rebuild indexes - it grows your database".

    There are ways to rebuild most indexes without growing the database. Clustered indexes are the exception if you need the tables to remain online during the rebuild.

    --Jeff Moden


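    One such approach Jeff may be alluding to (sketched here with a hypothetical index name) is to disable a nonclustered index before rebuilding it: disabling deallocates the index's pages, so the rebuild can reuse that freed space instead of growing the file. The catch is the clustered case he mentions: disabling a clustered index makes the table inaccessible until the rebuild completes.

    ```sql
    -- Disabling a nonclustered index drops its pages but keeps its definition...
    ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable DISABLE;

    -- ...so the rebuild can reuse the freed space rather than growing the file.
    ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable REBUILD;
    ```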

  • david.holley (12/29/2015)


    Before I posted, I googled around and found a good many articles/posts that basically said to 'Never, never, never, never' shrink the file because of what it does to the indexes. However, none of them seemed to mention that you could always rebuild the indexes (manually or via automation). (I'm assuming that that is a possibility if the decision is made to shrink the file.)

    Like I said, that's either because they forgot to mention it or they don't actually know what they're talking about.

    --Jeff Moden



  • Jeff Moden (12/29/2015)


    they don't actually know what they're talking about.

    Mostly this one, I suspect.

    And yes, you can rebuild indexes afterwards. Doing so will usually grow the data file and the log file, and if you only had a little free space before the shrink, it may well leave the DB as large as it was before. Hence the rule is usually "don't shrink". The only time I'll recommend shrinking is after a massive archive or deletion of data, when you know that the space won't be needed for DB growth for a good period of time (~6 months).

    Even then the question is why shrink. A file with free space doesn't slow SQL down. It doesn't cause larger backups. It just takes more space on disk. If that's not a problem for you, then there's no harm in leaving the DB as-is.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
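    If disk space is the only concern, the free space in each file can be checked before deciding, for example with a standard FILEPROPERTY query (nothing here is specific to this thread; size is reported in 8 KB pages, hence the division by 128):

    ```sql
    -- Compare allocated size to used space for each file in the current DB.
    SELECT  name,
            size / 128                                     AS size_mb,
            FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
            (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
    FROM    sys.database_files;
    ```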
  • simon-hulse (12/29/2015)


    It just amuses me that DBAs dive straight into "don't shrink" when the answer is actually "it depends"...

    Well, you have the normally-correct answer. "It depends" comes up a lot because it's usually true. Now, please don't let this discourage you from posting. I find that on this forum, the discussion is more valuable than the bottom-line answer.

    Most times, I'll say "don't shrink" because after the rebuild you can end up with the file being smaller, but not always. In my case, I don't have the option of online rebuilding because I'm on Standard Edition, so it requires a blackout window. That being said, I will be doing an archive of several years' worth of data soon, after which I will be doing a shrink. Now that we finally have an agreement on an archiving schedule, the file shouldn't require as much space as it has now.

Viewing 8 posts - 16 through 22 (of 22 total)
