Dropping and restoring DB to fix file fragmentation

  • Does anyone know if dropping a DB and then restoring from a backup will resolve file fragmentation? We have a couple of 400GB DBs on the same disk that are heavily fragmented. There's enough free space on the drive to hold each file a second time, so a normal defrag is possible, but the length of time the DB would be out of commission is a concern. The service account does have the Perform Volume Maintenance Tasks right, so the restore can take advantage of instant file initialization.

    Since I'm sure someone will bring up autogrowth settings as the cause of the fragmentation: they're currently set to 500MB, and that was a change made around 5 or 6 months ago.
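    For reference, a quick way to double-check the current file sizes and autogrowth settings (the database name below is a placeholder):

    ```sql
    -- File sizes and autogrowth settings; size and growth are stored in 8KB pages
    SELECT DB_NAME(database_id)            AS database_name,
           name                            AS logical_file,
           physical_name,
           size * 8 / 1024                 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + '%'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END                             AS autogrowth
    FROM sys.master_files
    WHERE database_id = DB_ID('YourBigDB');   -- hypothetical database name
    ```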

  • That should do it, since the restore will create a new file. Depending on the free-space extents on the drive after dropping, you might need to clear the drive off and then start restoring.
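    For anyone following along, the drop-and-restore sequence would look roughly like this (the database name and backup path are placeholders):

    ```sql
    -- Sketch of the drop-and-restore approach
    USE master;
    ALTER DATABASE YourBigDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE YourBigDB;   -- the MDF/NDF/LDF files are deleted with the DB

    -- The restore recreates the files from scratch; with Perform Volume
    -- Maintenance Tasks granted, the data file can be claimed via
    -- instant file initialization (the log file is always zeroed).
    RESTORE DATABASE YourBigDB
    FROM DISK = N'X:\Backups\YourBigDB.bak'
    WITH RECOVERY, STATS = 10;
    ```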

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • There isn't much on the drive outside of DB files, so we should be good with that, but it is something to look out for since the files we're putting back are almost a third of the disk each. Thanks.

  • Let us know how this works. I think it should work fine for the file fragmentation of the MDF, but it shouldn't change any internal fragmentation of the tables inside the MDF.

  • Unless reindexing hasn't been doing its job, I'm not worried about internal fragmentation. I'll update this post with how it goes (or with whether we decide to do a straight defrag due to the expected restore time). Since this DB is in use almost 24/7 between loads, users, automated reports, and normal maintenance scheduling, the downtime is going to be fun.
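    For the record, a quick check of internal index fragmentation looks something like this (the thresholds here are just illustrative):

    ```sql
    -- Logical (internal) fragmentation per index in the current database
    SELECT OBJECT_NAME(ips.object_id)      AS table_name,
           i.name                          AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30   -- illustrative cutoff
      AND ips.page_count > 1000                   -- ignore tiny indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```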

  • We've done a similar thing from time to time. We would detach the database, move the files to a clean volume and then run a defrag utility on the disk. Once done, move the files back and reattach.
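    Roughly, the detach/defrag/reattach sequence looks like this (names and paths are placeholders):

    ```sql
    -- Detach, defragment at the OS level, then reattach
    USE master;
    ALTER DATABASE YourBigDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    EXEC sp_detach_db @dbname = N'YourBigDB';

    -- ...move the files to a clean volume, run the defrag utility,
    --    move them back...

    CREATE DATABASE YourBigDB
    ON (FILENAME = N'X:\Data\YourBigDB.mdf'),
       (FILENAME = N'Y:\Logs\YourBigDB_log.ldf')
    FOR ATTACH;
    ```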

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • cfradenburg (12/2/2011)


    Unless reindexing hasn't been doing its job, I'm not worried about internal fragmentation. I'll update this post with how it goes (or with whether we decide to do a straight defrag due to the expected restore time). Since this DB is in use almost 24/7 between loads, users, automated reports, and normal maintenance scheduling, the downtime is going to be fun.

    If the DB is really critical and has to be up 24/7, I would be surprised if you don't have any high-availability options enabled (mirroring, log shipping, replication, etc.).

    Please fail over to the secondary database (server) and defrag the disk, then fail back to the primary server and remove the internal fragmentation as well.

  • Dev (12/2/2011)


    cfradenburg (12/2/2011)


    Unless reindexing hasn't been doing its job, I'm not worried about internal fragmentation. I'll update this post with how it goes (or with whether we decide to do a straight defrag due to the expected restore time). Since this DB is in use almost 24/7 between loads, users, automated reports, and normal maintenance scheduling, the downtime is going to be fun.

    If the DB is really critical and has to be up 24/7, I would be surprised if you don't have any high-availability options enabled (mirroring, log shipping, replication, etc.).

    Please fail over to the secondary database (server) and defrag the disk, then fail back to the primary server and remove the internal fragmentation as well.

    That might be a better plan, if failback doesn't require a re-sync that would refragment or replace the MDF file anyway.

    If, for example, failback requires restoring from a backup of the failover database, then defragging before failback isn't really needed.

    If, on the other hand, the main means of failover is moving to a secondary cluster node attached to the same SAN, then failover/failback won't help in this situation.

    Or if failover-failback is more expensive in terms of labor and downtime than simply taking the server offline for a few minutes while running a drop and restore, then it won't be as good an option.

    But do look into various options on this.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Dev (12/2/2011)


    If the DB is really critical and has to be up 24/7 I would be surprised if you say you don’t have any High Availability options enabled (Mirroring, Log Shipping and Replication etc.)

    Please failover to Secondary database (server) and defrag the disk. Failover back to Primary Server & remove internal fragmentation as well.

    It's clustered, so failing over won't buy us anything. As to whether or not that's the best HA for this environment, it was well established when I got here, so it's not something I had the privilege of helping decide. Realistically, it shouldn't be 24x7; it's just a very important reporting server with major performance issues that make the required work take too long. Unfortunately, until the second DBA starts we won't have time to hit anything but the fruit that falls off the tree. The defrag was prompted by a checkdb failure due to insufficient system resources (http://blogs.msdn.com/b/psssql/archive/2009/03/04/sparse-file-errors-1450-or-665-due-to-file-fragmentation-fixes-and-workarounds.aspx).

  • cfradenburg (12/2/2011)


    Dev (12/2/2011)


    If the DB is really critical and has to be up 24/7 I would be surprised if you say you don’t have any High Availability options enabled (Mirroring, Log Shipping and Replication etc.)

    Please failover to Secondary database (server) and defrag the disk. Failover back to Primary Server & remove internal fragmentation as well.

    It's clustered, so failing over won't buy us anything. As to whether or not that's the best HA for this environment, it was well established when I got here, so it's not something I had the privilege of helping decide. Realistically, it shouldn't be 24x7; it's just a very important reporting server with major performance issues that make the required work take too long. Unfortunately, until the second DBA starts we won't have time to hit anything but the fruit that falls off the tree. The defrag was prompted by a checkdb failure due to insufficient system resources (http://blogs.msdn.com/b/psssql/archive/2009/03/04/sparse-file-errors-1450-or-665-due-to-file-fragmentation-fixes-and-workarounds.aspx).

    I will keep a note in my collection. One more reason for not opting for SQL Server clustering... 😉

  • Dev (12/2/2011)


    I will keep a note in my collection. One more reason for not opting for SQL Server clustering... 😉

    There are some very good reasons to use a cluster, the biggest being that the instance name doesn't change, and that you don't need to duplicate storage if that's an issue. On servers with a lot of space, that can be a deciding factor. However, not duplicating storage also has its drawbacks.

  • cfradenburg (12/2/2011)


    Dev (12/2/2011)


    I will keep a note in my collection. One more reason for not opting for SQL Server clustering... 😉

    There are some very good reasons to use a cluster, the biggest being that the instance name doesn't change, and that you don't need to duplicate storage if that's an issue. On servers with a lot of space, that can be a deciding factor. However, not duplicating storage also has its drawbacks.

    Not just a drawback; I would say a major drawback. We are talking about high availability of the data (the database), yet the data itself is not duplicated. Really, it's high availability of the instance only.

  • Dev (12/2/2011)


    cfradenburg (12/2/2011)


    Dev (12/2/2011)


    I will keep a note in my collection. One more reason for not opting for SQL Server clustering... 😉

    There are some very good reasons to use a cluster, the biggest being that the instance name doesn't change, and that you don't need to duplicate storage if that's an issue. On servers with a lot of space, that can be a deciding factor. However, not duplicating storage also has its drawbacks.

    Not just a drawback; I would say a major drawback. We are talking about high availability of the data (the database), yet the data itself is not duplicated. Really, it's high availability of the instance only.

    As a primary HA option, clustering the server and keeping the data on a high-reliability SAN can be a very good choice. You want an off-site solution as well, if your data or your uptime is worth it (it often is), but switching to an off-site solution just because you need to patch Windows on the server is major overkill. It's trivial to do on a cluster, though.

    Keep in mind that any SAN that's been competently set up does offer data redundancy, just like any other RAID-5/6/10/01/1 solution. There are other benefits to a properly set up and administered SAN, but that's one of them. Failure of a drive or even a whole I/O channel won't lose data, for one thing.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • You can reduce your backup time by using one of the third-party tools that compresses backups.

    Redgate's SQL Backup

    Quest's LiteSpeed

    Idera's SQL Safe

    All of the above are good options, and each comes with a fully functional trial edition. I can back up a 1.5TB database in about 2 hours using LiteSpeed, and my 300GB database gets backed up in less than 30 minutes.
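    If you're on SQL Server 2008 or later (Enterprise Edition; Standard from 2008 R2 onward), native backup compression is also an option, with no third-party tool required:

    ```sql
    -- Native compressed backup (database name and path are placeholders)
    BACKUP DATABASE YourBigDB
    TO DISK = N'X:\Backups\YourBigDB.bak'
    WITH COMPRESSION, STATS = 10;
    ```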

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Ugh! Be careful, folks. On a "crowded" hard disk, dropping and restoring a DB may not be as effective for defragging as you'd think: there may be little contiguous room left on the drive.
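    One way to check beforehand (Windows-only tools; the path below is a placeholder): Sysinternals Contig can report how many fragments a file is in, and the built-in defrag tool can analyze a volume, including its free space.

    ```shell
    rem Analyze fragmentation of the data file with Sysinternals Contig
    rem (run from an elevated prompt)
    contig -a "X:\Data\YourBigDB.mdf"

    rem Built-in defrag analysis of the whole volume (Win7/2008 R2 syntax)
    defrag X: /A /V
    ```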

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

