How to copy a database with one big mdf file to another server with multiple smaller drives.

  • We have a server with a database that has a single filegroup (PRIMARY): one mdf and one ldf file, residing on two drives of 1 TB and 300 GB. The mdf file is currently about 800 GB and roughly 98% full.
    We need to take a backup of this database and restore it to another server. The target server has four drives, but each is only 500 GB.
    Is there a way to restore the database with its 800 GB mdf file to the target server with the smaller drives? I don't see a way to run the restore script with MOVE so that the one mdf is split across multiple files on the target.
    Other than adding more disk space to the drives on the target server, is there a way we can do this restore?

    Thanks,
    BVip.
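    For reference, a minimal sketch of why WITH MOVE doesn't help here: it can only relocate each existing file one-for-one, not split a file into several pieces. All logical names and paths below are hypothetical.

    ```sql
    -- WITH MOVE relocates files one-for-one; it cannot split an 800 GB mdf
    -- across several smaller drives. Names and paths are hypothetical.
    RESTORE DATABASE BigDb
    FROM DISK = N'\\backupshare\BigDb.bak'
    WITH MOVE N'BigDb'     TO N'D:\Data\BigDb.mdf',   -- still one 800 GB file
         MOVE N'BigDb_log' TO N'E:\Logs\BigDb_log.ldf',
         STATS = 10;
    ```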


  • Yep. Realistically, the only option is to expand the disk drives on the destination server. Alternatively, if the destination server has one drive that is big enough, you can restore to that drive first and then move the database files off to the other drives afterwards.

  • Thanks for the reply. Unfortunately, we don't have any large drive on the target server.

    I don't know whether the fact that this database has essentially one table, with records of 15 to 20 MB each, gives us another way to move the table/database without using backup/restore. All the space is obviously being taken up by that single table, which is what has pushed the source mdf file to almost 800 GB.

    I wonder if we could create two smaller databases on the target, transfer records from the source table into two target tables (one in each database), then create a brand-new database on the target with files spread across the drives, and finally move the records from the two tables into the single final table. That may work, although we'd need to check that the target has enough disk space for such an operation.

  • I've not had to work at the OS level in about a decade because I've worked for companies that have a separate group for such things, but take a look at the following Google search for possibilities. In theory (I say that because I've not personally done it), you can make the four drives work as a single volume/drive.

    https://www.google.com/search?q=combine+multiple+disks+into+one&oq=combine+multiple+disks
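    In case it helps, a rough sketch of pooling the drives with Windows Storage Spaces. A simple (striped) space is effectively RAID 0, so this only suits a server where redundancy isn't a concern; the friendly names and drive letter are hypothetical and I haven't run this exact script.

    ```powershell
    # Pool the four 500 GB drives into one ~2 TB simple (striped) volume.
    # Friendly names and the drive letter are hypothetical.
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "DataPool" `
        -StorageSubSystemFriendlyName "Windows Storage*" `
        -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataVD" `
        -ResiliencySettingName Simple -UseMaximumSize
    Get-VirtualDisk -FriendlyName "DataVD" | Initialize-Disk -PassThru |
        New-Partition -DriveLetter F -UseMaximumSize |
        Format-Volume -FileSystem NTFS
    ```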

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Bharatvip - Thursday, January 25, 2018 5:53 PM

    Thanks for the reply. Unfortunately, we don't have any large drive on the target server.

    I don't know whether the fact that this database has essentially one table, with records of 15 to 20 MB each, gives us another way to move the table/database without using backup/restore. All the space is obviously being taken up by that single table, which is what has pushed the source mdf file to almost 800 GB.

    I wonder if we could create two smaller databases on the target, transfer records from the source table into two target tables (one in each database), then create a brand-new database on the target with files spread across the drives, and finally move the records from the two tables into the single final table. That may work, although we'd need to check that the target has enough disk space for such an operation.

    There are some other options:
    1. Create a new filegroup (or several) and migrate indexes into it, so your data is divided among multiple data files. Then shrink the primary.
    2. Similar to 1, but partition your biggest table across the new filegroup(s).
    3. Create a new filegroup and temporarily move all the user tables there. Then shrink the primary and add several new files to it. Then migrate all the user data back to the primary, where it will be distributed evenly among the files. Finally, remove the filegroup you used temporarily.
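    A rough T-SQL sketch of option 3. All database, table, index, and file names, sizes, and paths here are hypothetical; each table is moved by rebuilding its clustered index on the target filegroup.

    ```sql
    -- 1. New filegroup to hold the data temporarily (names/paths hypothetical).
    ALTER DATABASE BigDb ADD FILEGROUP Staging;
    ALTER DATABASE BigDb ADD FILE
        (NAME = N'BigDb_stg', FILENAME = N'E:\Data\BigDb_stg.ndf')
        TO FILEGROUP Staging;

    -- 2. Move a table by rebuilding its clustered index on the new filegroup.
    CREATE UNIQUE CLUSTERED INDEX PK_BigTable ON dbo.BigTable (Id)
        WITH (DROP_EXISTING = ON) ON Staging;

    -- 3. Shrink the now-mostly-empty primary file, then add files to PRIMARY.
    DBCC SHRINKFILE (N'BigDb', 1024);  -- target size in MB
    ALTER DATABASE BigDb ADD FILE
        (NAME = N'BigDb_2', FILENAME = N'F:\Data\BigDb_2.ndf', SIZE = 200GB);

    -- 4. Rebuild back onto PRIMARY (data spreads proportionally across its
    --    files), then drop the temporary file and filegroup.
    CREATE UNIQUE CLUSTERED INDEX PK_BigTable ON dbo.BigTable (Id)
        WITH (DROP_EXISTING = ON) ON [PRIMARY];
    ALTER DATABASE BigDb REMOVE FILE BigDb_stg;
    ALTER DATABASE BigDb REMOVE FILEGROUP Staging;
    ```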

  • This requires data movement. There is no way to restore and end up with smaller files. You could combine the disks into one volume, essentially a RAID 0 approach, which is fine if this is a dev server.

    You can ETL the data from one instance to another. All of these operations are logged, which might create a log space issue on the new server. You'd have to guess at sizing there, or run the database in SIMPLE recovery while you ETL.

    You can also resize on the main instance, creating filegroups/files that are smaller and moving the tables among them to balance the space across filegroups and files. Then you can restore with different paths on the secondary server.
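    Once the source database has been rebalanced into several files that each fit under 500 GB, the restore can place them individually. A sketch along those lines, with hypothetical logical names and paths:

    ```sql
    -- After rebalancing, each file fits a 500 GB drive and WITH MOVE
    -- can place every file on its own drive. Names/paths are hypothetical.
    RESTORE DATABASE BigDb
    FROM DISK = N'\\backupshare\BigDb.bak'
    WITH MOVE N'BigDb'     TO N'D:\Data\BigDb.mdf',
         MOVE N'BigDb_2'   TO N'E:\Data\BigDb_2.ndf',
         MOVE N'BigDb_3'   TO N'F:\Data\BigDb_3.ndf',
         MOVE N'BigDb_log' TO N'G:\Logs\BigDb_log.ldf';
    ```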

  • You could take three of your 500 GB drives and create a RAID-5 configuration for the data drive, and use the remaining drive for the log file. The RAID would give you around 930 GB of usable space. That might not be enough for future growth, though, considering your database is already at 800 GB. Depending on the read/write profile of your database, RAID-5 might not be the best option, but I just thought I would throw this option out there.

  • Apparently, the OP has left the building.

    --Jeff Moden



  • If you have enough total space, create multiple files in the primary filegroup on the destination server, with enough space allocated for all the data you want to move. Then create an SSIS package to extract the data from the source system and load it into the test system (the single table in that database).

    Make sure you set the batch/commit sizes on the OLE DB Destination in SSIS to a reasonable size (I start with 200,000). This will issue a commit every 200,000 rows, keeping the transaction log from growing out of control. You can increase this size, but don't go too large, as that could cause the commit to take too long.
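    For reference, the same batching idea in plain T-SQL, assuming a linked server and an ordered key column. Everything here, including server, database, table, and column names, is hypothetical.

    ```sql
    -- Copy in 200,000-row batches so each commit stays small; with the
    -- target database in SIMPLE recovery, the log is reused between batches.
    DECLARE @LastId bigint = 0, @Rows int = 1;
    WHILE @Rows > 0
    BEGIN
        INSERT INTO dbo.BigTable (Id, Payload)
        SELECT TOP (200000) Id, Payload
        FROM SourceServer.SourceDb.dbo.BigTable
        WHERE Id > @LastId
        ORDER BY Id;

        SET @Rows = @@ROWCOUNT;
        SELECT @LastId = ISNULL(MAX(Id), @LastId) FROM dbo.BigTable;
    END;
    ```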

    Jeffrey Williams
    Problems are opportunities brilliantly disguised as insurmountable obstacles.

    How to post questions to get better answers faster
    Managing Transaction Logs
