Deleting a large amount of data from a table

  • Oliii, the bcp process finally completed and here are the running results:

    Current server - bcp for 20,547,681 records, 812.57 mins @422 rows/sec.

    I tested it on the new server I'm eventually moving everything to (including the /b flag set at 25,000):

    bcp for 20,547,681 records, 20.17 mins @15,794 rows/sec.

    What a difference...

    Recreating the clustered index and then the NCIs now...

    Thanks for all your help!

    ______________________________________________________________________________Never argue with an idiot; they'll drag you down to their level and beat you with experience

  • For the tablock hint, you can use the -h option in bcp.exe, so you get something like this:

    bcp "dw_load.dbo.DW_Fleet_Load_Progress" in f:\DW_Fleet_Load_Progress.dat -T -c -q /b 25000 -h "TABLOCK"

    If rebuilding the clustered index takes too long, you can try leaving it in place during the load.

    During a bcp in, the nonclustered indexes are only rebuilt at the end of each batch, so if you do not set a batch size everything is rebuilt at the very end. That can take a long time and be very annoying if you run out of space and it rolls back for hours.

    If you set a batch size, it only sorts the data in small batches, and you can avoid running into problems at the end.

    Having text/ntext fields in the table slows down the import/export a lot, so you might want to give varchar(max)/nvarchar(max) a shot. You just need to change the data type before importing your data.

    In my experience, changing a single text field to a varchar(max) increased the import speed 3x (on a narrow table with 5 billion records). Most of the data in the text fields was under 8 KB, though, which is probably why it improved the speed so much.
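    Putting the advice in this thread together, the load could be scripted roughly as below. This is a minimal sketch, not a definitive procedure: the database, table, and data-file names are taken from the command quoted above, but the `notes` column is a hypothetical placeholder for whichever text/ntext column you are converting, and the commands are only echoed here rather than executed:

    ```shell
    #!/bin/sh
    # Names from the thread's example command; substitute your own.
    DB="dw_load"
    TABLE="dbo.DW_Fleet_Load_Progress"
    DATAFILE='f:\DW_Fleet_Load_Progress.dat'

    # 1. Convert any text/ntext columns to varchar(max)/nvarchar(max) before
    #    importing ('notes' is a hypothetical column name).
    ALTER_CMD="sqlcmd -E -d $DB -Q \"ALTER TABLE $TABLE ALTER COLUMN notes varchar(max)\""

    # 2. Import with a batch size (-b) so each batch is committed and sorted
    #    separately, and with the TABLOCK hint (-h) on the target table.
    BCP_CMD="bcp $DB.$TABLE in $DATAFILE -T -c -q -b 25000 -h TABLOCK"

    # Dry run: print the commands instead of running them.
    echo "$ALTER_CMD"
    echo "$BCP_CMD"
    ```

    Running the script prints both commands so you can review them before executing anything against the server.
    
    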
