Is it possible to send the log file to a NUL device?

  • We are deleting massive amounts of data from a SQL Server 2000 database. We have set the recovery mode to Simple and force a checkpoint every 5 minutes, but the operation is still taking longer than we had hoped. We did a full backup before starting the delete and do not care about the logged transactions. If something fails during the delete, we are OK with having to do a full restore.

    On other systems we use, we can redirect files to a null device. Is it possible to direct the output of the logs to a NUL device? Our idea is to create a memory device and then point the log file at the NUL of this device.

    Any help would be greatly appreciated.

    Thanks

  • Setting the recovery mode to Simple will have no effect on performance, since the DELETEs are still fully logged.

    No, you can't place the log file on a NULL device; SQL Server must have a real log file. Placing that log file on a fast I/O subsystem that is dedicated to it would probably help, though.

    /*****************

    If most people are not willing to see the difficulty, this is mainly because, consciously or unconsciously, they assume that it will be they who will settle these questions for the others, and because they are convinced of their own capacity to do this. -Friedrich August von Hayek

    *****************/

  • OK.  Thanks for the help on the log file device.  I thought that would probably be the case.

    Should we be using the Bulk-Logged recovery model? Based on the documentation, Simple sounded like the best choice for doing massive amounts of deletes.

    It is already on a very fast I/O subsystem.


  • Since you are doing DELETEs, the Bulk-Logged model will not help. The only operations that benefit from that recovery model are bulk operations such as BCP, BULK INSERT, and SELECT INTO.


  • If you can get a RAM disk set up, you could put the log file there. If you do this, you may need to reduce the checkpoint interval so the RAM disk does not fill up.

    Be aware that this idea is just a technical solution for your problem as stated. It would be very unwise to put a log file on a RAM disk in almost every circumstance, as the database would need to be recovered from the last backup if the system crashed.
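    For what it's worth, moving an existing log file in SQL Server 2000 generally means a detach/attach, since that version cannot relocate a user database's log file through ALTER DATABASE. A sketch of the steps (the database name, file paths, and the R: RAM-disk drive letter are all placeholders):

```sql
-- Placeholder names and paths throughout; the RAM disk is assumed to be R:
EXEC sp_detach_db 'MyDB';

-- Move MyDB_log.ldf from its current location to R:\ outside SQL Server,
-- then re-attach the database pointing at the new log path:
EXEC sp_attach_db @dbname    = 'MyDB',
                  @filename1 = 'D:\Data\MyDB.mdf',
                  @filename2 = 'R:\MyDB_log.ldf';
```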


    Original author: https://github.com/SQL-FineBuild/Common/wiki/ — 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • For what it's worth...

    Can you "batch" your deletes into more granular segments?  The faster you commit your deletes and run a checkpoint, the smaller you can keep the log.
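    A rough sketch of batching on SQL Server 2000 (the table name, batch size, and predicate below are placeholders). SET ROWCOUNT caps each DELETE at a small batch so that, under the Simple recovery model, the checkpoint can free log space between batches:

```sql
-- SET ROWCOUNT is the SQL Server 2000 idiom; DELETE TOP (n) arrived in 2005.
SET ROWCOUNT 10000;                   -- batch size: tune to your log capacity

WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigTable
    WHERE ArchiveDate < '20040101';   -- placeholder predicate

    IF @@ROWCOUNT = 0 BREAK;          -- nothing left to delete

    CHECKPOINT;                       -- let Simple recovery reuse log space
END

SET ROWCOUNT 0;                       -- restore normal behaviour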

  • Thanks for the input. We will try this. We were hoping to be able to use a RAM disk with the NUL device, so this looks like the best option for our situation.

  • Just out of curiosity - what percentage of the data in the table are you deleting?

    Another possible solution would be to copy the data you wish to keep into a new table via a SELECT INTO or bulk copy task, or bcp it out to the file system. Reload the data into a new table (e.g. mytable_new), create your indexes, etc., and then rename the two tables. Set the database to Bulk-Logged during the SELECT INTO/bulk copy/bcp operation for best performance.

    It sounds like you can do pretty much whatever you want with this data and database (e.g. no big deal if it blows up). If there is nothing else in the database other than the table you wish to delete the data from, you might even consider bcp'ing the data out to the file system, then dropping and recreating the database (script your table, indexes, constraints, etc.) at a smaller size, then reloading the data and recreating your indexes, etc. If there is a clustered index on the table, make sure that you create it before you start to reload the data - creating a clustered index after the fact is painful.
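    A sketch of the keep-and-swap approach (all object names and the date predicate are placeholders; switch the database to Bulk-Logged first so the SELECT INTO is minimally logged):

```sql
-- Copy only the rows you want to keep into a new table.
SELECT *
INTO   dbo.MyTable_new
FROM   dbo.MyTable
WHERE  ArchiveDate >= '20040101';     -- placeholder: rows to keep

-- Recreate indexes/constraints on dbo.MyTable_new here, then swap the names:
EXEC sp_rename 'dbo.MyTable', 'MyTable_old';
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';

-- Once verified, drop the old table to reclaim the space:
DROP TABLE dbo.MyTable_old;
```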


  • TRUNCATE TABLE

    Removes all rows from a table without logging the individual row deletes.

    Syntax

    TRUNCATE TABLE name

    Since the individual row deletes are not logged, the log growth may no longer be an issue.

    The above works only if you want to get rid of all the rows in a table.

    HTHs

    Gary

  • Would it make any sense to create a temp table to insert the data to be saved, then truncate the original table, then reinsert into the original table from the temp table (or rename the temp table, etc.)?

    Gary
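    That idea might look something like this (all names and the filter are placeholders; it only pays off when the rows being kept are a small fraction of the table):

```sql
-- Save the rows to keep in a holding table.
SELECT *
INTO   dbo.MyTable_keep
FROM   dbo.MyTable
WHERE  ArchiveDate >= '20040101';     -- placeholder: rows to keep

TRUNCATE TABLE dbo.MyTable;           -- deallocates all pages, minimal logging

INSERT INTO dbo.MyTable
SELECT * FROM dbo.MyTable_keep;       -- reinsert only the rows being kept

DROP TABLE dbo.MyTable_keep;
```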


  • These ideas are great, but because of the 3rd-party software we are using, we are limited in how we can manipulate the tables and data. The experts say it has to be done with deletes.

    We are removing data from thousands of tables. It is not pretty. It is part of one of those merger projects...

    I believe I got the answer to my question.

    Thanks for all the help!!
