What isolation level and lock type should I use?

  • Hi,

    I have a C# program that calls an SP (MSSQL 2000), and I want the SP to first SELECT some rows and then UPDATE ONLY those selected rows. As the SP will run for quite a long time, I want it to

    • allow other users to insert rows while the SP is running

    but I don't want

    • other users to modify the rows I have selected
    • to update rows that are inserted between my SELECT and UPDATE

    Here is my SP (simplified version)

    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

    BEGIN TRAN

    /* Store the required rows in a table variable */

    DECLARE @SelectedID TABLE (ID char(10) NOT NULL PRIMARY KEY)

    INSERT INTO @SelectedID
    SELECT ID FROM tableA INNER JOIN tableB ON ... WHERE ....

    /* Select all required rows via @SelectedID; this result is returned as a DataTable to my C# program */

    SELECT fieldA, ... FROM tableA INNER JOIN tableB ON ... WHERE ID IN (SELECT ID FROM @SelectedID)

    /* Update ONLY the selected rows */

    UPDATE tableA SET fieldA = 'whatever' WHERE ID IN (SELECT ID FROM @SelectedID)

    COMMIT TRAN

    The problem with this SP is that other users cannot insert rows while it is running (they have to wait until it finishes). What isolation level and lock type should I use?

    Thanks in advance.

    Michael

  • One way to do it is to use a cursor, e.g.

    DECLARE @SelectedID char(10)

    DECLARE Cur_Var CURSOR FOR
    SELECT ID FROM tableA INNER JOIN tableB ON ... WHERE .... FOR UPDATE

    OPEN Cur_Var

    FETCH NEXT FROM Cur_Var INTO ..... @SelectedID

    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE tableA SET fieldA = 'whatever' WHERE ID = @SelectedID
        FETCH NEXT FROM Cur_Var INTO ..... @SelectedID
    END

    CLOSE Cur_Var
    DEALLOCATE Cur_Var

  • I suspect that what might be happening is this:

    When you run the update statement, SQL Server takes out exclusive locks on the rows you are updating. However, it might need to escalate those locks to page or table level. Either way, once the exclusive lock has been escalated to, say, page level, any attempt to insert data into the same page will be blocked until the lock is released.
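    A quick way to check whether that is what is happening: while the SP's transaction is still open, inspect its locks from a second connection. A rough sketch (sp_lock and sp_who2 ship with SQL Server 2000):

    EXEC sp_lock    -- Type column shows RID/KEY (row), PAG (page) or TAB (table) locks per spid
    EXEC sp_who2    -- the BlkBy column shows which spid is blocking the waiting inserts

    If sp_lock shows PAG or TAB locks held by the SP's spid, escalation has occurred.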

    Not quite sure what your best option might be here, but I wouldn't use cursors. If the above sproc is taking a long time to run now, it will certainly take a whole lot longer with cursors.

    Sorry I don't have a solution for you.

  • try:

    BEGIN TRAN

    SELECT * FROM tableA WITH (HOLDLOCK)

    /* any other processes or users that try to access tableA are now blocked */

    UPDATE tableA SET field = 'tutu' WHERE id = 1

    COMMIT TRAN

    /* now other users can access tableA again */

    Regards,

    Victor

    You may want to use UPDLOCK so that other users are not blocked from reading your data while you complete your update process.

    Please check Locking Hints in SQL Server BOL.

     

    Thanks.

    Prasad Bhogadi
    www.inforaise.com

  • Repeatable Read looks to be the correct isolation level for the situation you described. 

    The use of HOLDLOCK will only make the situation worse, as it raises the isolation level to Serializable and places range locks on the keys, preventing inserts/updates even of rows you have not read.

    The use of UPDLOCK can help prevent any deadlocks you may be getting, possibly from more than one user running the same proc at the same time, but once the update process starts, exclusive locks are placed on the rows/pages and no other reads or updates are possible until commit/rollback.

    The taking of exclusive locks is going to be the same for Repeatable Read as for UPDLOCK.

     

  • Michael,

    It looks like you are setting the isolation level to Repeatable Read, selecting rows and then returning to the client. I'm not sure, but this may be leaving shared or intent-shared locks on pages, which might be preventing others from inserting.

    Are users prevented from inserting after the selection process but before the updates start?

    If you can, you should try using some kind of optimistic locking where you read and return the rows to the client without using Repeatable Read.  Then do your updates only to rows where the Timestamp column has not changed (or some other method of detecting rows that have not changed since they were read).
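    For example, optimistic locking with a timestamp column might look roughly like this (RowVer and @OrigRowVer are placeholder names, not from the original SP):

    /* 1. Read the row plus its timestamp; no special isolation level needed */
    SELECT fieldA, RowVer FROM tableA WHERE ID = @ID

    /* 2. Later, update only if the row is unchanged since it was read */
    UPDATE tableA
    SET fieldA = 'whatever'
    WHERE ID = @ID AND RowVer = @OrigRowVer  -- value captured in step 1

    IF @@ROWCOUNT = 0
        PRINT 'Row was changed by someone else; re-read and retry, or report a conflict'

    Because no locks are held between the read and the update, other users can insert and read freely the whole time.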

     

  • Thanks for all your replies. UPDLOCK is the key, special thanks to Prasad Bhogadi and ron_k.

    To ron_k,

    >> Are users prevented from inserting after the selection process but before  the updates start?

    No, other users can still insert after the selection process. That is the whole reason for using a table variable to store the rows I want to update later (instead of reusing the same WHERE clause as the SELECT).

    Here is the solution:

    /*
    No need to use REPEATABLE READ; use READ COMMITTED + UPDLOCK instead:
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
    */

    BEGIN TRAN

    /* Store the required rows in a table variable */

    DECLARE @SelectedID TABLE (ID char(10) NOT NULL PRIMARY KEY)

    INSERT INTO @SelectedID
    SELECT ID FROM tableA INNER JOIN tableB ON ... WHERE ....

    /* Select all required rows; this result is returned as a DataTable to my C# program */

    SELECT fieldA, ... FROM tableA WITH (UPDLOCK) INNER JOIN tableB WITH (UPDLOCK) ON ... WHERE ID IN (SELECT ID FROM @SelectedID)

    /* Update ONLY the selected rows */

    UPDATE tableA SET fieldA = 'whatever' WHERE ID IN (SELECT ID FROM @SelectedID)

    COMMIT TRAN

  • Michael,

    I'm not sure why UPDLOCK seems to help in this situation but so be it. 

    One potential problem now is that between the time you Insert into @SelectedId and the time you Select from tableA the rows may have been modified and no longer meet the Where condition specified when Inserting into @SelectedId.  Maybe the UPDLOCK can be moved into the Select portion of the Insert into @SelectedId statement?
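    That suggestion would look roughly like this (a sketch using the placeholder names from the thread; the hints now cover the very first read):

    INSERT INTO @SelectedID
    SELECT ID
    FROM tableA WITH (UPDLOCK) INNER JOIN tableB WITH (UPDLOCK) ON ...
    WHERE ....

    /* The update locks are taken when the IDs are first read and are held
       until the transaction ends, so the qualifying rows cannot be modified
       by others between this INSERT and the later UPDATE */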

     

  • I would review using the update lock and make sure that it really does work.  I did a test in Northwind and tried to update multiple rows in Customers (using an update lock) and then tried to do an insert with another process.  The insert just sat there waiting for the first process (the one doing the update) to commit.

    If you think about it, there's no reason why an update lock should fix your problem (it could just be coincidence that it seems to have done so, and you may find that with larger or different data sets it won't work).

    An update lock is only issued while data is being read (i.e. whenever you do an update, SQL Server first has to read the data it is going to update). SQL Server then tries to update the row(s), at which point it takes an exclusive lock on the resource. Whenever a data modification is attempted, an exclusive lock is issued, and if necessary SQL Server will escalate that exclusive lock to a page-level lock, or even worse, a table lock.

    At that point you may find that you won't be able to insert any more rows into the table because the page you're trying to insert into is locked.

    I'd just triple check that the update lock is doing what you want it to be doing all of the time.
