Generate Next Key from Stored Procedure

• Hmm, I am a bit puzzled as to why you all wish to use a stored procedure when a simple insert trigger will do the job:

CREATE TRIGGER TI_zMaster ON zMaster FOR INSERT AS
BEGIN -- TI_zMaster
DECLARE @RegNumber int
--
SELECT @RegNumber = ISNULL( MAX( RegistrationKey ), 0 ) + 1
FROM zMaster
--
-- Update key
--
UPDATE zMaster
SET RegistrationKey = @RegNumber
FROM INSERTED
WHERE RegistrationKey = INSERTED.RegistrationKey
--
END -- TI_zMaster
GO

--

Diwakar

--

  • Wow what a thread this has become.

Diwakar, the point, if you look back, is to ensure an advancing number without gaps.

Although your code does work for a single insert, what happens when multiple inserts occur at the same exact moment? How does your code uniquely identify the INSERTED.RegistrationKey to ensure it will not step on the other incoming data or preexisting data? I am not saying it won't work, but how do you ensure it in your code? How did you test?

  • Antares686,

    As long as you maintain a transaction, this will work very well.

    Since the trigger will increment the value to 1 if there is no record in the table, you will never have a situation where the value will be 0.

    So, whilst testing, pass a value of 0 to RegistrationKey in zMaster.

    Here is the sample test script :

create table zMaster
( RegistrationKey integer
)
go
CREATE TRIGGER TI_zMaster ON zMaster FOR INSERT AS
BEGIN -- TI_zMaster
DECLARE @RegNumber int
--
SELECT @RegNumber = ISNULL( MAX( RegistrationKey ), 0 ) + 1
FROM zMaster
--
-- Update key
--
UPDATE zMaster
SET zMaster.RegistrationKey = @RegNumber
FROM INSERTED
WHERE zMaster.RegistrationKey = INSERTED.RegistrationKey
--
END -- TI_zMaster
GO

insert into zMaster values (0)
insert into zMaster values (0)
insert into zMaster values (0)
insert into zMaster values (0)
insert into zMaster values (0)
insert into zMaster values (0)
insert into zMaster values (0)
insert into zMaster values (0)
insert into zMaster values (0)
go
--
select * from zMaster
go

    --

    --

    Diwakar

    --

• Understood, but I want to be sure nothing is missed here. When tested, even with transaction control, keep in mind your INSERTs are running sequentially, not simultaneously.

    Try testing this way.

DECLARE @x int
SET @x = 0
WAITFOR TIME 'hh:mm'
WHILE @x < 10000
BEGIN
BEGIN TRAN -- set any transaction controls after here.
insert into zMaster values (0)
COMMIT TRAN -- do any releases and cleanup before here.
SET @x = @x + 1
END

Put this in several different windows and make sure the WAITFOR TIME is the same for all and provides enough time to get them all started before it fires.

I ran five of these; only the first ran to completion, and the rest were chosen as deadlock victims. You have got to deal with these, so you have to set the transaction isolation level and control the flow. If you are going to have an issue, you want a timeout to occur as opposed to a deadlock: a timeout means more transactions are queued, whereas the deadlock in testing was instant. What is your suggestion for this?

Another issue with a trigger is that you are now increasing the number of transactions recorded to the transaction log, and these updates will further slow processing down. It may work fine in low-transaction environments, but as transactions increase, performance is hurt exponentially. The solution is to wrap and handle this in a proc, limiting overall transactions, which also has the effect of limiting transactions in the log.

  • Agreed that timeouts are better than deadlocks.

My approach would be to set the transaction isolation level to read committed and use a reasonable lock timeout value. Depending on the kind of application, I would adjust the timeout to between 10 and 40 seconds (debatable).

Also agreed that triggers have a lot of overhead, but I would still prefer a trigger, if only for maintenance purposes. An update from within a trigger will be as fast as an update from a procedure if you have a proper index defined on the column. In this example, since RegistrationKey is a primary key column, having a clustered index would speed up the update.

As for the transaction log, whether you use a procedure or a trigger, you are not limiting the transaction entries in the log. The log overhead depends on checkpoints, recovery models, log size, etc.
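As a sketch of the settings I have in mind (the 20-second value is just a midpoint in the range above; adjust to taste):

```sql
-- Prefer a lock timeout over a deadlock: a session blocked longer than the
-- timeout fails with error 1222 (lock request time out period exceeded)
-- instead of being chosen as a deadlock victim.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET LOCK_TIMEOUT 20000   -- milliseconds; -1 waits forever, 0 fails at once

BEGIN TRAN
    INSERT INTO zMaster VALUES (0)   -- the trigger assigns the next key
COMMIT TRAN
```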

    --

    Diwakar

    --

  • quote:


As for transaction log, whether you use a procedure or a trigger, you are not limiting the transaction entry in the log. The log overhead depends on checkpoints, recovery models, log size, etc.


Actually, that is incorrect. You have an entry for the INSERT and one for the UPDATE. Wrapping all the logic in a proc means no update to the data; only the INSERT is recorded. So you actually have two transactions for every one you perform with the trigger, as opposed to the logic wrapped in a proc. SELECTs are not recorded, but you have to keep in mind that the increased number of transactions (the insert and the update) means the log file will grow faster as well.

  • Antares686,

    > Wrapping all the logic in Proc means no

    > update to the data and only the INSERT is

    > recorded.

    Are you sure about that?

    I ran SqlProfiler with a stored proc having just an update statement, and it recorded a write to a transaction log.

    I also ran an update from QueryAnalyser, and saw a write to the log.

With the trigger in place, an insert writes both an insert and an update to the log.

    --

    Diwakar

    --

• Oops, sorry, we are on two different pages here.

    With the SP you do this

DECLARE @RegNumber int
--
SELECT @RegNumber = ISNULL( MAX( RegistrationKey ), 0 ) + 1
FROM zMaster
INSERT zMaster VALUES (@RegNumber)

but the issue is to make sure that two processes cannot access and insert into zMaster at the same time, especially preventing the SELECT MAX value from being selected by both and causing a duplicate value. This way you don't need a control table (I thought this was where this conversation went), and you don't even need to perform the update.

    The SP handles the logic with only one INSERT not an INSERT and an UPDATE.

    Oh the confusion that long threads can create.

    Oh, and make sure you remove your trigger before testing this.
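One way to make sure the SELECT MAX and the INSERT cannot interleave is to lock explicitly. This is only a sketch; the proc name and the UPDLOCK/HOLDLOCK hints are illustrative, not tested code from this thread:

```sql
CREATE PROCEDURE usp_InsertZMaster
AS
BEGIN
    DECLARE @RegNumber int

    BEGIN TRAN
        -- UPDLOCK + HOLDLOCK holds the range lock until COMMIT, so a
        -- second caller blocks here instead of reading the same MAX.
        SELECT @RegNumber = ISNULL( MAX( RegistrationKey ), 0 ) + 1
        FROM zMaster WITH (UPDLOCK, HOLDLOCK)

        INSERT zMaster VALUES (@RegNumber)
    COMMIT TRAN
END
GO
```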

    Edited by - antares686 on 02/07/2003 07:18:58 AM

  • Antares686,

    I have no doubt the stored procedure method works fine, and will resolve the seq number generation. I was however arguing for the usage of a trigger to generate the next number as against using a stored procedure/control table.

My reasoning is: keep an object's data and methods within the object itself. Since we are generating a sequence number for the zMaster table, let the table handle it. Having stored procedures/control tables means extra maintenance and versioning overhead.

    As for the log entry, does it really matter whether one write or two writes are done from within a transaction? So long as you configure your RAID subsystems and keep your transactions small, log operation overheads can be made minimal.

    Oh, we could argue for days on this issue I suppose!-)

    --

    Diwakar

    --

• You are right, we could argue this till the end of time. My point, however, was that the suggestion (although it does work) has some major considerations the user needs to know about.

    For example his number of transactions may be impacted more so with the trigger solution as opposed to the Proc solution.

    Additional TL space needs may need to be explored and consideration of current drive setup may impact that as well.

I am not debunking your concept (it is great) but wanted to point out the potential issues that could exist.

But the fact that it does work does not mean it is the best solution, and the factors that affect it need to be presented to the user for consideration.

In fact, as another option, you could do both to cover the unexpected insert from outside sources, say an admin writing directly to the table. In the trigger you just have a process to check the app name; if it matches the app name you submit through the connection string, you don't run the trigger. Otherwise you do.

Then we have to go into the fact that someone could potentially create an app using the application name parameter with the expected name to avoid the trigger and shove in an invalid data value.

Then we get to the idea that the trigger could check the inserted value: if it matches the next potential value, don't do the update; otherwise do it.

This would cover doing it in the SP (which would be better from a performance standpoint) and the case where someone keys in the proper next value (no real reason to update a row with itself). Keep in mind the trigger sees the new incoming data as part of the table, so if they happen to key in, say, 25 when the max value is 10, the replacement value works out to 26, because the trigger sees the inserted 25 as part of the data. That is another issue to make sure is properly addressed in all the code.
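A rough sketch of that check, assuming single-row inserts (the exclusion of the inserted row and the skip condition are my illustration of the idea, not tested code from this thread):

```sql
CREATE TRIGGER TI_zMaster ON zMaster FOR INSERT AS
BEGIN
    DECLARE @RegNumber int

    -- Compute the next number excluding the just-inserted row, so a
    -- keyed-in 25 against a max of 10 is not bumped to 26 by its own row.
    SELECT @RegNumber = ISNULL( MAX( z.RegistrationKey ), 0 ) + 1
    FROM zMaster z
    WHERE NOT EXISTS ( SELECT 1 FROM INSERTED i
                       WHERE i.RegistrationKey = z.RegistrationKey )

    -- Skip the UPDATE when the supplied value is already the correct one.
    IF NOT EXISTS ( SELECT 1 FROM INSERTED WHERE RegistrationKey = @RegNumber )
        UPDATE zMaster
        SET RegistrationKey = @RegNumber
        FROM INSERTED
        WHERE zMaster.RegistrationKey = INSERTED.RegistrationKey
END
GO
```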

Suppose someone forgets they need to leave the value as 0 to make sure it stays sequential.

Your method has merits but has issues that must be addressed, as does the SP method.

    Just pointing that out before the user stumbles on them afterwards instead of beforehand.
