Better locking model

  • The current locking model in SQL Server 2000 is very complex, and all sorts of special workarounds have to be used by DBAs and developers to avoid blocking and deadlocking situations. In any enterprise-level system, generally, all reads have to be done with NOLOCK. Updates need to be done with ROWLOCK, but this hint is not always honored (see the sketch below). I have encountered situations where two processes doing inserts to the same table block each other due to lock escalation. Inserts should never block each other.

    The whole complicated system was developed to conserve resources on the server: it's less expensive to maintain a page lock or table lock than many row locks. However, in today's systems, where CPU, memory and disk are cheap, the complication of the locking mechanism is not worth the cost. In Oracle, you almost never have lock problems, since it always locks at the row level. SQL Server should either be upgraded to do row-level locking all the time by default, or offer two versions of the server: one for lower-powered machines where resources are a big issue, and another for enterprise-level systems where the overhead of dealing with complicated locking is much more expensive than the cost of powerful hardware. The lower-powered version could retain the current locking scheme, and the enterprise version should keep track of row locks and take the burden off of DBAs and developers.
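
    A minimal sketch of the hints described above, against a hypothetical Orders table (names invented for illustration; whether ROWLOCK is honored is entirely up to the engine):

        -- Read without taking shared locks (a dirty read):
        SELECT OrderId, Status
        FROM dbo.Orders WITH (NOLOCK)
        WHERE CustomerId = 42

        -- Ask for row-level locks on the update. SQL Server treats this
        -- as a hint only and may still escalate to a page or table lock:
        UPDATE dbo.Orders WITH (ROWLOCK)
        SET Status = 'SHIPPED'
        WHERE OrderId = 1001

    If escalation itself is the problem, trace flag 1211 is documented to disable lock escalation server-wide on SQL Server 2000, though at the cost of much higher lock memory.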

  • Hi there

    haha.. sounds like you wanted to whinge to someone but you don't have another DBA there to bash heads with

    Yep, SQL Server is nothing like Oracle and its read consistency model. This may change in Yukon (maybe!?) but I doubt it. This brings up many interesting problems and forces developers to be very careful about how they build applications. A classic example where things go wrong quickly is when code is directly ported from Oracle to SQL Server: if you're not fully aware of the locking model of SQL Server, you are really opening yourself up to deadlocks and blocking.

    I have to say, though, with the app you're talking about, I am not big on hints and never have been. More time should be spent on proper systems design; let the SS2K engine sort out the lower-level locking. Forcing the optimiser can be a really bad thing on so many levels that it's not worth the trouble.

    As an example, I manage a 2,000-user app with bucket loads of COM+ transactions etc., and we have one deadlock max per month and very rare blocking. No hints, no special tricks, just good development practices, proper testing, keeping transaction ops short, and keeping batches to off-peak/overnight.
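
    To make "transaction ops short" concrete, a rough sketch (table and column names invented): do the reads and computation first, and hold the transaction open only for the write itself:

        -- Gather what you need before opening the transaction...
        DECLARE @Amount money
        SELECT @Amount = SUM(LineTotal)
        FROM dbo.OrderLines
        WHERE OrderId = 1001

        -- ...so the transaction holds locks for milliseconds, not seconds.
        BEGIN TRAN
            UPDATE dbo.Orders SET Total = @Amount WHERE OrderId = 1001
        COMMIT TRAN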

    Remember, its nothing really to do with cpu, disk or memory, its just want the ss2k team chose. Perhaps in some ways to beat oracle in the performance results?? the work to manage read consistency via rollback segs would require some fantastic piece of coding 🙂

    Cheers

    Ck

    Chris Kempster
    www.chriskempster.com
    Author of "SQL Server Backup, Recovery & Troubleshooting"
    Author of "SQL Server 2k for the Oracle DBA"

  • CKempste, will you supply a bit more info on your environment, standards, etc.?

  • Hi Len

    Me? Or the other guy?

    I manage a large web-based app, plus some smaller apps.

    2,000 users, approx. 300 online

    1x 4-way web server, including COM+ business components

    1x 4-way DB server, SS2K EE, plus lots of DTS jobs

    1x 2-way reporting server

    2x domain controllers

    Disk array connected to the DB server

    The app runs through a number of "defined" interfaces, such as a data-access tier COM component and another for caching.

    Cheers

    Ck

    Chris Kempster
    www.chriskempster.com
    Author of "SQL Server Backup, Recovery & Troubleshooting"
    Author of "SQL Server 2k for the Oracle DBA"

  • Thanks for replying.

    Just read the thread again.

    You have said it already: "No hints, no special tricks, just good development practices, proper testing, keeping transaction ops short, and keeping batches to off-peak/overnight" and "the app runs through a number of 'defined' interfaces".

    Sounds like a nice environment to work in.

  • Hi Len

    If you want any more details, feel free to email me offline; more than happy to share lessons learnt etc.

    Cheers

    Ck

    Chris Kempster
    www.chriskempster.com
    Author of "SQL Server Backup, Recovery & Troubleshooting"
    Author of "SQL Server 2k for the Oracle DBA"

  • I have to say, though, with the app you're talking about, I am not big on hints and never have been.

    dovidf>

    I worked extensively in an Oracle environment prior to the current SQL Server environment, and you couldn't survive without query hints. Any time a table was analyzed, the plan changed and performance went to zero under Oracle.
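
    For instance, a hypothetical Oracle query (table and index names invented) pinned to an access path so a fresh ANALYZE cannot change the plan:

        -- Oracle optimizer hint forcing a specific index, so the plan
        -- survives re-gathering of the table's statistics:
        SELECT /*+ INDEX(c cards_vendor_idx) */ card_no, vendor_id
        FROM   calling_cards c
        WHERE  vendor_id = :vid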

    *********************************************

    More time should be spent on proper systems design; let the SS2K engine sort out the lower-level locking. Forcing the optimiser can be a really bad thing on so many levels that it's not worth the trouble.

    dovidf>

    This is a very broad and sweeping statement. We are very careful to use small and short transactions. I've seen in many threads that you basically can't survive without doing dirty reads most of the time. You need to be more specific about what you mean by proper systems design. If you mean not using foreign key relationships, only running batch jobs single-threaded, and studiously avoiding any concurrent update of a table, that is the kind of overkill which I think would seriously detract from the usefulness of SQL Server.
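
    (For reference, "dirty reads most of the time" in practice means either per-query NOLOCK hints or their session-level equivalent; a sketch, with the table name invented:)

        -- Session-wide dirty reads: every SELECT in this session now
        -- behaves as if it carried a NOLOCK hint.
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
        SELECT OrderId, Status FROM dbo.Orders WHERE CustomerId = 42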

    ***********************************************************

    As an example, I manage a 2,000-user app with bucket loads of COM+ transactions etc., and we have one deadlock max per month and very rare blocking. No hints, no special tricks, just good development practices, proper testing, keeping transaction ops short, and keeping batches to off-peak/overnight.

    dovidf>

    This doesn't tell me very much. You may be doing some very small, unsophisticated lookups, so that contention doesn't play a role. How would you handle generating 1,000,000 calling card records in a short period of time for multiple calling card vendors when it can't be pushed off to the evening? This is the situation that I dealt with in my Oracle environment. You don't always have the luxury of avoiding update contention.
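
    (To make the scenario concrete: even a load committed in batches, like the following sketch with invented names, holds thousands of locks per transaction and can still hit escalation when several vendors' loads run concurrently. SET ROWCOUNT limits INSERT..SELECT on SQL Server 2000.)

        -- Copy the generated cards 10,000 rows at a time, so each
        -- transaction holds fewer locks and commits quickly.
        SET ROWCOUNT 10000
        WHILE 1 = 1
        BEGIN
            BEGIN TRAN
            INSERT INTO dbo.CallingCards (CardNo, VendorId)
            SELECT s.CardNo, s.VendorId
            FROM dbo.StagedCards s
            WHERE NOT EXISTS (SELECT 1 FROM dbo.CallingCards c
                              WHERE c.CardNo = s.CardNo)
            IF @@ROWCOUNT = 0
            BEGIN
                COMMIT TRAN
                BREAK
            END
            COMMIT TRAN
        END
        SET ROWCOUNT 0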

    *********************************************************

    Remember, it's nothing really to do with CPU, disk or memory; it's just what the SS2K team chose. Perhaps in some ways to beat Oracle in the performance results?? The work to manage read consistency via rollback segs would require some fantastic piece of coding 🙂

    dovidf>

    I don't agree at all. The highly complex locking mechanism is described in the literature as an attempt to balance limited machine power for throughput against safety. Additionally, only the performance of the task holding the lock is enhanced, as every other task is blocked. The coding effort of maintaining the current locking scheme has got to be much worse than managing read consistency via rollback segments.

    In summary, I don't see where we are violating any good design practices, and I need a better specification of the kind of practices that you mean.

    ***************************************
