There Must Be 15 Ways To Lose Your Cursors... part 1, Introduction

  • Andy DBA (4/15/2009)


    I apologize if this is somewhat off topic and there's probably tons of articles on it already, but here is a word to the wise on performance testing. I noticed GSquared and others using GetDate() and DateDiff to capture execution times. I call this "wall-clock benchmarking". If you are the only user on your server and/or are taking averages of multiple tests, your comparisons may be pretty good, but any other processes running on your server (including the OS!) can throw your results way off :w00t:.

    Yes, this is properly called "quiescence testing" because of the requirement that the environment be quiet during the test. And although it is possible in some test environments to separate the necessary statistics from the ambient and competing activity with sufficient accuracy, Windows in general and SQLOS in particular are not such an environment. Consequently, quiescence testing is a virtual necessity for us.

    Someone with good expertise on the guts of SQL Server please feel free to jump in here, but I highly recommend querying master.dbo.sysprocesses with the @@SPID system function to get before-and-after resource usage values and then taking the average of multiple iterations (see code for one iteration below). Also, don't forget about execution plan caching: depending on what you're testing, you may want to throw out your first set of results.

    Here's the SQL I suggest using to capture CPU usage and I/O. I think BOL explains exactly what these values mean, but for A/B comparisons on the same machine, the raw values are usually good enough.
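    (Andy's code block did not survive the quoting here; what follows is a hedged reconstruction of the before-and-after pattern he describes, not his original SQL.)

    -- Hedged reconstruction (not Andy's original code): snapshot this session's
    -- accumulated totals before running the code under test...
    SELECT cpu, physical_io
      FROM master.dbo.sysprocesses
     WHERE spid = @@SPID

    -- ... run the code being benchmarked ...

    -- ... then snapshot again; the deltas are this iteration's resource usage.
    SELECT cpu, physical_io
      FROM master.dbo.sysprocesses
     WHERE spid = @@SPID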

    Actually I cover this in the next installment. However, let me just point out that sysprocesses is not accurate for this (primarily because of the architectural changes from SQL 2000 to SQL 2005) and also does not have all of the statistics that we really want (CPU and logical reads). SET STATISTICS is what we usually use, but it is an ineffective tool for Cursors and WHILE loops. (Hint: sys.dm_exec_requests and sys.dm_exec_sessions are where it's at.)
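    As a rough sketch of what that DMV-based capture can look like (an illustration only, not the code from the next installment), using sys.dm_exec_sessions, which accumulates per-session totals:

    -- Illustrative sketch only: capture per-session CPU and logical reads
    -- before and after the code under test (SQL Server 2005 and later).
    DECLARE @cpu_before   INT,
            @reads_before BIGINT

    SELECT @cpu_before   = cpu_time,
           @reads_before = logical_reads
      FROM sys.dm_exec_sessions
     WHERE session_id = @@SPID

    -- ... run the code being measured ...

    SELECT cpu_time      - @cpu_before   AS cpu_ms_used,
           logical_reads - @reads_before AS logical_reads_used
      FROM sys.dm_exec_sessions
     WHERE session_id = @@SPID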

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • RBarryYoung (4/15/2009)


    Thomas (4/14/2009)


    Agreed. Here's another example. You want to build a numbers table that simply contains a series of sequential integers. There are many uses for such a table in solving problems such as finding gaps in a series of numbers. How do you populate it without using a cursor/loop? I'm sure we can piss away hours finding a solution; however, in minutes one could write the following and be done:

    Declare @i int
    Set @i = 0
    While @i < 10000
    Begin
        Insert Numbers(Value) Values(@i)
        Set @i = @i + 1
    End

    I'm all ears if there is a set-based solution (especially a SQL Standard solution) to populating said table. Assume you are starting with an empty database.

    Easy, and I don't bother with anything less than a million rows:

    Select TOP 1000000
           ROW_NUMBER() over(order by c1.object_id) as N
      into Numbers
      From master.sys.columns c1, master.sys.columns c2
    ...

    Oh, and as it turns out I didn't waste ANY hours coming up with this one. In fact it took me less than a minute. That's because I, along with most of the others who use it, didn't come up with it on my own; I just copied it from Jeff Moden in all of about 30 seconds. (Thanks, Jeff!)

    Professional expertise is not actually about coming up with ingenious solutions to problems; rather, it is about learning the best solutions known in the professional community and then applying them appropriately. Done well, this may make you look ingenious, but it is really just leveraging the genius of thousands of other professionals (which is much more reliable). And in fact that is what this thread and my articles in this series are all about. Indeed, it is fair to say that that is what this site and its one million members are really all about.

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

    As always with such a hot topic, the great battle over the "intuitiveness" of cursors comes forward. The classic example on this thread seems to be based on the following 2 snippets of code... (I've eliminated the cursor... just used a While Loop to drive the point home)...

    [font="Courier New"]--===== While Loop example of building million row table

    CREATE TABLE dbo.Tally1

            (N AS INTEGER NOT NULL)

    DECLARE @Counter INT

        SET @Counter = 1

      WHILE @Counter <= 1000000

      BEGIN

             INSERT INTO dbo.Tally1 (N)

             SELECT @Counter = @Counter + 1

        END

    GO

    --===== Set based example of building million row table

     SELECT TOP 1000000

            IDENTITY(INT,1,1) AS N

       INTO dbo.Tally2

       FROM Master.dbo.SysColumns sc1

      CROSS JOIN Master.dbo.SysColumns sc2

    GO

    [/font]

    Yeah, I know... simple example... but very typical of the differences between RBAR and set-based code.

    First, the RBAR has 9 logical lines of code... the set-based code is actually only 5 lines. Many arguments in favor of Cursors and While Loops suggest that such code is shorter than the much more "difficult" set-based solutions. Patently, that's not true, even on code more complicated than this simple example.

    Second, the subject of "intuitiveness" always seems to surface. Supposedly, if you remove the comments I've added to the code, the purpose of the While Loop is much easier to figure out than that of the "black art" set-based code. Since most people tend to think in a procedural fashion rather than a declarative fashion, I'd have to say the While Loop is definitely easier to figure out... for those who don't actually practice T-SQL. And that brings me to the next point...

    Every programming or scripting language has its little tricks. Rather than buying a relatively expensive, highly calibrated and tested delay line for electronic systems, we used to do it in code. Since the number of bytes was also at a premium, we'd do it this way...

    0100 GOSUB 0103
    0103 GOSUB 0106
    0106 GOSUB 0109
    0109 RETURN

    Each GOSUB provided 6 microseconds of delay and each RETURN provided 4 microseconds. If we entered the code at 0109, the GOSUB to get us there and that RETURN would provide exactly 10 microseconds of delay. If we entered the code at address 0106, we'd get exactly 20 microseconds: 1 GOSUB to enter, the GOSUB 0109 at address 0106, and, because that GOSUB pushed the address of the RETURN itself onto the stack, the RETURN executing twice. If you want to go through it all, entering with a GOSUB to line 0103 would provide 40 microseconds of delay and entering at 0100 would provide 80 microseconds.

    It seems very strange, but that little slice of computation heaven was a lot less expensive in the number of bytes than making a loop that did the same thing.

    The key here is, that a person who knew the language (6502 uP Assembly, in this case) wrote it and once you see it and understand it, you can just look at it and know exactly what it does. That person didn't get that smart about the language through casual study. That particular person was praised back in the day when 1024 bytes of memory was considered to be a luxurious expense.

    The same holds true now. To a person well studied in T-SQL, the Cross-Join method for making a numbers table (Tally table, in this case) is absolutely intuitive, and much more so than the While Loop method. It's a matter of training and, perhaps, self-study and experimentation.

    Finally, the subject of doing things like adding a column to identically named tables across multiple databases always seems to come up. I absolutely agree that a Cursor or While Loop will NOT cause a performance problem there and would NOT attempt to drag a DBA through a knot-hole on an outhouse door if (s)he wrote such code. But, consider this: if you actually learn how to do such a thing without a Cursor or While Loop, do you think you'll have any problems, whatsoever, with writing such complicated things as a dynamic cross-tab in a very high performance manner?
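    For instance, here is one hedged sketch of such a change without a Cursor or While Loop (dbo.SomeTable and NewColumn are hypothetical names, and it assumes every non-system database actually contains that table):

    -- Hypothetical sketch: generate one ALTER TABLE per database as a single
    -- string, then execute the whole batch... no cursor, no WHILE loop.
    DECLARE @sql NVARCHAR(MAX)

    SELECT @sql = (SELECT 'USE ' + QUOTENAME(name)
                        + '; ALTER TABLE dbo.SomeTable ADD NewColumn INT NULL;' + CHAR(10)
                     FROM sys.databases
                    WHERE database_id > 4 -- skip the system databases
                      FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')

    EXEC (@sql)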

    If you want "intuitive" code in T-SQL, you have two choices... spend the time learn the high performance set-based methods so that they actually do become intuitive to you, or continue to be a casual user that writes performance challenged RBAR. Think about what your next boss would rather have before your next technical interview or your next review for a promotion or a raise.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Thomas (4/15/2009)


    RBarryYoung (4/15/2009)

    That said, it should be noted that there are also other ways to accomplish the same thing that do not use cursors.

    Agreed, and almost all of those solutions involve an outside application (including SSIS) to chunk the data.

    Two points: first, SSIS is NOT an outside application. It may be outside of the data engine, but it is not outside of SQL Server 2005, which is the solution environment targeted by my article.

    And secondly, it is in any event irrelevant: the best solution freely and clearly available is what should be used. If externals were off limits, then I shouldn't have to be answering questions about Email, now should I? 🙂

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • Thomas (4/15/2009)


    Yep. I agree. However, recognize that we are again having to find alternative solutions because the base set-based solution is insufficient.

    Some clarification may be in order here. I never said that 100% pure set-based SQL was the solution for everything. Yes, set-based is the ideal and purely procedural Cursor and WHILE loop SQL is its antithesis.

    I understand that the ideal is not yet achievable. What I am arguing for is that we can and should at least eliminate the antithesis.

    Just because pure set-based SQL may not be sufficient for a particular task does not mean that Cursors or loops in SQL are justified. There are still a lot of miles between the two, and a lot of other options that are not even on the same scale.

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

    Oh, and as it turns out I didn't waste ANY hours coming up with this one. In fact it took me less than a minute. That's because I, along with most of the others who use it, didn't come up with it on my own; I just copied it from Jeff Moden in all of about 30 seconds.

    Saying that you spent no time coming up with the answer because you memorized it is akin to claiming a riddle is easy because you heard it and already know the answer. I suspect that, had you never seen the solution, most developers would have written out the while loop and finished executing it in the time it took you to search for yours. Maybe you are the exception.

    Certainly, using solutions that others have sweated over is a laudable approach. Sites such as this one help with that. However, you should recognize that professionals *also* know when the time spent searching for the answer outweighs the benefit of using the tools and techniques you have available. Had you never seen anything like the solutions provided, the while loop is something almost any developer could have built without a search.

    I'm all for hearing about new tricks, or DBMS features, that help avoid the use of cursors or improve the quality of my code. However, I also know to take such solutions with a grain of salt, and to understand that cursors, like the new solutions that will be presented, are tools for solving problems, even if frowned upon.

  • Jeff,

    Look at the features you need to know to write the while loop vs. the Cross Join. In the case of the while loop, you need to know about while loops, declarations, SET assignments and the insert statement (I'm leaving out the create table statement at the moment). In the case of the cross join solution, you need to know about the top command, the identity feature, select into, the master database, the sys.columns table, the Cross Join, and the trick of putting them all together, which you must divine from the way the cross join works.

    It is not just about the length of the code. It is about how many common elements are used in the code. The phrase "I got in my car, drove down the block, turned left at the light and traveled to the store." is longer than "Entering my conveyance, I motored along the optimal path to the establishment of commerce." The second sentence is shorter but not as simple to read.

    Using a cross join is not itself counter-intuitive. Using it in this fashion to generate a bunch of numbers probably is to most developers.

  • Thomas (4/15/2009)


    Oh, and as it turns out I didn't waste ANY hours coming up with this one. In fact it took me less than a minute. That's because I, along with most of the others who use it, didn't come up with it on my own; I just copied it from Jeff Moden in all of about 30 seconds.

    Saying that you spent no time coming up with the answer because you memorized it is akin to claiming a riddle is easy because you heard it and already know the answer.

    Yes, that is exactly what I am saying. The best way to beat the Sphinx is to already know (or have immediate access to) every riddle and its answer. That's because professional expertise isn't a game; it's a deliberate practice, and the key element of that practice is continuing education and learning. And for the record, I did not have to look up Jeff's answer either, because I had already learned it as part of my own continuing education by following this site and reading some of his articles.

    Trying to beat the Sphinx by out-thinking him(?) is risky and unreliable. Trying to beat him by researching the answer after you hear the question is time-consuming and inefficient (though easier today than ever before in history). Knowing the answer ahead of time kicks *ss and wins every time.

    [font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
    Proactive Performance Solutions, Inc.
    [/font]
    [font="Verdana"] "Performance is our middle name."[/font]

  • Thomas (4/15/2009)


    Jeff,

    Look at the features you need to know to write the while loop vs. the Cross Join. In the case of the while loop, you need to know about while loops, declarations, SET assignments and the insert statement (I'm leaving out the create table statement at the moment). In the case of the cross join solution, you need to know about the top command, the identity feature, select into, the master database, the sys.columns table, the Cross Join, and the trick of putting them all together, which you must divine from the way the cross join works.

    It is not just about the length of the code. It is about how many common elements are used in the code. The phrase "I got in my car, drove down the block, turned left at the light and traveled to the store." is longer than "Entering my conveyance, I motored along the optimal path to the establishment of commerce." The second sentence is shorter but not as simple to read.

    Using a cross join is not itself counter-intuitive. Using it in this fashion to generate a bunch of numbers probably is to most developers.

    Oh... I agree... you have to know "the trade" very well to do stuff like this. That's the point I was ultimately trying to make. Either you know "the trade" or you don't. If you don't know the trade, you're doomed to writing slow code. And, you're absolutely correct... it's not about the length of the code. It's about the performance of the code. It's knowledge such as this that will always separate highly paid developers from the others.

    The ultimate "cheat" on making such "black art" code much more intuitive to neophyte developers is simply to take the 5 seconds necessary to write a one line comment like I did in my example. That helps even cursor code because you don't actually need to read the code to understand its intent.

    Heh... as a "Catch 22"... If some of the folks that take the time to justify cursor usage actually spent that time learning the high performance set-based methods instead, they wouldn't have to justify performance challenged cursor usage.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Bruce W Cassidy (4/15/2009)


    Having said all of that, I'm curious as to how MERGE (mentioned earlier by Joe Celko) could be used in this scenario. I believe that as it is currently implemented in SQL Server 2008, it wouldn't help (as the MERGE only has the one destination, and doesn't have a "for anything that causes an error" clause).

    I am curious as well. Unfortunately my exposure to SQL Server 2008 is still very limited, mostly because I am a little reluctant to follow every new "hype". My own experience has shown me that the benefits of new technology are often outweighed by the costs of replacing existing technology and training staff, including myself of course (at least nowadays, when the pressure to put new features onto the market seems to be extremely high).
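    For reference, the single-target shape Bruce describes looks something like this minimal sketch (dbo.Source and dbo.Target are hypothetical tables):

    -- Minimal MERGE sketch (SQL Server 2008) with hypothetical tables: note the
    -- single target and the absence of any clause that routes failing rows aside.
    MERGE dbo.Target AS t
    USING dbo.Source AS s
       ON t.ID = s.ID
     WHEN MATCHED THEN
          UPDATE SET t.Value = s.Value
     WHEN NOT MATCHED BY TARGET THEN
          INSERT (ID, Value) VALUES (s.ID, s.Value);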

    Best Regards,

    Chris Büttner

  • RBarryYoung (4/15/2009)


    Christian Buettner (4/15/2009)


    Sorry I still don't get it. The ERRORFILE argument routes the errors to a text file, not to a table. Unless I misunderstand the documentation.

    Sorry, Christian, I missed that you wanted an ErrorRowsToTable solution instead of ErrorRowsToFile.

    No problem. I was not really giving a clear explanation either so thanks for the clarification.

    Best Regards,

    Chris Büttner

  • Michelle Gutzait (4/15/2009)


    ... And I was wondering... 🙂

    What are your thoughts about Adam Machanic's article:

    http://sqlblog.com/blogs/adam_machanic/archive/2006/07/12/running-sums-redux.aspx ?

    I tried to compare Adam's code to other options and came up with this one that was the closest in performance, but still a bit heavier (using Profiler):

    declare @ToID int, @ID int
    DECLARE @TransactionID INT
    DECLARE @ActualCost MONEY, @RunningTotal MONEY

    SELECT ID = IDENTITY(int,1,1), convert(INT, TransactionID) as TransactionID, ActualCost
    into #tmp
    FROM Production.TransactionHistory
    ORDER BY TransactionID

    set @ToID = @@ROWCOUNT

    create unique index idx_tmp on #tmp(ID)

    DECLARE @Results TABLE
    (
        TransactionID INT NOT NULL PRIMARY KEY,
        ActualCost MONEY,
        RunningTotal MONEY
    )

    select @ID = 1, @RunningTotal = 0

    WHILE @ID <= @ToID
    begin
        SELECT @TransactionID = TransactionID,
               @ActualCost    = ActualCost,
               @RunningTotal  = @RunningTotal + ActualCost
        from #tmp
        WHERE ID = @ID

        INSERT @Results
        VALUES (@TransactionID, @ActualCost, @RunningTotal)

        SET @ID = @ID + 1
    end

    SELECT *
    FROM @Results
    ORDER BY TransactionID

    drop table #tmp

    It probably still can be tuned but probably not by much.

    Thanks,

    Michelle.

    Actually, you're correct. Because it is RBAR, it pretty much can't be tuned. Lynn Pettis was correct, though... there is a "black art" anti-RBAR technique for solving the extremely procedural running total problem. Your good code also provides me with a more complex example to re-emphasize some points I've been droning on about in this thread.

    Also, the supposed "set-based" method that Adam speaks of isn't actually set-based. It's RBAR on steroids and can be millions and billions of times worse than any cursor. The following article explains why...

    http://qa.sqlservercentral.com/articles/T-SQL/61539/
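    For illustration, the pattern that article warns about looks like this sketch (against a hypothetical table dbo.Trans(ID, ActualCost), not Adam's actual code):

    -- Sketch of the "set-based looking" running total: the correlated subquery
    -- re-sums up to N rows for each of the N outer rows... a hidden triangular
    -- join, roughly N*N/2 row touches in total.
    SELECT t.ID,
           t.ActualCost,
           (SELECT SUM(t2.ActualCost)
              FROM dbo.Trans t2
             WHERE t2.ID <= t.ID) AS RunningTotal
      FROM dbo.Trans t
     ORDER BY t.ID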

    First, let's set up a million row test table that looks like your #tmp table... and, yes, we're gonna use several other "black art anti-RBAR" methods to do even that. AND it uses the very same method used for generating the table of numbers so apparent throughout this thread...

    [font="Courier New"]--===== Create and populate a 1,000,000 row test table to simulate your original load.

         -- Column "ID" has a range of 1 to 1,000,000 unique numbers.

         -- Column "TransactionID" has a range of 1 to 5,000,000 possibly non-unique numbers.

         -- Column "ActualCost" has a range of 100.0000 to 199.9999 non-unique numbers.

         -- Column "RunningTotal" will contain the running total in ID order.

     SELECT ID = IDENTITY(INT,1,1),

            d.TransactionID,

            d.ActualCost,

            RunningTotal = CAST(NULL AS MONEY)

       INTO #tmp

       FROM (

             SELECT TOP 1000000

                    TransactionID = ABS(CHECKSUM(NEWID()))%5000000+1,

                    ActualCost    = CAST(ABS(CHECKSUM(NEWID()))%10000 /100.0 + 100.0 AS MONEY)

               FROM Master.dbo.SysColumns t1

              CROSS JOIN Master.dbo.SysColumns t2 --Lack of join criteria makes this a CROSS-JOIN

            )d

      ORDER BY d.TransactionID

    --===== Add the necessary index in the form of a clustered, unnamed PK

      ALTER TABLE #tmp

            ADD PRIMARY KEY CLUSTERED (ID)

    [/font]

    Now, let's run your good code. I had to make just a couple of alterations because of the prefilled test table I made and the fact that I don't have your transaction history table....

    [font="Courier New"]SET NOCOUNT ON --Absolutely necessary for performance where RBAR is concerned

    declare @ToID int,@ID int

    DECLARE @TransactionID INT

    DECLARE @ActualCost MONEY, @RunningTotal MONEY

    --===== Replaced this with the million row test table

    --SELECT ID=IDENTITY(int,1,1),convert(INT,TransactionID) as TransactionID, ActualCost

    --into #tmp

    --FROM Production.TransactionHistory

    --ORDER BY TransactionID

    set @ToID = 1000000 --@@ROWCOUNT --Hardcoded this just for testing

    --create unique index idx_tmp on #tmp(ID)

    DECLARE @Results TABLE

    (

    TransactionID INT NOT NULL, -- PRIMARY KEY, Removed this just for testing

    ActualCost MONEY,

    RunningTotal MONEY

    )

    select @ID = 1, @RunningTotal = 0

    WHILE @ID <= @ToId

    begin

    SELECT @TransactionID=TransactionID,

    @ActualCost = ActualCost,

    @RunningTotal=@RunningTotal+ActualCost

    from #tmp

    WHERE ID=@ID

    INSERT @Results

    VALUES (@TransactionID, @ActualCost, @RunningTotal)

    SET @ID = @ID +1

    end

    --===== We don't really want to print a million rows, do we? 😉

    SELECT TOP 1000 * FROM @Results ORDER BY TransactionID

    --SELECT *

    --FROM @Results

    --ORDER BY TransactionID

    --drop table #tmp[/font]

    Your good code produced the following stats in Profiler:

    Duration: 105,548 (105.548 seconds)
    CPU: 86,609 (86.609 seconds)
    Reads: 4,034,702
    Writes: 3,761
    RowCount: 3,001,002

    Now, the "black art" method using the "quirky update". Be warned... this method has got most of the other MVP's hoppin' including Adam. There's all sorts of claims that it uses undocumented features, can't be trusted, that it could break at the next service pack, etc, etc, ad nauseum. But it's been faithful since the only T-SQL product was known as SyBase many years ago and, despite their claims, none of them has been able to produce an example where it doesn't work correctly. The exception to that is that it won't work on a partitioned table or partitioned view as Lynn Petis wrote about in his article. The problem of parallelism has been taken care of by the MAXDOP option.

    Here's the code...

    [font="Courier New"]DECLARE @PrevActualCost MONEY,

            @RunningTotal   MONEY,

            @PrevID         INT

     SELECT @PrevActualCost = 0,

            @RunningTotal   = 0

     UPDATE t

        SET @RunningTotal = RunningTotal = @RunningTotal + ActualCost,

            @PrevID       = ID

       FROM #tmp t WITH (INDEX(0),TABLOCKX)

     OPTION (MAXDOP 1)

     SELECT TOP 1000 * FROM #tmp ORDER BY ID[/font]

    ... and, here're the stats...

    Duration: 14,131 (14.131 seconds)
    CPU: 13,328 (13.328 seconds)
    Reads: 3,193,408
    Writes: 2
    RowCount: 1,001,001

    Michelle, the following bit of a rant is NOT directed at you. Please take no offense.

    Yes, yes, YES! It uses all sorts of non-intuitive stuff. In fact, it's ALL "black art" stuff. For production code, I'd certainly add some comments as to what's going on so that it's not only documented, it becomes a teaching tool in itself. So far as other developers not finding the (missing here) documentation sufficient, a little time with Books Online and the internet should suffice.

    And, although I said it's all "black art" stuff, look how simple and intuitive it actually is. It very strongly resembles the While Loop code. The only thing that's not intuitive can almost be guessed by the clustered index hint... it uses the clustered index to control the row order of the UPDATE which is actually a thing known in the "black arts" as a "pseudo-cursor".

    The bottom line is, justifying performance challenged code, no matter what form it may take, by saying it's more intuitive is just a pile of rubbish. Folks need to get good at what they do. I'll not intentionally cripple a server or disk system just so someone can enjoy intuitive code instead of actually learning how to do their bloody job right.

    I agree... there are places for cursors... those would be the very rare places where they are actually better than set-based code, and that's a whole lot more rare than most people can even imagine.

    We now return you to our regularly scheduled program... 😉

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

    Mr. Young... very nice job. Immensely entertaining article and follow-up posts to boot. My hat is off to you, Sir... I know just what you're going through trying to keep up with all the posts on this wonderful thread. Thank you for taking the time to write such a great article. I can hardly wait for part two.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

    Oh yeah... almost forgot... the "black art" running total method requires the proper clustered index to work. It will not work correctly if you reference a non-clustered index, and the implicit ORDER BY enjoyed by the "quirky update" is totally unreliable for SELECTs. To guarantee the order of a SELECT, you MUST use an ORDER BY. SQL Server will figure out what's best behind the scenes; you'll see that a SELECT with an ORDER BY on the columns contained in the clustered index will frequently NOT cause a sort in the execution plan, but you MUST still include the ORDER BY to guarantee the order of the rows returned.
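    In other words, a quick sketch against the #tmp table built above:

    -- The ORDER BY matches the clustered PK on ID, so the plan will frequently
    -- show no extra sort... but only the ORDER BY itself guarantees row order.
    SELECT ID, TransactionID, ActualCost, RunningTotal
      FROM #tmp
     ORDER BY ID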

    For those who don't know what a "pseudo-cursor" is, here's one of the simplest answers I can give...

    SELECT columnlist
    FROM sometable

    Think about what that little slice of computational heaven does behind the scenes and you'll understand why the T-SQL Ninjas on this site call it a "pseudo-cursor". And, sorry if it's not intuitive, but it should be. 😛

    Damn... I hope I'm not stealing any of Barry's thunder for part II.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • RBarryYoung (4/15/2009)


    GermanDBA (4/15/2009)

    I know that if the series has the same quality that RBarryYoung produces here on a daily basis, it will be a wonderful primer for quite a few database developers that I know.

    Thanks, German (?); it's great to hear compliments like that. 🙂

    Credit where credit is due!! In our profession it seems that we get little/no praise when things are running well, but a sound kicking when things don't work.

    P.S.

    GermanDBA is just a forum name. Quite a bad choice really, as I'm an Englishman in Germany. I have often thought about changing my forum name, but don't know how, other than re-registering and thus losing my points (although the longer I wait, the worse this gets!).

    Regards,

    WilliamD
