Table variable vs temp table

  • I got it wrong, so I tested it on my own system. (2008 R2 - 2005 not available)

    Results for table variable, in ms: 80800, 80203, & 79523

    Results for temp table, in ms: 463, 450, 470

    So, now I'm a bit confused. Just when am I supposed to use table variables??

    [font="Verdana"]Please don't go. The drones need you. They look up to you.[/font]
    Connect to me on LinkedIn

  • Oops, where are my manners. Thank you sir, may I please have another? (Question, that is.)


  • The Inserts both execute at about the same speed. It is the "Select where not in" statement that is considerably slower with the table variable. So I think the point of the question was not how fast the insert runs but how fast the select runs.

    Tony
    ------------------------------------
    Are you suggesting coconuts migrate?

You guys really need to learn English. There was rarely, if ever, a better-written question.

    It clearly stated to execute 3 batches separately with begin / end points in the code.

    I just would like to add to the short explanation that the recompile speeds up the query because the row estimation is better, which gives a different and better plan to execute.

    Table variables are estimated at 1 row, which is far off in this case and causes the "bad" plan.
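    To make the 1-row-estimate point concrete, here is a hypothetical reconstruction of the kind of test described in the thread (the question's actual code is not reproduced here, so table and column names are assumptions):

    ```sql
    -- Batch 1: table variable -- no statistics, estimated at 1 row.
    DECLARE @t TABLE (id INT);
    INSERT INTO @t (id)
    SELECT TOP (20000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

    SELECT COUNT(*) FROM @t
    WHERE id NOT IN (SELECT id FROM @t WHERE id % 2 = 0);
    GO

    -- Batch 2: temp table -- has statistics, so the data change
    -- triggers a recompile based on the actual 20,000 rows.
    CREATE TABLE #t (id INT);
    INSERT INTO #t (id)
    SELECT TOP (20000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

    SELECT COUNT(*) FROM #t
    WHERE id NOT IN (SELECT id FROM #t WHERE id % 2 = 0);
    DROP TABLE #t;
    GO
    ```

    Comparing the actual execution plans of the two SELECTs should show very different estimated row counts on the scan of @t versus #t.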

  • yogesh.balasubramanian (10/3/2011)


    I see a relevant article, and it seems like the table variable is much faster than the temp table.

    Ref: http://sqlserverperformance.idera.com/uncategorized/performance-comparison-temp-tables-table-variables-sql-server/

    :unsure: Confused on which one is better?

    One item that the question hoped to illustrate is that, like most things in SQL Server (T-SQL), "IT DEPENDS".

    For example, additional testing using SQL Server 2008:

    Processing Time (in milliseconds)

    #Rows     Table Variable    Temp Table
    20,000            68,773           386
     2,000               633            70
       200                33            23
        20                16            20

    Add in further modifications to the temp table, such as an index, and the results change again. So in the real world, do not depend upon assumptions, or on what others say is the best method. Test YOUR solution, and test it again and again, to ensure it is the better solution before placing it in a production DB.

    If everything seems to be going well, you have obviously overlooked something.

    Ron

    Please help us, help you - before posting a question please read
    Before posting a performance problem please read

  • Interesting question and answer, I learned something today. Thanks.

  • The answer is correct but the explanation ("The reason is that the table variable causes a recompilation") is 100% wrong.

    The reason the table variable code is so slow is precisely because it does not produce a recompile -- as mentioned in the very blog article linked to in the answer.

    So the temp table step recompiles and runs based on an estimated (and actual) 20,000 rows.

    The table variable step does not recompile and runs based on an estimated 1 instead of an actual 20,000 rows -- producing an inefficient query plan, leading to a slow query.
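    A hedged sketch of one common workaround for the 1-row estimate (names are illustrative, not from the original question): a statement-level OPTION (RECOMPILE) lets the optimizer see the table variable's actual cardinality at execution time.

    ```sql
    DECLARE @t TABLE (id INT PRIMARY KEY);
    INSERT INTO @t (id) VALUES (1), (2), (3);  -- imagine 20,000 rows here

    SELECT t.id
    FROM @t AS t
    WHERE t.id NOT IN (SELECT 2)
    OPTION (RECOMPILE);  -- recompiles this statement; the estimate now
                         -- reflects @t's actual row count, not the default 1
    ```

    The trade-off is a recompile on every execution, which is exactly the CPU cost mentioned later in this thread.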

  • Thomas Abraham (10/3/2011)


    I got it wrong, so I tested it on my own system. (2008 R2 - 2005 not available)

    Results for table variable, in ms: 80800, 80203, & 79523

    Results for temp table, in ms: 463, 450, 470

    So, now I'm a bit confused. Just when am I supposed to use table variables??

    In my opinion, very, very little.

    You need to use them in UDFs, and you need to use them as table-valued parameters to a stored proc. Aside from that, I never use them; temp tables give too many benefits that table variables don't. While I rarely see enough of a performance benefit from putting nonclustered indexes on a temp table, sometimes the cost of the index is less than the time the query saves because of it. Temp tables have statistics as well. The biggest reason people seem to use table variables has to do with the myth that they don't go to disk.



    --Mark Tassin
    MCITP - SQL Server DBA
    Proud member of the Anti-RBAR alliance.
    For help with Performance click this link
    For tips on how to post your problems
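  The two required cases Mark names, sketched with hypothetical names (the type, proc, and tables are assumptions for illustration): a table-valued parameter, which must be a table type, versus a temp table, which can take a nonclustered index after creation.

  ```sql
  -- A TVP requires a user-defined table type and must be passed READONLY.
  CREATE TYPE dbo.IdList AS TABLE (id INT PRIMARY KEY);
  GO
  CREATE PROCEDURE dbo.ProcessIds
      @ids dbo.IdList READONLY
  AS
      SELECT id FROM @ids;
  GO
  DECLARE @ids dbo.IdList;
  INSERT INTO @ids (id) VALUES (1), (2);
  EXEC dbo.ProcessIds @ids = @ids;
  GO

  -- By contrast, a temp table can be indexed after the fact;
  -- a table variable cannot take a nonclustered index this way (pre-2014).
  CREATE TABLE #work (id INT);
  CREATE NONCLUSTERED INDEX ix_work_id ON #work (id);
  DROP TABLE #work;
  ```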

  • sknox (10/3/2011)


    The answer is correct but the explanation ("The reason is that the table variable causes a recompilation") is 100% wrong.

    The reason the table variable code is so slow is precisely because it does not produce a recompile -- as mentioned in the very blog article linked to in the answer.

    So the temp table step recompiles and runs based on an estimated (and actual) 20,000 rows.

    The table variable step does not recompile and runs based on an estimated 1 instead of an actual 20,000 rows -- producing an inefficient query plan, leading to a slow query.

    That makes more sense to me, especially in light of the results. As the table size gets smaller, i.e. closer to the estimated size of 1, it begins to run as quickly as the optimizer "expected".

    While I would love to thoroughly test the performance of all possible versions of every proc I write, I'm not given that much time. So, while there are times where I can do so, in most cases, it's useful to have rules of thumb that I can rely on, and only have to test alternatives when the performance from using these rules is different than expected.

    For example, from the results I saw in the testing suggested, I might conclude that I can "safely" use a table variable for small result sets, or for data sets that will not be used in large joins.


  • Thomas Abraham (10/3/2011)


    sknox (10/3/2011)


    The answer is correct but the explanation ("The reason is that the table variable causes a recompilation") is 100% wrong.

    The reason the table variable code is so slow is precisely because it does not produce a recompile -- as mentioned in the very blog article linked to in the answer.

    So the temp table step recompiles and runs based on an estimated (and actual) 20,000 rows.

    The table variable step does not recompile and runs based on an estimated 1 instead of an actual 20,000 rows -- producing an inefficient query plan, leading to a slow query.

    That makes more sense to me, especially in light of the results. As the table size gets smaller, i.e. closer to the estimated size of 1, it begins to run as quickly as the optimizer "expected".

    While I would love to thoroughly test the performance of all possible versions of every proc I write, I'm not given that much time. So, while there are times where I can do so, in most cases, it's useful to have rules of thumb that I can rely on, and only have to test alternatives when the performance from using these rules is different than expected.

    For example, from the results I saw in the testing suggested, I might conclude that I can "safely" use a table variable for small result sets, or for data sets that will not be used in large joins.

    The key is to test. If I'm not going to index beyond adding a PK, I start with a table variable and convert if I have performance issues. You can also look at what your bottleneck is; if it is CPU, perhaps you don't want the recompile.

    Honestly I probably use table variables too much and this question shows that I should reconsider my practices. When I ran the code the temp table was about 100+ times faster (2008R2).

  • Thanks to all, including Ron, Mark & Jack, for their responses.

    I had just switched over to using more table variables, and will be more reluctant to use them going forward.


  • Nice question. I heard this was supposed to change in 2008.

    Not surprised it did not.

    Because of this, we always remove table variables from any code we get.

    Table variables have a use and need in some functions, but in a stored procedure they have little to no use.

  • SanDroid (10/3/2011)


    Nice question. I heard this was supposed to change in 2008.

    Not surprised it did not.

    Because of this, we always remove table variables from any code we get.

    Table variables have a use and need in some functions, but in a stored procedure they have little to no use.

    I'll agree to disagree. I think it really depends on what they are being used for in the stored procedure. If you are trying to capture information using the OUTPUT clause of an INSERT/UPDATE/DELETE/MERGE operation regardless of whether the operation commits or rolls back, then you do need a table variable.

    There may also be other reasons, but it really comes down to testing, testing, and more testing.
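    Lynn's OUTPUT point can be sketched as follows (dbo.Demo is a hypothetical table for illustration): because a table variable does not participate in the transaction, rows captured via OUTPUT survive a rollback.

    ```sql
    CREATE TABLE dbo.Demo (id INT);
    DECLARE @deleted TABLE (id INT);

    INSERT INTO dbo.Demo (id) VALUES (1), (2);

    BEGIN TRANSACTION;
    DELETE FROM dbo.Demo
    OUTPUT deleted.id INTO @deleted;   -- capture the deleted rows
    ROLLBACK TRANSACTION;              -- the DELETE is undone...

    SELECT id FROM @deleted;           -- ...but @deleted still holds both rows
    SELECT COUNT(*) FROM dbo.Demo;     -- the two rows are back in the table
    ```

    A temp table used in place of @deleted would be rolled back along with the DELETE, losing the captured rows.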

  • Lynn Pettis (10/3/2011)


    . . . There may also be other reasons, but it really comes down to testing, testing, and more testing.

    Absolutely correct, Lynn.

  • Also, thanks for an interesting question: performance has been my professional hobby for years.

Viewing 15 posts - 16 through 30 (of 51 total)
