Temp tables versus virtual tables

  • I am having a dispute with a colleague over what would be faster: using a temporary table (#table) or a virtual table (@table). I cannot find a decent search query to use on Google; would anybody here be able to help me out?

    Greetz,
    Hans Brouwer

  • The short answer is: it depends.

    • How many rows are going to be in the table?
    • How much manipulation is going to take place?

    • @TABLE will write to DISK once it is big enough
    • #TABLE always writes to DISK
    • @TABLE can have a PK
    • #TABLE can have a PK and indexes... (see the sketch below)

    When in doubt: TEST, TEST, TEST
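
    For what it's worth, here's a minimal T-SQL sketch of that DDL difference (table, column, and index names are made up): both can declare a PRIMARY KEY, but only the temp table can take additional indexes created after the fact, since CREATE INDEX doesn't work against a table variable.

        -- Table variable: PK can be declared inline
        DECLARE @Orders TABLE
        (
            OrderID  INT PRIMARY KEY,
            Customer INT,
            Amount   MONEY
        );

        -- Temp table: PK plus additional indexes
        CREATE TABLE #Orders
        (
            OrderID  INT PRIMARY KEY,
            Customer INT,
            Amount   MONEY
        );
        CREATE INDEX IX_Orders_Customer ON #Orders (Customer); -- not possible on @Orders

        DROP TABLE #Orders;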



    Good Hunting!

    AJ Ahrens


    webmaster@kritter.net

  • Everything the last post said and...

    Statistics aren't generated for table variables so their use in queries can, depending on their size, cause performance problems.
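
    A quick way to see this for yourself (row counts and names here are just illustrative): load the same rows into a temp table and a table variable, then compare the estimated row counts in the actual execution plans.

        DECLARE @t TABLE (id INT PRIMARY KEY);
        CREATE TABLE #t (id INT PRIMARY KEY);

        -- Load ~10,000 rows into both
        INSERT INTO #t (id)
        SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM sys.all_objects a CROSS JOIN sys.all_objects b;

        INSERT INTO @t (id) SELECT id FROM #t;

        -- With "Include Actual Execution Plan" turned on:
        SELECT COUNT(*) FROM #t WHERE id < 5000; -- estimate comes from statistics (~5,000 rows)
        SELECT COUNT(*) FROM @t WHERE id < 5000; -- no statistics; older versions estimate 1 row

        DROP TABLE #t;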

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • If your table is really big, it is advisable to use a temp table (#temp) rather than a table variable (@table).

    Regards,
    Sudheer

    My Blog

  • Tnx for the info, all.

    I am under the impression that when really big tables are involved it is better to create a physical table than a virtual table (# or @).

    Greetz,
    Hans Brouwer

  • I think it depends on your system resources when considering large tables. If you check CPU and memory usage and you have a lot of memory free (assuming you calculate how big your virtual table will be), a virtual table will be faster since there's no disk I/O. However, the minute the process has to start paging out memory, performance goes downhill fast. I don't believe there can be an axiom for a generalized topic like this one; there are too many considerations. Sorry to stay on the fence. 🙂

    Regards,

    JSherburn

     

  • All true.

    However, the problems we ran into weren't related to I/O or memory. Rather, when using the table variables (@ tables) in joins with each other and with regular tables, the lack of statistics in the table variables made for really poor query plans that seriously impacted performance, despite the fact that the processor and memory on the machine were more than adequate to the task. So the problems didn't only show up when the table variables became "large"; we were hitting issues at 100+ rows joined against tables with 100+ rows, because the joins were optimized for one row since there are no statistics.

    There are some really good reasons to avoid table variables and use temp tables instead; however, it does go back to how they're used. If you were simply using the table variable as some sort of aggregating mechanism, a repository of data rather than another table for joins, then table variables are useful. So the answer still goes back to the DBA fallback position: it depends.
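
    In case it's useful, here's a rough sketch of the kind of join that bit us (dbo.Orders and its columns are just placeholders for any reasonably sized permanent table): a few hundred rows in a table variable joined back to a real table. With no statistics, the optimizer sizes the table variable at one row and can pick a plan that falls apart at the real row count; swapping in a #temp table (or adding OPTION (RECOMPILE) to the query) gives it a real row count to work with.

        -- Hypothetical names throughout
        DECLARE @RecentOrders TABLE (OrderID INT PRIMARY KEY);

        INSERT INTO @RecentOrders (OrderID)
        SELECT OrderID
        FROM dbo.Orders
        WHERE OrderDate >= DATEADD(DAY, -7, GETDATE());

        -- The join plan is built assuming @RecentOrders holds 1 row, however many it really has
        SELECT o.OrderID, o.CustomerID, o.TotalDue
        FROM @RecentOrders AS r
        JOIN dbo.Orders AS o ON o.OrderID = r.OrderID;

        -- The same logic with a temp table gets statistics and accurate row counts:
        -- CREATE TABLE #RecentOrders (OrderID INT PRIMARY KEY);
        -- INSERT INTO #RecentOrders ... ; then join as above.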

    ----------------------------------------------------
    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software
