Temporary Tables

  • Hello, I was hoping someone could point me to an article or website that has some in-depth information about Temporary Tables. I have only been working with SQL for about three months, and my boss wants me to start using them. Thanks in advance. πŸ™‚

    [font="Comic Sans MS"]Being "normal" is not necessarily a virtue; it rather denotes a lack of courage.
    -Practical Magic[/font]

  • Hi,

    in all honesty, I'm puzzled as to why your boss is demanding you use Temp Tables. Personally, I'd consider them a last resort, second only to the dreaded cursor. Perhaps it's a case of "everyone else has used them, so you should".

    Personally, I'd first use derived tables instead, and then, if the optimiser goes a bit mad (it does sometimes), either a UDF or a table variable. (Hint: you can set up a clustered index on a table variable by declaring a primary key; you can't use CREATE INDEX, though.) Temp tables suck, IMHO, and should be avoided if at all possible.
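
    To illustrate the hint above, here's a minimal sketch (the @Orders variable and its columns are made up for illustration). You can't run CREATE INDEX against a table variable, but declaring a PRIMARY KEY at declaration time gives it a clustered index:

    ```sql
    -- A table variable can't be a target of CREATE INDEX,
    -- but the PRIMARY KEY declared here creates a clustered index.
    DECLARE @Orders TABLE (
        OrderID   int NOT NULL PRIMARY KEY CLUSTERED,  -- clustered index via the PK
        OrderDate datetime,
        Amount    money
    );
    ```

    Check the query plan after joining on @Orders.OrderID to confirm the index is being used.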

    I'm a DBA.
    I'm not paid to solve problems. I'm paid to prevent them.

  • Temporary tables can actually be quite valuable. I don't know of a specific single source of information about using them. Your best bet is to start with Books Online and do a couple of searches on this site and sites like it.

    Despite the other post on this thread, I would recommend temp tables over table variables in most situations, and over table-value UDFs.

    The reason I make that recommendation is because temp tables, like permanent tables, can maintain statistics, and can have indexes. This means that the SQL engine can optimize queries against them, and can have a much better idea of what type of join, etc., to use. Thus, they often perform better than table variables or UDFs.

    Very small data sets (under 200 rows, I think) don't use stats or indexes, so table variables and UDFs are valid in those cases. If you can guarantee that you won't have enough rows to warrant stats or indexes, then use whichever is more comfortable.

    The main advantage to table variables is that they aren't directly subject to rollback commands, which means you can insert error-related data into them, do a rollback of the transaction, and have the error data available for logging, etc. Outside of that, temp tables are generally better for performance.
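
    A quick sketch of that rollback behaviour (the @ErrorLog variable and message are hypothetical):

    ```sql
    -- Table variables are not subject to ROLLBACK, so rows inserted
    -- into them survive the transaction being rolled back.
    DECLARE @ErrorLog TABLE (ErrMsg nvarchar(2048));

    BEGIN TRANSACTION;
        -- ... work that fails ...
        INSERT INTO @ErrorLog (ErrMsg) VALUES (N'Something went wrong');
    ROLLBACK TRANSACTION;

    -- @ErrorLog still holds the row; the same insert into a #temp
    -- table would have been undone by the rollback.
    SELECT ErrMsg FROM @ErrorLog;
    ```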

    Another use I've had for table variables is pulling data from remote servers without the two-way distributed transaction process. If avoiding distributed transactions is acceptable for a certain process, table variables can speed it up in those cases. (Used incorrectly, this can result in corrupted data, transactional violations, and other problems. Used correctly, it can speed up remote data pulls.)

    There used to be issues with temp tables causing lots of locking in tempdb, but those have been solved for nearly a decade now.

    As far as use, temp tables are mainly for processing things one step at a time, without ending up doing it one row at a time. For example, if you need to collect data from multiple tables, perform processes on it, and then dump it into another set of tables, a temp table can speed that up and make it more understandable (and thus more debuggable), by allowing you to insert into the temp table, then perform updates on the temp table, then do inserts from the temp table.
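
    The insert-update-insert pattern described above can be sketched like this (table and column names are invented for illustration):

    ```sql
    -- Step 1: collect data from multiple tables into a temp table
    SELECT c.CustomerID, c.Region, o.OrderTotal
    INTO #Staging
    FROM dbo.Customers c
    JOIN dbo.Orders o ON o.CustomerID = c.CustomerID;

    -- Step 2: perform set-based updates against the temp table
    UPDATE #Staging
    SET OrderTotal = OrderTotal * 1.1
    WHERE Region = 'West';

    -- Step 3: push the finished rows into the destination table
    INSERT INTO dbo.OrderSummary (CustomerID, OrderTotal)
    SELECT CustomerID, OrderTotal FROM #Staging;

    DROP TABLE #Staging;
    ```

    Each step is a separate set-based statement that can be run and checked in isolation, which is what makes this style easier to debug.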

    A good example of this is in Jeff Moden's article on running totals, http://qa.sqlservercentral.com/articles/Advanced+Querying/61716/. He uses a temp table in there for that kind of processing. (He calls it "divide and conquer" when he uses temp tables that way.)

    Don't over-use temp tables, but they do have some uses that are quite handy and will improve performance in many situations.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • I'm not saying never use them - hey, over the last decade I've even used cursors 3 or 4 times :w00t:. But in general I find derived tables easier to debug, as you can just select the relevant section, run it in isolation, and see the results. Also, you're pulling the data for your temp table/table variable from a query anyway, so ........

    Now obviously this doesn't always work best; sometimes the optimiser will, for rather complex queries, ignore your carefully constructed query structure and just roll the bugger up into one huge lump - which can give some mad query plans. I've had 5 or 6 derived tables joined, all running in under a second, turn into an agglomerated monster taking a shocking minute or so, including a bookmark lookup suddenly appearing as if from nowhere. In that case, obviously, you have to take an alternative approach to force it down your route.

    Thing is, though - and I've had some pretty strong responses telling people this - you *can* index a table variable :cool:. You can't use CREATE INDEX - that's a stone-cold fact - but you can create a clustered index by setting a primary key. Just try it out with a quick test and check the query plan. Obviously, when you do this, you'd be wise to ensure the query you use to populate it pulls the data out in the order of the clustered index.

    Certainly, also, if your query is joining on a large dataset in several of the derived tables, you'd usually be better off doing the query once, storing the data and joining on that.

    But - and I suppose it's a question of personal style - I generally prefer the derived table route.

    On balance, though - if it's fast enough, readable and maintainable enough, it's OK. If I'm honest, my reflex aversion to temp tables is possibly a result of their having been used to generate sp's of such occluded inanity in some of the stuff I've worked with that, when I see them, my first reaction is sometimes to want to run from the room screaming. (And before you curse me to the depths of going back to working with Informix - I've seen your posts, I know how good you are; please don't think I'm suggesting this would be the case with your code.)

    I'm a DBA.
    I'm not paid to solve problems. I'm paid to prevent them.

  • andrew gothard (8/11/2008)


    . But in general I find derived tables easier to debug, as you can just select the relevant section and run it in isolation and see the results. Also, you're pulling the data for your temp table/table variable from a query anyway, so ........

    Depends on the query. Sometimes it's faster to do it all in one; sometimes it's faster to break pieces of the query out into temp tables, index those, then join onwards. If I'm writing complex queries, I make a point of testing both ways.

    Thing is, though - and I've had some pretty strong responses telling people this - you *can* index a table variable :cool:. You can't use CREATE INDEX - that's a stone-cold fact - but you can create a clustered index by setting a primary key.

    Sure you can. A single index, either clustered or nonclustered, comprising the primary key. It still doesn't have statistics, though. Check the exec plan and look at the estimated rows.

    DECLARE @test TABLE (
        id int IDENTITY PRIMARY KEY CLUSTERED,
        test varchar(50)
    );

    INSERT INTO @test (test)
    SELECT name FROM sys.columns;

    SELECT * FROM @test WHERE id > 50;

    I'm using sys.columns because it should have more than 50 rows. On my test DB, I get estimated rows 1, actual 460. That cardinality inaccuracy is what really causes problems with table variables.

    The optimiser uses the cardinality to calculate the cost of various operations. If the estimate is wrong, its costing is wrong, and you can get very, very bad execution plans.

    Generally I suggest no more than 100 rows in a table variable that is going to be part of a query later (if you're just going to select * from without any joins, it isn't too bad). If you need more than that, use a temp table. The disadvantages are usually minimal.

    If I'm honest, my reflex aversion to temp tables is possibly a result of their having been used to generate sp's of such occluded inanity in some of the stuff I've worked with that, when I see them, my first reaction is sometimes to want to run from the room screaming.

    :hehe: See GSquared's sig line.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Most of what I do uses derived tables as well. Actually, I use CTEs for those these days, but it's the same concept.

    My point is merely that temp tables do have good uses, and it's not quite like cursors. Cursors are a "last resort" or a "because of factors outside of the database, row-by-row is the only way possible" (like calling external DLLs or sp_send_dbmail). I've used a lot more temp tables in the last month than I've used cursors. I've used a lot more CTEs than both of those put together. And a lot more just plain "update table", "select from table", etc., than all of those put together.

    On the point of indexing a table variable by having a primary key in it, yeah, you can. That's pretty limited compared to what you can do with a temp table. Constraints, non-clustered indexes on non PK columns, DDL commands (adding columns, etc.), all okay on temp tables, and not on table variables.

    On temp tables being used insanely: Yeah! I recently ran into a query that used twenty-one temp tables and six cursors. I changed the whole thing into a single CTE and one update command based on that. Could have been done with an SQL 2000 derived table and an update from that. Went from running for 3 hours every night to running for less than a second every night. Ouch!

    There are reasons I have the sig that I do. πŸ™‚

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • I'm using sys.columns because it should have more than 50 rows. On my test DB, I get estimated rows 1, actual 460. That cardinality inaccuracy is what really causes problems with table variables.

    Aahhhh - good spot, thanks

    I'm a DBA.
    I'm not paid to solve problems. I'm paid to prevent them.

  • Getting back to the original question, fascinating as the discussion is...

    purplebirky (8/11/2008)


    Hello, I was hoping someone could point me to an article or website that has some in-depth information about Temporary Tables. I have only been working with SQL for about three months, and my boss wants me to start using them. Thanks in advance. πŸ™‚

    What kind of info are you looking for on temp tables?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • On temp tables being used insanely: Yeah! I recently ran into a query that used twenty-one temp tables and six cursors. I changed the whole thing into a single CTE and one update command based on that. Could have been done with an SQL 2000 derived table and an update from that. Went from running for 3 hours every night to running for less than a second every night. Ouch!

    There are reasons I have the sig that I do. πŸ™‚

    Can work to your advantage, though - the first thing I was asked to look at in my current post was a web-facing query that ran for over 20 minutes. New boss was quite impressed with the rewrite coming in at a second - just a case of getting rid of the same type of garbage.

    I'm a DBA.
    I'm not paid to solve problems. I'm paid to prevent them.

  • Thanks for the info. It's been helpful.

    [font="Comic Sans MS"]Being "normal" is not necessarily a virtue; it rather denotes a lack of courage.
    -Practical Magic[/font]

  • I agree, partially... Temp tables shouldn't be used just because they can be. But remember that derived tables and CTEs create "working" tables which appear in TempDB just like a Temp Table, and they don't persist for more than a single query. Temp tables are valuable if you want to isolate and work with a greatly reduced set of data throughout a given sproc. Used properly, they can bring a 24-hour job down to 15 minutes (personal experience). πŸ™‚

    The biggest problem I've seen with Temp Tables is where people do a CREATE or SELECT/INTO in the middle of a query. If you look at some of the performance tuning hints in BOL, they tell you never mix DDL in DML... do all the DDL at the front of a proc before any DML occurs. If you mix DDL in DML, I can almost guarantee a recompile or two...;)
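
    A minimal sketch of that "all DDL up front" advice (the procedure and table names are hypothetical):

    ```sql
    CREATE PROCEDURE dbo.ProcessBatch AS
    BEGIN
        -- DDL first: create the temp table before any DML runs,
        -- rather than doing CREATE or SELECT...INTO mid-procedure
        CREATE TABLE #Work (ID int NOT NULL PRIMARY KEY, Amount money);

        -- DML only after all the DDL is done
        INSERT INTO #Work (ID, Amount)
        SELECT ID, Amount FROM dbo.SourceTable;

        UPDATE #Work SET Amount = 0 WHERE Amount < 0;
    END
    ```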

    Also, if anyone thinks that Table Variables are "memory only", you might want to take a look at Q3/A3 and Q4/A4 in the following link...

    http://support.microsoft.com/default.aspx?scid=kb;en-us;305977&Product=sql2k

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

    Hello,

    Temporary tables are the least used, as they consume memory resources.

    Instead of temporary tables you can use a table variable, as a table variable is just like any other variable we declare in SQL.

    Therefore I recommend going for a table variable instead of a temporary table.

    The choice between a temporary table and a table variable depends on the number of records in the transaction. If there are more than 1000 records, go for a temporary table.

    Here is the syntax for both:

    create table #abc (col1 bigint, col2 varchar(60))

    insert into #abc
    select col1, col2 from xyz

    drop table #abc

    It is very necessary to drop temporary tables; otherwise they continue using memory resources, and you can find your #abc temporary table in the tempdb database.

    If you want to use a table variable:

    declare @abc as table
    (
    col1 bigint,
    col2 varchar(60)
    )

    insert into @abc
    select col1, col2 from xyz

    I hope this will help you πŸ™‚

  • kiranbgiet (8/11/2008)


    Hello,

    Temporary tables are the least used, as they consume memory resources.

    Instead of temporary tables you can use a table variable, as a table variable is just like any other variable we declare in SQL.

    That's a very common myth. Unfortunately, it's not true. Look at the link that Jeff posted above, or at my brief explanation:

    http://sqlinthewild.co.za/index.php/2007/12/19/temp-tables-and-table-variables/

    Both temp tables and table variables are created within tempDB and assigned entries in the tempDB system tables. Both are memory-resident unless memory pressure forces SQL to write them to disk (tempDB).

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Jeff Moden (8/11/2008)


    ...The biggest problem I've seen with Temp Tables is where people do a CREATE or SELECT/INTO in the middle of a query. If you look at some of the performance tuning hints in BOL, they tell you never mix DDL in DML... do all the DDL at the front of a proc before any DML occurs. If you mix DDL in DML, I can almost guarantee a recompile or two...;) ....

    Actually, some of that is changed in SQL 2005. Check out: http://www.microsoft.com/technet/prodtechnol/sql/2005/recomp.mspx#EDOAC

    There's a section, about 3/4 of the way through the article, on Recompilations due to mixing DDL and DML.

    Even if you put all your DDL at the beginning, you get recompiles, in 2000 and 2005, per this article. 2005 just reduces the scope of the recompiles from proc level to statement level.

    The whole article is a bit of a heavy read, but worth it.

    A closely related article: http://www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx

    One of the things it says is:

    In SQL Server 2000, when SQL Server recompiles a stored procedure, the entire stored procedure is recompiled, not just the statement that triggered the recompile. SQL Server 2005 introduces statement-level recompilation of stored procedures. When SQL Server 2005 recompiles stored procedures, only the statement that caused the recompilation is compiledβ€”not the entire procedure. This uses less CPU bandwidth and results in less contention on lock resources such as COMPILE locks.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Thanks again. Mainly I was looking for info because I have (so far) had one time that I needed to use a temp table. My boss wants me to learn about them: how and when to use them properly. He's not demanding that I use them, but if I should need to, he wants me to do it right.

    :hehe:

    [font="Comic Sans MS"]Being "normal" is not necessarily a virtue; it rather denotes a lack of courage.
    -Practical Magic[/font]
