tempdb curiosity

  • I maintain a SQL Server 2012 server with 4 tempdb files. The first one is on a different partition than the other 3. The initial file sizes for the 4 are 80 GB for the first file and 30 GB for the others. Don't ask me if this makes sense.

    I am curious how these initial file sizes are changed. Can SQL Server do this, or is it a manual process?
    Does it actually make sense to maintain the files on different partitions and with different sizes?  
    I thought they should be the same size on a dedicated partition.

    And lastly: I read that the file sizes are changed when the server is restarted. How exactly does this work?

    Blessings.

    When the snows fall and the white winds blow, the lone wolf dies but the pack survives.

    Once you've accepted your flaws, no one can use them against you.

  • It's a manual process to change the starting sizes for the files. SQL won't do it.

    TempDB data files should all have the same initial size and the same autogrow settings. They can be on different drives, though that's normally a high-end tactic for scaling out IO throughput (and moving TempDB to SSD is probably easier these days).

    When SQL restarts, the TempDB files are set back to their starting size.
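    For example, a minimal sketch of doing it manually (assuming the default logical names tempdev, temp2, temp3, temp4; check sys.master_files for the real ones on your instance). Note that SIZE can only be increased while the database is online; making a file smaller needs DBCC SHRINKFILE.

    USE master;
    GO
    -- Give every tempdb data file the same size and the same fixed autogrow.
    -- The new values are recorded in sys.master_files and become the sizes
    -- tempdb returns to at the next restart.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 30GB, FILEGROWTH = 1GB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, SIZE = 30GB, FILEGROWTH = 1GB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = temp3, SIZE = 30GB, FILEGROWTH = 1GB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = temp4, SIZE = 30GB, FILEGROWTH = 1GB);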

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Oops.  One more:
    Does it make sense to change the initial file sizes for the tempdb database in the GUI? Since SQL does not change this,
    it appears likely that a rogue in our group did it.

    Alternatively, I could restart the SQL server and set the initial sizes using the DBCC SHRINKFILE command.
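    Something like this, I assume (DBCC SHRINKFILE takes its target size in MB, and tempdev is just the default logical name of the first data file, which may differ here):

    USE tempdb;
    GO
    -- Shrink the oversized first data file down to 30 GB (target given in MB).
    DBCC SHRINKFILE (tempdev, 30720);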

    When the snows fall and the white winds blow, the lone wolf dies but the pack survives.

    Once you've accepted your flaws, no one can use them against you.

  • Set it in the GUI (or via ALTER DATABASE), then on the next restart TempDB will start with those values. Check the autogrow settings as well; they should be the same on all files.
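    For example, to verify (sys.master_files reports size and growth in 8 KB pages):

    -- Compare the startup size and autogrow settings of all tempdb files.
    SELECT name,
           type_desc,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' percent'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM sys.master_files
    WHERE database_id = DB_ID('tempdb');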

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Also, with a tempdb anywhere near that size, I would make sure that Instant File Initialisation is set on.  If not, you're going to be twiddling your thumbs for some time waiting for zeroes to be scribbled all over your drives.
    Instant file initialisation, how and why
    (Your instance can't come online until tempdb has its space allocated.)
    Also, make sure that autogrowth is not set to a percentage; that is a stunningly terrible idea.
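    If you want to check whether IFI is active, one rough test (a sketch: trace flags 3004 and 3605 log file-zeroing to the error log; xp_readerrorlog is undocumented but widely used; IFI_Test is just a throwaway name):

    -- Log file-zeroing operations to the SQL Server error log.
    DBCC TRACEON (3004, 3605, -1);
    GO
    CREATE DATABASE IFI_Test;  -- throwaway test database
    GO
    -- A "Zeroing ... .mdf" message means IFI is NOT enabled.
    -- (The log file is always zeroed, with or without IFI.)
    EXEC sys.xp_readerrorlog 0, 1, N'Zeroing';
    GO
    DROP DATABASE IFI_Test;
    DBCC TRACEOFF (3004, 3605, -1);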
    hth

    I'm a DBA.
    I'm not paid to solve problems. I'm paid to prevent them.

  • andrew gothard - Wednesday, April 19, 2017 2:32 AM

    Also, with a tempdb anywhere near that size, I would make sure that Instant File Initialisation is set on.  If not, you're going to be twiddling your thumbs for some time waiting for zeroes to be scribbled all over your drives.

    Yes, although TempDB is cleared on a restart, not dropped and recreated. If the files are already there, they don't get zeroed out.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster - Wednesday, April 19, 2017 3:16 AM

    andrew gothard - Wednesday, April 19, 2017 2:32 AM

    Also, with a tempdb anywhere near that size, I would make sure that Instant File Initialisation is set on.  If not, you're going to be twiddling your thumbs for some time waiting for zeroes to be scribbled all over your drives.

    Yes, although TempDB is cleared on a restart, not dropped and recreated. If the files are already there, they don't get zeroed out.

    Interesting. Now KB 307487 states it should. Its comment on recreating tempdb on startup ties in with exactly what I thought happened: "When SQL Server starts, tempdb is re-created by using a copy of the model database, and tempdb is reset to its last configured size". And I do recall, back in days of yore (SQL 2000 boxes), that an instance with a huge tempdb (like an ETL box) would take an age to come up. But if you put a table in model and bounce the instance, that table isn't there in tempdb afterwards. Happy soul that I am, I no longer have any 2000 or 2005 stuff about any more, and that article's for 2005, so has this changed in 2008+?
    Or have I always been talking rubbish about that ...

    I'm a DBA.
    I'm not paid to solve problems. I'm paid to prevent them.

  • I dug into this a while back while testing how to recover from a crashed instance.

    If the TempDB files are there, they're reset to the starting size of the TempDB files, the database is cleared, and settings are copied over from model (and tables created in model should end up in TempDB...). This is done by copying allocation pages and data pages from model.
    If the TempDB files aren't there, the model files are copied and resized (and that will require zeroing out).
    Note that these are internal details and subject to change without warning. The end result is the same as if the KB article were correct.

    This, afaik, hasn't changed since SQL 2005. I have no idea of the behaviour before that point.
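    If anyone wants to see it for themselves, a quick experiment (ModelTest is a throwaway name):

    -- Put a table in model, then restart the instance...
    USE model;
    GO
    CREATE TABLE dbo.ModelTest (id int);
    GO
    -- ...and after the restart, see whether tempdb inherited it:
    SELECT name FROM tempdb.sys.tables WHERE name = 'ModelTest';
    -- Clean up afterwards:
    -- DROP TABLE model.dbo.ModelTest;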

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster - Wednesday, April 19, 2017 4:51 AM

    I dug into this a while back while testing how to recover from a crashed instance.

    If the TempDB files are there, they're reset to the starting size of the TempDB files, the database is cleared, and settings are copied over from model (and tables created in model should end up in TempDB...). This is done by copying allocation pages and data pages from model.
    If the TempDB files aren't there, the model files are copied and resized (and that will require zeroing out).
    Note that these are internal details and subject to change without warning. The end result is the same as if the KB article were correct.

    This, afaik, hasn't changed since SQL 2005. I have no idea of the behaviour before that point.

    Thanks

    I'm a DBA.
    I'm not paid to solve problems. I'm paid to prevent them.

  • With all the back and forth, I would like to reiterate that you REALLY should have IFI enabled (unless you have some REALLY esoteric security requirement).

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • fizzleme - Tuesday, April 18, 2017 4:34 PM

    I maintain a SQL Server 2012 server with 4 tempdb files. The first one is on a different partition than the other 3. The initial file sizes for the 4 are 80 GB for the first file and 30 GB for the others. Don't ask me if this makes sense.

    While I certainly agree that TempDB should be sized for the anticipated usage, 170 GB preallocated seems pretty high and, if it actually is needed, indicates that there's something pretty bad going on with the code. I have a 2 TB production system and most of that is across 2 databases. I set up TempDB to have eight 2 GB files and a 2 GB ldf 4 years ago and it hasn't budged since then. The system has 32 CPUs and started out at 128 GB of RAM, which we upgraded to 256 GB less than a year ago.

    My recommendation would be for you to (at least temporarily) set the initial size to 2 GB on all the files with a growth of only 1 GB each to find out what your needs actually are. If the files do end up growing to 30 GB (for example), then it's very likely that you have some bad code causing such growth. You might also want to take the time to set something up that will capture what's making TempDB grow, so that you know which code needs fixing.
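    As a starting point, something like this sketch against the space-usage DMVs will show which sessions are chewing TempDB at any given moment:

    -- Top TempDB consumers by session (user + internal objects, in MB).
    SELECT s.session_id,
           s.login_name,
           SUM(u.user_objects_alloc_page_count
             + u.internal_objects_alloc_page_count) * 8 / 1024 AS alloc_mb
    FROM sys.dm_db_session_space_usage AS u
    JOIN sys.dm_exec_sessions AS s
      ON s.session_id = u.session_id
    WHERE u.database_id = DB_ID('tempdb')
    GROUP BY s.session_id, s.login_name
    ORDER BY alloc_mb DESC;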

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Jeff Moden - Wednesday, April 19, 2017 8:38 AM

    fizzleme - Tuesday, April 18, 2017 4:34 PM

    I maintain a SQL Server 2012 server with 4 tempdb files. The first one is on a different partition than the other 3. The initial file sizes for the 4 are 80 GB for the first file and 30 GB for the others. Don't ask me if this makes sense.

    While I certainly agree that TempDB should be sized for the anticipated usage, 170 GB preallocated seems pretty high and, if it actually is needed, indicates that there's something pretty bad going on with the code. I have a 2 TB production system and most of that is across 2 databases. I set up TempDB to have eight 2 GB files and a 2 GB ldf 4 years ago and it hasn't budged since then. The system has 32 CPUs and started out at 128 GB of RAM, which we upgraded to 256 GB less than a year ago.

    My recommendation would be for you to (at least temporarily) set the initial size to 2 GB on all the files with a growth of only 1 GB each to find out what your needs actually are. If the files do end up growing to 30 GB (for example), then it's very likely that you have some bad code causing such growth. You might also want to take the time to set something up that will capture what's making TempDB grow, so that you know which code needs fixing.

    Isn't your system mostly A) BLOBs and B) well-tuned OLTP in nature, though, Jeff?

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Jeff Moden - Wednesday, April 19, 2017 8:38 AM

    While I certainly agree that TempDB should be sized for the anticipated usage, 170 GB preallocated seems pretty high and, if it actually is needed, indicates that there's something pretty bad going on with the code.

    Not necessarily. It's going to depend on the workload. More analytic-type workloads generally chew more TempDB space.

    I have a 2 TB production system and most of that is across 2 databases. I set up TempDB to have eight 2 GB files and a 2 GB ldf 4 years ago and it hasn't budged since then. The system has 32 CPUs and started out at 128 GB of RAM, which we upgraded to 256 GB less than a year ago.

    Back on SQL 2000, I had a 1 TB database that needed (and used) a 300 GB TempDB because of the data loads we did every night.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • We have a 200 GB database set to the SIMPLE recovery model. In a few hours it created 50 GB of transaction log. Impressive.
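    (If you want to watch that happen, this reports log size and percent used for every database:)

    -- Transaction log size and percentage in use, per database.
    DBCC SQLPERF (LOGSPACE);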

    When the snows fall and the white winds blow, the lone wolf dies but the pack survives.

    Once you've accepted your flaws, no one can use them against you.

  • TheSQLGuru - Wednesday, April 19, 2017 8:59 AM

    Jeff Moden - Wednesday, April 19, 2017 8:38 AM

    fizzleme - Tuesday, April 18, 2017 4:34 PM

    I maintain a SQL Server 2012 server with 4 tempdb files. The first one is on a different partition than the other 3. The initial file sizes for the 4 are 80 GB for the first file and 30 GB for the others. Don't ask me if this makes sense.

    While I certainly agree that TempDB should be sized for the anticipated usage, 170 GB preallocated seems pretty high and, if it actually is needed, indicates that there's something pretty bad going on with the code. I have a 2 TB production system and most of that is across 2 databases. I set up TempDB to have eight 2 GB files and a 2 GB ldf 4 years ago and it hasn't budged since then. The system has 32 CPUs and started out at 128 GB of RAM, which we upgraded to 256 GB less than a year ago.

    My recommendation would be for you to (at least temporarily) set the initial size to 2 GB on all the files with a growth of only 1 GB each to find out what your needs actually are. If the files do end up growing to 30 GB (for example), then it's very likely that you have some bad code causing such growth. You might also want to take the time to set something up that will capture what's making TempDB grow, so that you know which code needs fixing.

    Isn't your system mostly A) BLOBs and B) well-tuned OLTP in nature, though, Jeff?

    Good memory, but that's the "Phone system" you speak of. That system has a single TempDB file of only 700 MB. What I described above is our main business system, and it does both OLTP and some heavy lifting in the area of batch ETL in several different aspects. It also does a whole lot of batch processing behind the scenes prior to final insertion into its final resting place in a different database on the same machine. The "ETL/Batch" DB is just a bit over 1.12 TB (the largest table in the ETL and other batch processes is nearly a quarter of that) and the OLTP database that receives from it is 625 GB, where the largest non-audit table is 146 GB.

    You are also correct on the "well-tuned OLTP" nature of things, and that's a large part of the reason why TempDB hasn't grown. When I first got here, that wasn't the case. We had several bad queries (some batch, some reporting) that would not only lock 16 CPUs at a time but also generate more than 50 GB of TempDB each. Without exception, they all turned out to be many-to-many queries with DISTINCT to overcome the duplication in the output. We either fixed the incorrect criteria or did some "Divide'n'Conquer" using (ironically) Temp Tables to seriously whittle down the core of some monster queries. We also had about 30 front-end queries that kept logical and physical IO hopping, and normal average CPU usage was usually at 40% across them all when a batch wasn't running.

    After the code fixes we made, all of that, including the need for large TempDB files, went away. Peer reviews and performance testing have mostly kept it that way, and any surprises are dealt with quickly so that they don't usually become a problem. (We had one slip through that I warned them was going to be a problem almost 3 years ago... it alone has caused a recent "tipping point" increase across 32 CPUs of about 14%... a nasty bit of ORM code that isn't scalable.)
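    As a purely hypothetical illustration of that "Divide'n'Conquer" pattern (the table and column names are made up): materialise the narrowed-down core into a Temp Table first, instead of letting a many-to-many join plus DISTINCT explode inside one monster query.

    -- Step 1: whittle the core down to just the keys that matter.
    SELECT o.CustomerID, o.OrderID
    INTO #CoreOrders
    FROM dbo.Orders AS o
    WHERE o.OrderDate >= '20170101';

    -- Step 2: join the small Temp Table outward; no DISTINCT needed
    -- to mop up duplicates from a many-to-many join.
    SELECT c.CustomerName, d.ProductID, d.Quantity
    FROM #CoreOrders AS co
    JOIN dbo.Customers AS c ON c.CustomerID = co.CustomerID
    JOIN dbo.OrderDetails AS d ON d.OrderID = co.OrderID;

    DROP TABLE #CoreOrders;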

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
