High CPU usage due to bad execution plan occurs unexpectedly

  • Actually I much prefer FtF for the first stage of any significant engagement too. But for "hey, this one thing is broken and I need help on it now" type of stuff VPN is the only reasonable way to go most of the time.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • TheSQLGuru (7/12/2011)


    Actually I much prefer FtF for the first stage of any significant engagement too. But for "hey, this one thing is broken and I need help on it now" type of stuff VPN is the only reasonable way to go most of the time.

    So to recap, you don't speak French :Whistling:.

    Glad we got it out in the open! 😀

  • TheSQLGuru (7/12/2011)


    Ninja's_RGR'us (7/11/2011)


    sdelachapelle (7/11/2011)


    Hi, thanks for the offer, but I work in Paris 😉

    Anyway, I think there was at least one unanswered question about why dbcc updateusage 'prevents' the problem. I doubt that it does 'prevent' it. It flushes the procedure cache for that table's plans and that is probably why things seem to run better for a while.

    Hi Guys (Salut les gars :discuss:) 😉

    I came to the same conclusion: DBCC UPDATEUSAGE somehow flushes the plan cache for the queries involving that table.

    Actually we can't afford to investigate this issue any further, and since the problem is "handled" with the query hint OPTION (HASH JOIN), we'll leave it for the moment (a quick sketch of the hint follows this post).

    But I agree with you that it doesn't solve the problem globally. However, I doubt that we have a weird hidden issue here. The table is updated massively and constantly, so it makes sense that the statistics get outdated very quickly and that we can end up with a sub-optimal execution plan; we just need to be aware of it (in my opinion).

    Cheers.
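    A minimal sketch of the workaround and the plan-cache check discussed above. The table and database names (dbo.BigTable, dbo.BigTableDetail, MyDatabase) are placeholders, not the poster's actual schema:

        -- Workaround from the thread: force a hash join for the problem query.
        SELECT o.OrderID, d.Quantity
        FROM dbo.BigTable AS o
        JOIN dbo.BigTableDetail AS d
            ON d.OrderID = o.OrderID
        OPTION (HASH JOIN);

        -- One way to test the "DBCC UPDATEUSAGE flushes the plans" theory:
        -- list cached plans that reference the table, run UPDATEUSAGE, then
        -- list them again and see whether they were evicted.
        SELECT cp.plan_handle, cp.usecounts, st.text
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
        WHERE st.text LIKE '%BigTable%';

        DBCC UPDATEUSAGE ('MyDatabase', 'dbo.BigTable');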

  • Then your problem is not solved. A hash join is likely not the most efficient option in every case.

    If the table truly gets updated that often, you can set up a job to update statistics every couple of hours. I've heard of a case where a full-scan update was required every 15 minutes to keep the plans from going bad (see the sketch after this post).

    I never had to use hints in production and I don't think you need one here either.
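    A minimal sketch of that scheduled alternative; dbo.BigTable and IX_BigTable_OrderDate are placeholder names:

        -- Refresh all statistics on the hot table with a full scan; wrap this
        -- in a SQL Server Agent job scheduled every couple of hours (or every
        -- 15 minutes in the extreme case mentioned above).
        UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

        -- Cheaper variant: refresh only the statistic the bad plan depends on
        -- (IX_BigTable_OrderDate is a hypothetical statistics/index name).
        UPDATE STATISTICS dbo.BigTable IX_BigTable_OrderDate WITH FULLSCAN;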

  • You are right, that won't always be the most efficient option.

    However, I ran several benchmarks, and between the two queries (with and without the hint) the difference tends to be only 200-300 ms, which is perfectly acceptable to us (measured roughly as in the sketch after this post).

    As for a full statistics update every 15 minutes, I also think it could be a better solution, but the management people over here are quite paranoid about deadlocks! I am pretty sure they won't agree to such a solution, especially now that the problem has been worked around...

    Also, I am leaving this company very soon and won't have time to prove to them that this solution is better. And honestly, I also don't want to change their way of working too much just before I leave.

    But I hear what you are saying and won't forget it when I face the same situation (and I am sure I will 🙂).

    Thanks for your advice.
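    A rough sketch of how such a with/without-hint comparison can be measured; the query and table names are placeholders for the real query:

        SET STATISTICS TIME ON;

        -- Without the hint (the optimizer chooses the join type):
        SELECT o.OrderID, d.Quantity
        FROM dbo.BigTable AS o
        JOIN dbo.BigTableDetail AS d ON d.OrderID = o.OrderID;

        -- With the workaround hint:
        SELECT o.OrderID, d.Quantity
        FROM dbo.BigTable AS o
        JOIN dbo.BigTableDetail AS d ON d.OrderID = o.OrderID
        OPTION (HASH JOIN);

        SET STATISTICS TIME OFF;
        -- Compare the "CPU time" and "elapsed time" lines in the Messages output.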

  • Deadlocks and update stats???

    The only caveats are disk and CPU. Stats updates don't take any locks (unlike index rebuilds).

  • I tried a couple of things and you are right: UPDATE STATISTICS doesn't prevent access to the data (read, update or write).

    However, it does hold some locks at the table level, though I am not sure at which level (schema locks, I guess — see the sketch after this post):

    Jour   OBJECT   Sch-S   UPDATE STATISTICS   TRANSACTION   (x6, identical rows)

    Jour   OBJECT   X       UPDATE STATISTICS   TRANSACTION

    Once again I learned something today, many thanks! 🙂
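    A sketch of one way to capture output like the above: while UPDATE STATISTICS runs in another session, query sys.dm_tran_locks from the same database (the filters are assumptions about how the output was produced):

        SELECT resource_type,
               request_mode,        -- compare with the modes in the output above (Sch-S, X)
               request_status,
               request_session_id
        FROM sys.dm_tran_locks
        WHERE resource_database_id = DB_ID()      -- run from the table's database
          AND resource_type = 'OBJECT';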

  • Well yeah, the process will lock the table so that the schema doesn't change and the table isn't dropped. I'm no locks expert, but that shouldn't prevent much from running (see the sketch after the quoted article below)!

    From http://www.sqlteam.com/article/introduction-to-locking-in-sql-server

    Schema locks (Sch)

    There are two types of schema locks:

    • Schema stability lock (Sch-S): used while generating execution plans. These locks don't block access to the object data.

    • Schema modification lock (Sch-M): used while executing a DDL statement. Blocks access to the object data since its structure is being changed.
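    A small illustrative sketch of the difference quoted above, with dbo.BigTable and SomeNewColumn as placeholder names:

        -- Session 1: DDL takes a Sch-M lock and holds it until the transaction ends.
        BEGIN TRAN;
        ALTER TABLE dbo.BigTable ADD SomeNewColumn int NULL;   -- Sch-M on dbo.BigTable
        -- (leave the transaction open)

        -- Session 2: even a plain read is blocked, because every query needs at
        -- least a Sch-S lock on the table, and Sch-S is incompatible with Sch-M.
        SELECT TOP (1) * FROM dbo.BigTable;

        -- Session 1: commit or roll back to release the Sch-M lock and unblock session 2.
        ROLLBACK;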
