Performance Question

  • I always run into situations like this. I have six databases that are completely identical except for size; the smallest is 350 GB and the largest is 1.2 TB. Sometimes I observe that the same query runs longer on the small database but quicker on the bigger database. I tested this by rebuilding all indexes on all the databases, but the problem remains. What might cause the smaller database to run so slowly compared to the bigger database? There were no other processes running on the server while I was testing, and all of these databases reside on a single server.

    How would I find the problem area to fix this issue? Any ideas?

  • If the tables are different sizes, all bets are off. Different cardinality (row count) estimates and statistics might lead to a different query plan - SQL Server might switch from a loop join to a hash join, for example. It might also choose to invoke parallelism. Other factors that come into play include the proportion of the accessed data and index pages that happen to be in cache at the time.

    In short, it could be any one of a number of things. If you want a more detailed analysis, post the query plans from two actual executions (*.sqlplan files).
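
    If it helps, an easy way to capture the actual plan as XML from a query window is below (just a sketch; in Management Studio you can also use "Include Actual Execution Plan" and save the result as a .sqlplan file). dbo.SomeTable is only a placeholder for your own query.

      SET STATISTICS XML ON;

      -- Placeholder for the query whose plan you want to capture;
      -- the actual plan XML comes back as an extra result set
      SELECT COUNT(*) FROM dbo.SomeTable;

      SET STATISTICS XML OFF;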

  • Along with the query plans, include the query itself.

    One thing that could make a difference is server hardware configuration. You may want to verify, and re-verify, that the server hardware is exactly the same.
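
    As a quick sanity check from T-SQL, something like this shows what each server reports about its CPUs (a sketch; sys.dm_os_sys_info has more columns, and some column names vary by version):

      -- Basic hardware/instance information to compare across servers
      SELECT cpu_count, scheduler_count, hyperthread_ratio
      FROM sys.dm_os_sys_info;

      -- Version and edition differences can also explain plan differences
      SELECT SERVERPROPERTY('ProductVersion') AS version,
             SERVERPROPERTY('Edition') AS edition;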

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • CirquedeSQLeil (4/7/2010)


    Along with the query plans, include the query itself.

    Included in the sqlplan file 😉

  • I am just asking in general; I don't have a specific query. The reason is that some of the developers came to me saying that they ran a query two days back and it was slow on the smaller database but fast on the bigger database.

    As a sysadmin on the server, is there a way I can find:

    1. who ran what queries in the past

    2. was there anything blocking them during that time

    3. how many users were using the same database

    4. what was the exact time they faced this problem, so that I can check the amount of CPU/memory used at that time

    All of these questions are general, not specific to any query. The most interesting thing is that the server our developers use is read-only - none of the databases on that server do any DML - so I hope there won't be any index fragmentation. Would the problem be solved if I update statistics with a full scan, even though the database is set to auto-update statistics?
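
    For reference, the statement I have in mind is something like the following (dbo.BigTable just stands in for one of the tables used in the slow query):

      -- Full scan builds exact statistics; auto update only samples rows,
      -- so the two can produce different query plans
      UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

      -- Or update every table in the database with the default sampling
      EXEC sp_updatestats;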

  • You can try looking at the default trace file.

    Otherwise, you should set up some sort of auditing to determine that information.
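
    Something along these lines reads the default trace (a rough sketch - the file path comes from sys.traces on your own server, and keep in mind the default trace only captures certain event classes, not every query):

      -- Locate the default trace file and read its events
      DECLARE @path NVARCHAR(260);

      SELECT @path = path
      FROM sys.traces
      WHERE is_default = 1;

      SELECT te.name AS event_name,
             t.LoginName,
             t.HostName,
             t.StartTime,
             t.TextData
      FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
      JOIN sys.trace_events AS te
        ON te.trace_event_id = t.EventClass
      ORDER BY t.StartTime DESC;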

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Obviously there are a lot of things that this could be, but the first thing that comes to my mind when I read your original post is what exactly is different about the databases that makes the size vary so drastically. You said that they are identical, but that really can't be the case with the large difference in size.

    Obviously indexes add size to a database, and if the larger databases have exactly the same table structures but a whole bunch more indexes (meaning the DB size would be much larger), then it would make sense that the queries would run faster there.

    All that said, as mentioned above, you can get the plans for the query in each environment, and I would definitely get the I/O stats for them as well to help key you in on what is running differently. Something is doing a whole lot more reads, and probably table scans, somewhere.
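
    As a reference point, the I/O stats are just session settings - run the query with these on in both databases and compare the logical reads per table in the Messages tab (dbo.SomeTable is a placeholder):

      SET STATISTICS IO ON;
      SET STATISTICS TIME ON;

      -- Placeholder for the slow/fast query being compared
      SELECT COUNT(*) FROM dbo.SomeTable;

      SET STATISTICS IO OFF;
      SET STATISTICS TIME OFF;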

  • Looking back without relevant trace files in place during that time is probably hard to do (if possible at all).

    I would ask the developer to wrap that query into a stored procedure and set up a trace looking specifically for locking information and stored procedure events (maybe even creating a user-defined event if there are many stored procedures running).

    Then I would run that trace on both databases and compare the results.
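
    The trace definition itself is easiest to script out of Profiler (File > Export > Script Trace Definition). For the locking side, one server setting worth knowing about is the blocked process threshold (a sketch; the 5-second value is arbitrary):

      -- Emit a Blocked Process Report trace event for anything
      -- blocked longer than 5 seconds
      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sp_configure 'blocked process threshold', 5;
      RECONFIGURE;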



    Lutz
    A pessimist is an optimist with experience.

    How to get fast answers to your question
    How to post performance related questions
    Links for Tally Table, Cross Tabs and Dynamic Cross Tabs, Delimited Split Function

  • Here is an example I just created; attached are the SQL plans of the slow and fast running queries.

    Please note that slowqry runs against a 200 GB database and takes 3 hours, while fastqry runs against a 1.8 TB database and takes 6 minutes.

    Both databases are completely identical except for the volume of data.

  • The fast query plan file is damaged - you seem to have performed multiple search-and-replace operations on the XML - for example 'Ca' was replaced with 'USA'. Unintended replacements have broken it.

  • Here is what I observed: there is an inner join in the query that joins to a remote server, but if I run this query locally on that server it takes only 5 seconds. Any reason?

    But still, my question as to why the same query runs faster on the other database is not answered.

  • Do you execute the slow query against a remote server?

  • I execute both queries from the remote server.

  • Here is the situation in my environment.

    Slow Qry = Server A > View on Server B.Database B > View will access Server C

    Fast Qry = Server A > View on Server B.Database BB > View will access Server C

    When I query directly from A to C for the slow qry it runs in 2 seconds, but through the view it takes 2 hours.

    If that is the case, how does the fast qry run in 5 minutes via Server B to Server C, even though that database is much larger? This appears strange to me.

  • Now that you point out the remote calls, I will say that I have seen some circumstances (all on SQL 2000, admittedly) where the optimizer seems to lose its mind when dealing with linked server data, especially when that data is used within a join that contains many other tables, remote or not.

    To resolve that, I ended up loading the remote data into a temp table on the local server, so that the remote query was no longer part of a large join with other tables but was instead isolated. Loading the temp table took just a few seconds, where I had been seeing 30-40 minutes when the join to the remote server was included.

    I am not sure that this is your problem, but it is certainly a possibility and may be worth trying if the code is easy enough to separate.
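
    The shape of that workaround is roughly this (a sketch - the server, database, table, and column names are all made up):

      -- Pull only the needed remote rows into a local temp table first
      SELECT r.KeyCol, r.DataCol
      INTO #remote_rows
      FROM SERVERC.RemoteDb.dbo.RemoteTable AS r
      WHERE r.DataCol >= '20100101';

      -- Then join locally, so the optimizer never has to plan
      -- a cross-server join
      SELECT l.KeyCol, l.OtherCol, r.DataCol
      FROM dbo.LocalTable AS l
      JOIN #remote_rows AS r
        ON r.KeyCol = l.KeyCol;

      DROP TABLE #remote_rows;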
