SQL Profiler - Excessive Reads?

  • Hi

    I'm onsite with a customer today and it's the first time I've been able to look at their DB. They are a small organisation with no more than 10 users accessing the DB.

    After running Profiler for 15 minutes over one of their main SQL 2000 DBs, I've noticed quite a lot of logical reads for the time span I ran the trace. I only selected the RPC:Completed & SQL:BatchCompleted events for this trace.

    Below is what I have in the trace as a total for the 15 minutes - would anyone agree that these reads are excessive?

    CPU = 19,429 (ms)

    Reads = 1,744,203 (logical reads, 8 KB pages)

    Writes = 665

    Duration = 32,321 (ms)

    I've still got a lot of testing to do, but considering the number of users and the short time span, I did not expect such a high number of reads.
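    For reference, here is one way such totals can be computed from a saved trace (a minimal sketch, assuming the trace was written to a file; the path below is hypothetical), using ::fn_trace_gettable to load the trace into a rowset:

    ```sql
    -- Load the saved Profiler trace and total the I/O columns.
    -- EventClass 10 = RPC:Completed, 12 = SQL:BatchCompleted.
    SELECT SUM(Reads)    AS TotalReads,
           SUM(Writes)   AS TotalWrites,
           SUM(CPU)      AS TotalCPU,
           SUM(Duration) AS TotalDuration
    FROM ::fn_trace_gettable('C:\Traces\CustomerDB.trc', DEFAULT)
    WHERE EventClass IN (10, 12);
    ```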

  • Colin Hamilton (5/28/2009) [quoting the original post above]

    The number of reads depends on how much data has to be accessed in order to answer the queries, so the number of users doesn't tell you that much. It would be more interesting to know how large the database is, whether the queries are causing table scans, and whether indexes are being used efficiently.
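    To get those sizes quickly (a small sketch; the table name below is hypothetical), sp_spaceused reports the size of the whole database and of an individual table:

    ```sql
    -- Database-level totals: data size, index size, unallocated space
    EXEC sp_spaceused;

    -- Per-table breakdown: rows, reserved, data and index sizes
    EXEC sp_spaceused 'dbo.BigTable';  -- hypothetical table name
    ```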

    If you do a "SELECT * FROM table" without a WHERE clause on a 13 GB table, you get about 1.7 million reads from just one query.
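    That figure follows from the page size (a back-of-the-envelope check, not part of the original post): a logical read is one 8 KB page, so scanning 13 GB touches roughly 13 × 1024 × 1024 / 8 ≈ 1.7 million pages:

    ```sql
    -- 13 GB expressed as 8 KB pages
    SELECT (13 * 1024 * 1024) / 8 AS ApproxPageReads;  -- 1,703,936
    ```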

    [font="Verdana"]Markus Bohse[/font]

  • Well, after looking at some of the query plans there seem to be lots of table scans going on, plus one table in particular is about 8 GB in size and is accessed quite a bit, so that could well account for the high number of reads like you said, Markus.

    The DB is about 18 GB in size, but it looks like there are quite a few missing indexes and some fragmentation on the disks. Looks like I'll be spending a few more days here 🙁
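    On SQL 2000, the fragmentation side can be checked with DBCC SHOWCONTIG (a sketch; the table name is hypothetical). Low scan density or high logical fragmentation would point towards DBCC DBREINDEX or DBCC INDEXDEFRAG:

    ```sql
    -- Report fragmentation for every index on the table as a result set
    DBCC SHOWCONTIG ('dbo.BigTable') WITH TABLERESULTS, ALL_INDEXES;
    ```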

    Cheers for the reply Markus.

  • Depends on the number of queries that ran during that 15 min period. That number of reads could be excessive, or it could be very low.

    The total on its own doesn't really tell you anything; what you need to find is the queries with the highest total reads, as that will tell you which queries need optimising.

    This may help: http://www.simple-talk.com/sql/performance/finding-the-causes-of-poor-performance-in-sql-server,-part-1/
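    For example (a minimal sketch, assuming the trace has been loaded into a table - named TraceData here, which is hypothetical), the heaviest statements can be pulled out like this:

    ```sql
    -- Aggregate reads per statement text to find the worst offenders.
    -- TextData is ntext, so cast it before grouping.
    SELECT TOP 10
           CAST(TextData AS nvarchar(4000)) AS QueryText,
           COUNT(*)   AS Executions,
           SUM(Reads) AS TotalReads,
           AVG(Reads) AS AvgReads
    FROM dbo.TraceData
    GROUP BY CAST(TextData AS nvarchar(4000))
    ORDER BY SUM(Reads) DESC;
    ```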

    I'm working with a trace set at the moment where, in one hour, the total reads are over 450 million - that works out to around 112 million reads per 15 min.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
