Memory

  • Thank you.

    I will bookmark this post as a reference on the memory options.

    In my case, this server is 64-bit with only 4 GB of memory, and it hosts quite a few transactional databases.

    So do you think one thing I need to do is add memory to this server, to 8 or 12 GB?

    After that, I can set up Lock Pages in Memory and set max server memory.

    For Lock Pages in Memory, which account should I use? The account that runs the SQL Server services?
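    For reference, a minimal sketch of setting max server memory with sp_configure once the RAM is added; the 6144 MB cap is only an assumed example for an 8 GB box, chosen to leave headroom for the OS:

    -- Make the advanced options visible (requires appropriate permissions)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Assumed example: cap SQL Server at 6144 MB on an 8 GB server, leaving room for the OS
    EXEC sp_configure 'max server memory (MB)', 6144;
    RECONFIGURE;

    -- Verify the current setting
    EXEC sp_configure 'max server memory (MB)';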

    By the way, I read from this article: http://msdn.microsoft.com/en-us/library/ms178067.aspx

    it says: Allowing SQL Server to use memory dynamically is recommended;

  • sqlfriends (6/11/2011)


    By the way, I read from this article: http://msdn.microsoft.com/en-us/library/ms178067.aspx

    it says: Allowing SQL Server to use memory dynamically is recommended;

    You are not wrong in that it is what Microsoft recommends.

    Jeffrey Williams-493691 (6/10/2011)


    Brandie Tarvin (6/9/2011)


    Again, it depends, but I believe the blanket recommendation is to let SQL Server figure out memory allocation for itself unless you are having problems.

    Too many people don't understand what they are doing when they fiddle with memory settings and usually make a bad situation worse, or a good situation bad.

    You have to be careful with this blanket statement. On x86 hardware - that was valid, but on x64 hardware it really isn't valid anymore. If you are on x64 with only 4GB of memory and SQL Server needs more memory - it could take it all and starve the OS.

    Besides your personal experience, could you point out a Microsoft article that proves I am wrong with my "blanket statement" that Microsoft recommends dynamic memory allocation?

    Otherwise, it seems we have a case of Pot and Kettle, and you're the Pot.

    Brandie Tarvin, MCITP Database Administrator

  • It looks to me like yes, you need more memory. Target Server Memory tends to be higher than the current allocation when SQL Server wants more memory than it has.

    Please monitor these perfmon counters and post the averages over a period of sluggish SQL response time:

    Page life expectancy

    Pages/sec

    SQL Server Total Server Memory and Target Server Memory

    Buffer cache hit ratio

    At the end of the monitoring period, collect the output of DBCC MEMORYSTATUS (http://support.microsoft.com/kb/907877). A query for reading the memory counters from inside SQL Server is sketched below.
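    Most of these counters can also be read from within SQL Server via sys.dm_os_performance_counters; a minimal sketch follows (Pages/sec is an OS-level counter, so it still has to come from Performance Monitor):

    -- Buffer pool and memory counters exposed by SQL Server itself
    SELECT [object_name], counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (
        'Page life expectancy',
        'Buffer cache hit ratio',        -- divide by the 'Buffer cache hit ratio base' counter for a percentage
        'Total Server Memory (KB)',
        'Target Server Memory (KB)'
    );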

  • OK, I will set it up and post the results tomorrow.

    And for the last statement: "At the end of the monitoring period, collect the output of DBCC MEMORYSTATUS."

    This cannot be captured in Performance Monitor, so should I set up a SQL Agent job to do it?

    Thanks

  • sqlfriends (6/14/2011)


    And for the last statement: "At the end of the monitoring period, collect the output of DBCC MEMORYSTATUS."

    This cannot be captured in Performance Monitor, so should I set up a SQL Agent job to do it?

    You can try, but I have no idea how that will affect the output. You might want to test it in SSMS and via a job on a Dev server to see if you get the same information from both.

    How do you plan on capturing the information in the job?

    Brandie Tarvin, MCITP Database Administrator

  • I set up the job using T-SQL with output to a file. I tried it, and it runs fine.
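    For illustration, a rough sketch of how such a job might be created with the msdb job procedures, writing the DBCC MEMORYSTATUS output to a file; the job name and output path are only assumptions, and a schedule would still need to be attached with sp_add_schedule / sp_attach_schedule:

    USE msdb;
    GO

    -- Hypothetical job that dumps DBCC MEMORYSTATUS to a text file
    EXEC dbo.sp_add_job
        @job_name = N'Capture DBCC MEMORYSTATUS';

    EXEC dbo.sp_add_jobstep
        @job_name = N'Capture DBCC MEMORYSTATUS',
        @step_name = N'Run DBCC MEMORYSTATUS',
        @subsystem = N'TSQL',
        @command = N'DBCC MEMORYSTATUS;',
        @output_file_name = N'C:\Temp\memorystatus.txt',  -- assumed path
        @flags = 2;                                        -- 2 = append output to the file

    -- Target the local server so the job actually runs
    EXEC dbo.sp_add_jobserver
        @job_name = N'Capture DBCC MEMORYSTATUS';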

    But the problem in my case is that I have already found some SQL jobs being delayed by several minutes during the morning peak hour. It may be because some blocking process is using resources at that time and holding up other jobs, so this job could be delayed as well, until the resources are released to it. So I am not sure it will capture the memory status at exactly the peak time.

    So how else can I capture it, other than running DBCC MEMORYSTATUS manually?

    Thanks

  • Is the RAM utilization high all the time, or only at particular times during the day? 4 GB usually works well for most OLTP systems hosting a few databases with a moderate workload.

    However, if you are performing ETL and the like on the same server, you might want to revisit that code to see if it can be optimized to consume less memory.

    Jayanth Kurup

  • sqlfriends (6/14/2011)


    But the problem in my case is that I have already found some SQL jobs being delayed by several minutes during the morning peak hour. It may be because some blocking process is using resources at that time and holding up other jobs, so this job could be delayed as well, until the resources are released to it. So I am not sure it will capture the memory status at exactly the peak time.

    So how else can I capture it, other than running DBCC MEMORYSTATUS manually?

    There is no other way.

    Once you've done your perfmon stuff and run DBCC MemoryStatus manually (because really that's the best way if you're having these issues), set up a server-side Profiler trace to run on the server the next day at about the same time. This will assist you with identifying problem code that may be hampering the server.

    Here's a trace template recommended by Andy Warren for this sort of thing.
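    Separately from that template, here is a minimal, hand-rolled sketch of a server-side trace that captures RPC:Completed and SQL:BatchCompleted events with CPU, reads, writes, and duration; the file path and maximum file size are assumptions to adjust for your environment:

    DECLARE @TraceID int, @maxfilesize bigint, @on bit;
    SET @maxfilesize = 100;   -- 100 MB rollover files (assumed)
    SET @on = 1;

    -- Create the trace; option 2 = TRACE_FILE_ROLLOVER; the path is an assumed example
    EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Temp\peak_hour_trace', @maxfilesize;

    -- Event 12 = SQL:BatchCompleted. Columns: 1 TextData, 3 DatabaseID, 13 Duration,
    -- 14 StartTime, 16 Reads, 17 Writes, 18 CPU
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 12, 3,  @on;
    EXEC sp_trace_setevent @TraceID, 12, 13, @on;
    EXEC sp_trace_setevent @TraceID, 12, 14, @on;
    EXEC sp_trace_setevent @TraceID, 12, 16, @on;
    EXEC sp_trace_setevent @TraceID, 12, 17, @on;
    EXEC sp_trace_setevent @TraceID, 12, 18, @on;

    -- Event 10 = RPC:Completed, same columns
    EXEC sp_trace_setevent @TraceID, 10, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 10, 3,  @on;
    EXEC sp_trace_setevent @TraceID, 10, 13, @on;
    EXEC sp_trace_setevent @TraceID, 10, 14, @on;
    EXEC sp_trace_setevent @TraceID, 10, 16, @on;
    EXEC sp_trace_setevent @TraceID, 10, 17, @on;
    EXEC sp_trace_setevent @TraceID, 10, 18, @on;

    -- Start the trace (1 = start; 0 stops it and 2 closes/deletes the definition)
    EXEC sp_trace_setstatus @TraceID, 1;

    SELECT @TraceID AS TraceID;   -- keep this ID so the trace can be stopped later

    The resulting .trc file can later be loaded with sys.fn_trace_gettable or opened in Profiler for review.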

    Brandie Tarvin, MCITP Database Administrator

  • Thank you, I will look into the attachment and the links.

    I attached the perf counters I collected this morning. See the attachment, and please let me know how you read and analyze them.

    For DBCC MEMORYSTATUS, I just found out it didn't run because I had scheduled it for tomorrow by mistake. Sorry about that.

    I also scheduled a Profiler trace for tomorrow morning.

  • I attach another copy as a .csv file.

    The perf counters ran from 6:00 am to 6:10 am. Then, at around 6:11, I got a blocking message saying the master database was suspended, but when I checked, the status had already changed.

  • I attach another copy as a .txt file.

    The perf counters ran from 6:00 am to 6:10 am. Then, at around 6:11, I got a blocking message saying the master database was suspended, but when I checked, the status had already changed.
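    For reference, a minimal sketch of a query that could be run the moment such a blocking message appears, to see which session is blocking and what it is waiting on (the DMVs used are standard, but interpret the output in context):

    -- Sessions that are currently blocked, and who is blocking them
    SELECT
        r.session_id,
        r.blocking_session_id,
        r.wait_type,
        r.wait_time,
        r.status,
        DB_NAME(r.database_id) AS database_name,
        t.text AS running_sql
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;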

  • The buffer cache hit ratio looks pretty healthy (above 99% on average), but page life expectancy is way too low. This indicates a very high volume of physical reads pushing data into the buffer pool, which may very well be because of some missing indexes.

    Please have a look at the following post:


    You need to target the queries causing the highest number of physical reads, and once those are tuned, take another set of perfmon counters. Just add one more counter to the list: Avg. Disk sec/Transfer. A sketch of a missing-index check is below.
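    As a starting point, a rough sketch of a query against the missing-index DMVs that can help confirm whether missing indexes are driving the physical reads; treat the output as suggestions to review, not as indexes to create blindly:

    -- Top suggestions from the missing-index DMVs, weighted by estimated impact
    SELECT TOP 20
        DB_NAME(d.database_id) AS database_name,
        d.statement AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_total_user_cost,
        s.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
        ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
        ON s.group_handle = g.index_group_handle
    ORDER BY s.avg_total_user_cost * s.avg_user_impact * s.user_seeks DESC;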

  • I just pulled one of the SQL Server standard reports, the top queries by CPU report, at around 1:00 pm, and found that a database called Discovery, which belongs to our network management software, is at the top of the list.

    I attached it; can someone help analyze it a little? I see it has a large number of reads, but I cannot see the actual queries. The important detail I noticed is that the compilation time is 6:03 am while the report was generated at 1:00 pm; 6:03 am is when we were experiencing slow response times, and that also delayed one of our transaction log backups by about ten minutes.

    Does that mean the compilation caused the slowdown?

    thanks

  • Why do you think this is high CPU? Can you find it in %?

    Run this query and it will give you the picture since the last restart of the SQL Server services. Having said that, the best way to find your system's bottleneck is to use SQLDiag to collect the data and analyze it.

    SELECT TOP 100
        [Database_Name] = DB_NAME(st.dbid),
        [Object_Name] = OBJECT_NAME(st.objectid),
        creation_time,
        last_execution_time,
        total_cpu_time = total_worker_time / 1000,
        avg_cpu_time = (total_worker_time / execution_count) / 1000,
        min_cpu_time = min_worker_time / 1000,
        max_cpu_time = max_worker_time / 1000,
        last_cpu_time = last_worker_time / 1000,
        total_time_elapsed = total_elapsed_time / 1000,
        avg_time_elapsed = (total_elapsed_time / execution_count) / 1000,
        min_time_elapsed = min_elapsed_time / 1000,
        max_time_elapsed = max_elapsed_time / 1000,
        avg_physical_reads = total_physical_reads / execution_count,
        avg_logical_reads = total_logical_reads / execution_count,
        execution_count,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1
        ) AS statement_text
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    WHERE OBJECT_NAME(st.objectid) IS NOT NULL
        --AND st.dbid = 2
    ORDER BY
        DB_NAME(st.dbid),
        total_worker_time / execution_count DESC;

  • Why do you think this is high CPU? Can you find it in %?

    This report is called "Top Queries by Total CPU Time" in the SQL Server standard reports.

    As for SQLDiag, is it an out-of-the-box tool for SQL Server, and does it have more overhead than running Performance Monitor or Profiler?

    Thanks
