Executing a process with a specific set of CPUs

  • I am running MS SQL Server 2000 on a machine with 4 CPUs.  There are several databases installed on this server.

    One of the databases has an application that creates several threads.   The problem is that this one application and its threads use up all of the CPU processing power. 

    Is there any way to limit the execution or "bind" the application to 2 or 3 of the CPUs for this one application?

    Thanks in Advance

    Eric Peterson

    epeterson@sbcsystems.com

     

  • Eric, take a look at the affinity mask option settings.
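
    For what it's worth, here is a rough sketch of how that option gets set (the bitmask value is only an example; check Books Online before touching it):

        -- Sketch only: the affinity mask is a bitmask (bit 0 = CPU 0, bit 1 = CPU 1, ...),
        -- so 7 (binary 0111) would tie the instance to CPUs 0, 1 and 2 on a 4-CPU box.
        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE
        EXEC sp_configure 'affinity mask', 7
        RECONFIGURE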


    "Keep Your Stick On the Ice" ..Red Green

  • I looked at the affinity mask option, but I am not sure that it will do what I am looking for.  My environment is a truly mixed workload.  The batch folks kick off big processes that can bring the system to its knees, while the online folks need response time.  If I could limit my batch users to 2 or 3 of the CPUs, I would make everyone happy.  With this in mind, even if setting the affinity mask option limits threads to 3 CPUs, how would I tell SQL Server to assign the online threads to specific CPUs and the batch ones to others?

  • I do not know... I wasn't even sure that the affinity mask would work for you.  I have not worked with it and do not know a lot about its uses.  All I was trying to do was point out a potential area to look into.

     

    I do have a question for you... What is your max degree of parallelism set to?
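
    If you need to check, something along these lines should show the current value ('show advanced options' has to be on to see it):

        -- Display the current max degree of parallelism setting
        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE
        EXEC sp_configure 'max degree of parallelism'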


    "Keep Your Stick On the Ice" ..Red Green

  • Looks like that option can exclude a CPU from SQL Server altogether.

    It looks helpful at the instance level.  For example, you can say your production instance uses CPUs 1, 2, and 4 and your development instance (same machine) can use CPU 3, something like that.

    But it's not what you're looking for.  I've heard people ask about this several times over the years, but have never read anything concrete about it.  That's why I followed up on this post.  I like to have that knowledge in the ole treasure chest.

    ..

    A'ight.

  • Well, I was hoping that this was something that Microsoft had added.

    Especially considering that the MS code was originally based on the Sybase code line and that Sybase has this functionality.  With Sybase Version 11 and later, you can bind CPUs to a processing group and assign specific logins/groups to each processing group.
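
    From memory, the Sybase side looks something like the sketch below (procedure names and parameters are as I recall them from the ASE docs, and the engine/login names are made up, so treat it as a rough outline rather than working code):

        -- Sybase ASE sketch (from memory): put an engine in a group, create an
        -- execution class tied to that group, then bind a batch login to the class.
        sp_addengine 3, 'BATCH_GROUP'
        go
        sp_addexeclass 'BATCH_CLASS', 'LOW', 0, 'BATCH_GROUP'
        go
        sp_bindexeclass 'batch_login', 'LG', NULL, 'BATCH_CLASS'
        go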

    Maybe Yukon will add something like this.

    EP

     

  • Eric, you may want to check the MAXDOP query hint.

    But to use it you would need to rewrite all your batch queries.
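
    For example, a big batch query could be capped at a couple of processors like this (table and column names are made up):

        -- Hypothetical batch query limited to 2 CPUs with the MAXDOP query hint
        SELECT CustomerID, SUM(OrderTotal) AS Total
        FROM dbo.BatchOrders
        GROUP BY CustomerID
        OPTION (MAXDOP 2)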

    Igor

  • Good idea, but I already have the max degree of parallelism set to 3. 

    But thanks, I didn't know about that option and will have to remember it for the future...

    EP

  • I guess if you have the money you could look at a cluster of servers: one for the online stuff and one for batch.

    DN

  • Looks like we are going to go the two-server route.

    On the brighter side, this will give me an excuse to play around with the replication features and should give me a warm standby database if the online server goes down...

    EP

     

    A lot of sites use this type of solution: two servers, one for online work and the second as a warm backup for reporting/batch/redundancy.

    If you have a SAN (Storage Area Network) or shared disk, you can use a couple of tricks to keep the online DB running and have another copy of SQL Server looking at the same files in read-only mode. This allows you to offload batch processing/reporting.

    DN

  • I agree.  

    Too bad that both of my user groups need current data, with little latency between the two databases.

    EP

     

  • I have found that on large systems, setting the max degree of parallelism to 1 can improve the overall performance of the system.  This was done on our SAP system, and what I noticed is that everything improved except for the larger processes, which declined in performance.  So I am currently investigating the cost threshold for parallelism setting to compensate (there is a quick sp_configure sketch after the list below).

    Other things you may look into are

    1) Tuning the queries/database

    2) Rescheduling batch processes to different times

    3) Using the 64-bit version of SQL Server

    4) Increasing the min memory per query (KB) setting in 1024 KB increments.
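
    For reference, these are the sp_configure calls I mean (the values shown are only examples, not recommendations):

        -- Examples only: server-wide parallelism and per-query memory settings
        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE
        EXEC sp_configure 'max degree of parallelism', 1        -- limit each plan to one CPU
        EXEC sp_configure 'cost threshold for parallelism', 15  -- only expensive plans go parallel
        EXEC sp_configure 'min memory per query (KB)', 2048     -- up from the 1024 KB default
        RECONFIGURE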


    "Keep Your Stick On the Ice" ..Red Green

  • How much latency can you tolerate?  Log shipping comes to mind as well.  What bums me out is that I am starting to see more and more of these transactionally replicated reporting servers in companies.  It seems to be a workaround for the limits of SQL Server, or a symptom of poor physical and logical design.
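
    At its simplest, log shipping is just a scheduled loop of something like this (paths and database name are hypothetical):

        -- Back up the log on the primary server...
        BACKUP LOG OnlineDB TO DISK = 'D:\LogShip\OnlineDB.trn'
        -- ...copy the file to the standby server, then restore it there in standby mode
        RESTORE LOG OnlineDB FROM DISK = 'D:\LogShip\OnlineDB.trn'
            WITH STANDBY = 'D:\LogShip\OnlineDB_undo.dat'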

    Eric, I would go for hardcore system, application, and database tuning before creating the reporting server.  But this is just me. 

     

    We have a SAN, and I can create a standby database, regardless of size, in seconds.  Nice feature.


    "Keep Your Stick On the Ice" ..Red Green

  • Jeff

    It would be nice to redesign the database, but it was designed before I got here, and I cannot pitch to management that we need to redesign an existing application when there are other workarounds.  So if I cannot fix the source of the problem...

    I have been down the tuning path.  Luckily the online app uses a lot of stored procedures, which I have really tuned.

    The server is really running very smoothly, except that the number of concurrent batch processes will eat up all of the CPU cycles.  And this only happens on large processing days (about 10% of the weekdays).  Without MS allowing me to restrict these users to specific CPUs, I have to split the processing to another machine and let my online users deal with the latency from the batch systems, which for them is a good trade-off because the online people live and die by response time.

    Thanks for all of your suggestions.

    EP  

     
