Insert 2k rows in one second

  • Hi,
    I have a bourse application. At the beginning of the day, the service application receives 2K messages; these messages are text and must be parsed. During parsing, each message is split into 3, 4, or 5 parts that must be inserted into related tables. My problem is that when I receive 2K messages, parsing them leaves me with about 8K to 10K INSERT statements that must be executed separately against SQL Server, because I get only one message per connection to the web service. In my tests I can receive 2K messages in one second, but I can parse and insert only 200 to 250 messages per second (200 * 4 = 1,000 insert statements).
    My server is virtual, with 32 GB RAM, a 2.4 GHz CPU, and SSD storage.
    My database file configuration is OK, and I don't see any high waits in SQL Server.
    In another test, on a local PC with 16 GB RAM and a 3.6 GHz Core i7 with SATA storage, I can insert 600-800 messages per second (600 * 4 = 2,400 insert statements).
    But I don't understand why.

    The original title of this post was "Insert 2k Rows in 2 seconds".  I'm curious as to what problem you had and why you deleted your question.  Did you come up with a solution?  Frequently, folks can actually help improve any solution you may have come up with.  You shouldn't short-change yourself by withdrawing the question.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

  • Thanks for your hint.
    I've updated my question with the new state of my problem.

  • Hamid-Sadeghian - Saturday, September 22, 2018 1:12 PM


    I think the answer is going to come down to the code you're using as well as the target tables.  Can you post some example data and the CREATE TABLE statements for the target tables, along with any constraints and indexes that may be on them?  It would also be helpful if you posted the code you're using.

    Also, are you saying that you're parsing the text data in the application and then trying to insert it into SQL Server tables one row at a time?

    Last but not least, I don't know what a "bourse application" is.

    --Jeff Moden



  • For example, something like this link.

    The insert statements are very simple:
    each table has a primary key and only 2-7 columns.

    For each table I have a stored procedure that takes parameters and contains an INSERT statement with an OUTPUT clause to return the identity value.

    Each table's primary key is an identity column.
    A sample message looks like this:
    223875 a1 1397/07/01 258874.13 testdata 1287749 a1record samplefield ,

    I have to split these messages and call the stored procedures to insert into the tables.
    One important thing is that the inserts must be sequential and cannot be parallelized.
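    To give an idea of the shape of these procs, here is a minimal sketch; the proc, table, and column names below are examples, not the real ones:

    ```sql
    -- Example only: real table/column names differ.
    CREATE PROCEDURE dbo.InsertMessagePart
        @InstrumentId varchar(20),
        @Price        decimal(18,2)
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Id is an IDENTITY primary key; the OUTPUT clause
        -- returns the generated value to the caller.
        INSERT INTO dbo.MessagePart (InstrumentId, Price)
        OUTPUT inserted.Id
        VALUES (@InstrumentId, @Price);
    END
    ```

    The application calls one such proc per parsed part, which is where the 8K-10K separate calls come from.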

  • Most likely the issue is on the application side and in how the data is being split/processed.
    Added to that, you said you cannot do parallel processing, which is only going to add to the time it takes.

    For us to be able to help, we would need:

    • The application code related to receiving messages, splitting them, and calling the procs
    • The stored procs being called
    • The reason why the process can't be parallel
    • Since you mentioned the proc returns the identity value of the inserted records, what you are doing with that value
    • The specs of the server doing the message processing, including load stats (CPU, memory, network usage)

    As for the server being slower than your local PC, there are lots of things that can affect this:

    • VM type (VMware, Hyper-V)
    • VM configuration (number of cores presented, number of vCPUs per core)
    • VM disk type - pass-through, VMDK, SAN
    • Whether the VM is over-allocated (both CPU and memory)
    • Whether the VM, host OS, and host BIOS are set to high performance
    • If VMware - are there CPU Ready waits? Is there memory ballooning?
    • Latency on calls to the VM

    Possibly others as well.

    But I would start with possible code changes - things that seem logical to build a particular way at first don't always perform well, and a rewrite may be required for parts of the process. Sometimes it's a small thing, sometimes a major rewrite, but without the code it's impossible to say.
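    As one example of the kind of code change that often helps here: instead of one proc call per row, the application could pass a whole batch of parsed parts as a table-valued parameter and do one set-based insert. A rough sketch, where the type, table, and column names are hypothetical:

    ```sql
    -- Hypothetical table type matching one kind of parsed message part.
    CREATE TYPE dbo.MessagePartList AS TABLE
    (
        SourceKey    int           NOT NULL PRIMARY KEY,  -- position within the batch
        InstrumentId varchar(20)   NOT NULL,
        Price        decimal(18,2) NOT NULL
    );
    GO

    CREATE PROCEDURE dbo.InsertMessageParts
        @Parts dbo.MessagePartList READONLY
    AS
    BEGIN
        SET NOCOUNT ON;

        -- One set-based insert per batch instead of one call per row.
        -- SQL Server assigns identity values in ORDER BY order on an
        -- INSERT ... SELECT, which preserves the message sequence.
        INSERT INTO dbo.MessagePart (InstrumentId, Price)
        SELECT InstrumentId, Price
        FROM   @Parts
        ORDER BY SourceKey;
    END
    ```

    That cuts thousands of round trips down to a handful, which is usually where the time is going in this kind of RBAR pattern.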

  • Hamid-Sadeghian - Sunday, September 23, 2018 12:00 PM


    So there's nothing in the data that could be cribbed for the correct sort order? :blink:  How are you maintaining the correct sort order on the application side of the house?

    --Jeff Moden



  • frederico_fonseca - Sunday, September 23, 2018 12:56 PM


    One important thing is that I cannot change the application's business logic right now, because the application was written by another team and it would be very costly for me to change it. In my long-term plan I do want to change the message-parsing logic and reduce the number of inserts.
    One thing I don't understand is why, with higher CPU power, the VM can parse and insert about 2x more messages, from 250 up to 500-600.
    The processing must be sequential because each insert returns an ID that must be used for an insert into another table, and because I have very little time before this system must be operational, I cannot make large changes to the business logic and code.

  • Jeff Moden - Sunday, September 23, 2018 4:42 PM

    So there's nothing in the data that could be cribbed for the correct sort order? :blink:  How are you maintaining the correct sort order on the application side of the house?

    The sort order comes from the IDs, which are identity columns in my tables.

  • This is a raw message sample:

    "MessageCode":"0053","InstrumentId":"IRB3AL010251","Message":"AdAIaEwAAAACAAAAAluoTpFheHh4eHh4eDEwMDM2MiBOT1oxSVJCM0FMMDEwMjUxQUwxUTEyMDE4MDgwNTE2MzQ1NDAwNTMwMDAwMDFBbGJvcnpOci5JSlIyMzA4MDUzMDYwSVJOMjAwMDAxMDAwMDAwMDAgICAwMDAwMDEwNTAwMDBBTDAxIMrM5e3Sx8og5MfmkMfkINHt4e0gx+HI0dLk7dHmIDAwMDAwMCAgIDAwMDAwMDAwMCAgMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDIwMTgwODA1MDBBMDAwMDAwMCAgICAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAxWjEgICAwMDAwMDAwMDAwMDBJUiBJUlIyMDE4MDgwNCAwMDAwMDAgIDEwMDAwMDAwMDIwMDAwMTAwMDAwMDAwICAgICAgICAgICAgIDIwMDAwMTAwMDAwMDAwMjAwMDAwMDAwMDAxMDDH4cjR0jAyMSAgICAgICAgICAwIDAwICAwMDAwMDAwMDAwMDAwQUwwMTAyICAgICAgICAgICAgICAgICAgICAgICAgICAgIDAwMDAwIDAwMDAwMDAwMDAwMDAgIDAwMDAwMDAwMDAwMDEgMDAwMDAwMDDHzMfR5SDR7eHtIMfhyNHS5O3R5jE0MDIwNTE0ICAgICAgICAwMDAwMDAwMCAgICAgICBJUkIzQUwwMTAyNTggICAgICAgICAgICAgICAgICAwMDAwMDAwMDAwMDAwMDAwMCAgICAgICAgICAgICAwMDAwMDAwMDAwMDEgMDAwMDAwMDAwMDAwMCAgICAgICAgICAgMDAgMDAwMDAwMDAwMDAwMCAgIDAwMDAwMDAwMDAwMCAgMDAwMDAwMDAwMDAwMDAwICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgRSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAwMCAyIDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMCAgIDAwMDAwMDAwMDAwMTAwMDAwMDAyNTAwMCAwMDAwMDAwMCAgMDAwMDAwMDAwMDAwMDIwMDAwMTA1MDAwMDAwMjAwMDAwOTUwMDAwMDAwNU8gIC
AgMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIE5PIDAgNDY5IDY5MTAzMCAgMDQgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIDAwMDAxMDAwMTY5ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMDAwMDAwMDI1MDAwMDAwMDAwMDI1MDAwMjAwMDAxMDAwMDAwMDBGMDAwMDAwMDAwMDAxIDAwICAgICAgICAgICAgMDAwMDAwMDAwMDAwMDAwICAgMDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMCAgAA==","MessageVersionId":"267000000000002"}

    and this is a parsed message sample:

    {"LRLCDIFFHeader":{"Tech_head_Type":1,"ItemCode":2256,"SessionNumber":19560,"ABSMessageNumber":150994944,"MessageNumberForItemCode":150994944,"BroadCastTimestamp":2437851227,"TransMitterSignature":8680820740569200737,"InstumentCharacteristicHeaderType":"1","MarketFeedCode":"00","MarketPlaceCode":"362","FinancialMarketCode":" BK","CIDGrc":"Z4","InstrumentID":"IRB3BK0199C4","CValMNE":"BK1Q4","DEven":"20180721","HEven":"160945","MessageCodeType":"0053","SEQbyINSTandType":"000001"},"InstrumentID":"IRB3BK0199C4","MessageCodeType":"0053","Message":{"ACarValRLC":{"LVal18":"BG.KaraneMRB210312","YVal":"306","SVal":"0","CPyEmet":"IRN","QNmVlo":{"IFt":"2","QMt":"0000100000000"},"CDevNm":" ","ZTitAd":"000000500000","CSocCSAC":"BK01 ","LSoc30":"?????? ??? ???? ?????991222 ","ASecEco":{"CGdSecEco":"00","CSecAtvEco":"00","CSecCompAtv":"00","Filler01":" "},"CEtFnc":"00000000","IValMry":"0","CSegMarEnvLco":" ","XIntOblRte":{"IFt":" ","QMt":"0000000000000"},"YTxInt":" ","DEmisObl":"00000000","DPrRgLivVal":"00000000","DDebAm":"00000000","DFinAm":"00000000","DESop":"20180721","YOPSJ":"00","CGdSVal":"A","CSicoRGAChn":"000000","CQtmJoui":"0","CDevPEmis":" ","PEmis":{"IFt":" ","QMt":"0000000000000"},"YRbAnt":"0","YRbPro":"0","IValAdSrd":" ","DEchPretEmp":"00000000","IDtVote":"0","IDemat":"0","ITpi":"0","ILcoVwap":"1","CGrValCot":"Z4","Filler001":" ","AClasCo":{"NDivCo":"0","NRubCo":"000","NOrdCo":"0000000"},"IMarche":"0","CPyCot":"IR ","CDevCot":"IRR","DInMar":"20170311","CCpmLco":" ","CSocEmetVal":"000000","CIdxPasCotVarVal":" ","YUniExpP":"1","DDrCV":"00000000","PDrAjSajCotV":{"IFt":"2","QMt":"0000100000000"},"Filler002":"     ","PDrAjCotV":{"IFt":"2","QMt":"0000100000000"},"QPasCotFxeVal":{"IFt":"2","QMt":"0000000000100"},"LVal18AFC":"?????994    ","YTitVal":"0","YAdTitNg":" ","OCertifVAL":"00","Filler003":" ","QNmVal":{"IFt":" ","QMt":"0000000000000"},"CAFCValObjMsg":"BK0199","CAFCValSjaWar":"  ","IYMdvVal":" ","Filler004":"       
","IApaValRlISB":"0","IApaValRlSbi":"0","IApaValRlGrgr":"0","IApaValRlDn":"0","IApaValRlGar":"0","QNorCpxBlcFmp":{"IFt":" ","QMt":"0000000000000"},"Filler005":" ","SRbEmp":"0","QQtTranMarVal":"000000000001","Filler006":" ","DPrEch":"00000000","Filler007":"?????? ??? ???? ?????991222   ","ASbAniVal":{"CResSbAniVal":"00000","CSbAniVal":"000"},"Filler008":" ","YApaValIdxRLC":" ","Filler009":"  ","CIsin":"IRB3BK0199C0","CValIsinChn":"    ","Filler0010":"  ","TDeDn":" ","YTrtTitSico":"0","DAdValDep":"00000000","DRadValDep":"00000000","CFfoDepVal":"  ","AClsEcoFtseSoc":{"CGdSecEcoFtse":" ","CSecEcoFtse":" ","CSoSecEcoFtse":" "},"Filler0011":" ","ZNorTitBlcNg":"000000000001","Filler0012":" 0000000000000    ","CFlmVal":"00","QNmMarValVlo":{"IFt":" ","QMt":"0000000000000"},"CDevNmMarValIso3A":" ","ZTitCirObl":"000000000000","Filler0013":"0 000000000000000    "},"DHDebCotProdMdv":"     ","DHFinCotProdMdv":"     ","IValiOmIns":"E","CProdCpsProdYCbn":[null,{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "},{"Value":"    "}],"AKProdCpsProdYCbn":[null,{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" 
","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"},{"CSignKMuProdCps":" ","KMuProdCpsProdCbn":"00"}],"YExpPValMdvAdf":" ","YExpVarPValDrPRf":"2","QTickValMdv":{"IFt":" ","QMt":"0000000000000"},"PExoProdMdv":{"IFt":" ","QMt":"0000000000000"},"CDevPExoProdMdv":" ","QTitMinSaiOmProd":"000000000001","QTitMaxSaiOmProd":"000000030000","ICaVarPJDrPRf":" ","DSeaPCompValMdv":"00000000","YPCompValMdv":" ","PCompValMdv":{"IFt":" 
","QMt":"0000000000000"},"PSaiSMaxOkValMdv":{"IFt":"2","QMt":"0000105000000"},"PSaiSMinOkValMdv":{"IFt":"2","QMt":"0000095000000"},"ZMaxLimDifVal":"05","YAppaValMdv":"O","YQStg":" ","PCpsDrvObl":{"IFt":" ","QMt":"0000000000000"},"XDtaStg":{"IFt":" ","QMt":"0000000000000"},"YStg":" ","YCreValMdv":" ","CIdAdfCreValMdv":"   ","CIdNgCreValMdv":"   ","CIsinProdSja":"    ","YMarNSC":"BK","YExoFamProdYOpt":" ","ISupOmCrn":"0","IEtaPcsOl":" ","CComVal":"4","CSecVal":"69 ","CSoSecVal":"6911","YDeComp":"3","IProdIsl":"0","Filler1":" 0","IVtDec":"4","LPra":"    ","Filler2":"    ","Filler3":"      ","CIsinForeign":"    ","LocForeignIndicator":"0","CIndustryValICB":"0001","CSuperSecValICB":"0001","CSecValICB":"69 ","Filler4":" ","StSplitAgg":{"QCurStSplit":"    ","QNewStSplit":"    "},"StDivAgg":{"QCurStDiv":"    ","QAddStDiv":"    "},"QTitMaxSaiOmBuy":"000000030000","QTitMaxSaiOmSell":"000000030000","Filler5":"20000100000000F000000000001 "}}

  • Hamid-Sadeghian - Sunday, September 23, 2018 10:03 PM


    Look at the OUTPUT Clause.  That will allow you to get multiple IDs from a single insert, and then use them to join to the original dataset for the child inserts.
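    A sketch of that OUTPUT-based pattern, using hypothetical names and assuming the batch of parsed parts is already in a table variable @Parts keyed by position (MERGE is used because, unlike INSERT, its OUTPUT clause can reference source columns, which is what lets you map each source row to its new identity):

    ```sql
    -- Hypothetical: @Parts holds one batch of parsed parts, keyed by SourceKey.
    DECLARE @Map TABLE (SourceKey int NOT NULL, NewId int NOT NULL);

    MERGE dbo.MessagePart AS tgt
    USING @Parts AS src
        ON 1 = 0                          -- never matches: every source row inserts
    WHEN NOT MATCHED THEN
        INSERT (InstrumentId, Price)
        VALUES (src.InstrumentId, src.Price)
    OUTPUT src.SourceKey, inserted.Id
        INTO @Map (SourceKey, NewId);

    -- @Map now pairs each source row with its generated identity,
    -- so the child rows can go in as one set-based INSERT ... SELECT.
    INSERT INTO dbo.MessageChild (ParentId, SomeValue)
    SELECT m.NewId, p.SomeValue
    FROM   @Map  AS m
    JOIN   @Parts AS p ON p.SourceKey = m.SourceKey;
    ```

    The parent/child ordering is still strictly preserved - children are only inserted after their parents exist - it just happens one batch at a time instead of one row at a time.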

  • DesNorton - Monday, September 24, 2018 7:28 AM


    Look at the OUTPUT Clause.  That will allow you to get multiple IDs from a single insert, and then use them to join to the original dataset for the child inserts.

    Yes, I know, but my problem is that sequentiality is very important for me. Every row inserted into each table returns an ID value via OUTPUT, and that ID is used to insert another row into other tables.
    My big problem is that I cannot change the application's business logic right now, and the system must be operational very soon.

  • 1.    VM type (VMware, Hyper-V): VMware
    2.    VM configuration (number of cores presented, number of vCPUs per core): 2 sockets, 8 cores per socket
    3.    VM disk type - pass-through, VMDK, SAN: VMDK
    4.    Is the VM over-allocated (both CPU and memory)?
    Maximum VM CPU utilization over one week is less than 25 percent, and free VM memory is 11 GB in the worst case. Memory is reserved at the host and there is no contention; there is 8% CPU contention on the host in the worst case.
    5.    Are the VM, host OS, and host BIOS set to high performance?
    The host is set to high-performance mode, but the VM's performance settings are at their defaults.
    6.    If VMware - are there CPU Ready waits? Is there memory ballooning?
    There are 0.05% CPU Ready waits in the worst case (8:30 AM); there is no memory ballooning because of the memory reservation.
    7.    Latency on calls to the VM:
    If "latency on calls" means disk latency, there are two metrics:
    Maximum disk read latency = 0.81 ms
    Maximum disk write latency = 1.4 ms

  • Hamid-Sadeghian - Wednesday, September 26, 2018 4:40 AM


    By latency I meant the calls from your message-processing server to SQL Server - and, for that matter, also from the server supplying the messages to the message-processing server itself.

    5 - High performance: set the VM to high performance as well - this can have a significant impact on SQL Server processing.

    And as said before, it is highly unlikely you will get your processing up to your desired target without code changes. So good luck with it.
