Linked Server / Distributed Transaction Best Practice

  • I've come across a sproc in our system that makes transactional calls to a linked SQL Server. This all works fine, but the code is very messy: it is all dynamic SQL, because the linked server name is passed in as a parameter and concatenated onto every SQL statement (this sproc can live on several database servers, each with its own linked servers). Plus you are dealing with distributed transactions through the linked server.

    So, my thought was to create two sprocs - the 'sink', which does all the work and would reside on the target server, and the caller, which resides on the source server and calls the sink. The code in the sink could be cleanly written and would no longer require any calls to sp_executesql. It can still use a transaction, but it would no longer be distributed (I think).

    The caller would take in a linked server name parameter, which it would concatenate onto the sink sproc's name before calling out to the sink on the linked server. Minimal code.
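
    Roughly, I'm picturing something like this (all of the names are just placeholders made up for illustration):

    -- Sketch only; server, database, and object names are placeholders.

    -- Sink (installed on the target server): plain static SQL, local transaction.
    CREATE PROCEDURE dbo.usp_MoveData_Sink
        @SourceId INT
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRANSACTION;
            -- the real work goes here; no sp_executesql required
            UPDATE dbo.SomeTable SET MovedFlag = 1 WHERE Id = @SourceId;
        COMMIT TRANSACTION;
    END;
    GO

    -- Caller (installed on each source server): only the four-part name is built at run time.
    CREATE PROCEDURE dbo.usp_MoveData_Caller
        @LinkedServerName SYSNAME,
        @SourceId         INT
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @sprocToCall NVARCHAR(400) =
            QUOTENAME(@LinkedServerName) + N'.[TargetDb].[dbo].[usp_MoveData_Sink]';

        -- EXEC @variable runs the procedure named in the variable;
        -- parameters are passed separately rather than concatenated into the string.
        EXEC @sprocToCall @SourceId;
    END;
    GO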

    Yes, you now have two sprocs instead of one, and they need to be installed on all servers that can call other servers, but it just seems more solid.

    Would this be a good practice? Has anyone dealt with distributed transactions over linked servers and can say what the best practice is?

    Thanks in advance.

    Steve Kumbsky

    stevekumbsky@msn.com


  • Steve Kumbsky (1/26/2012)


    I've come across a sproc in our system that makes transactional calls to a linked SQL Server. This all works fine, but the code is very messy: it is all dynamic SQL, because the linked server name is passed in as a parameter and concatenated onto every SQL statement (this sproc can live on several database servers, each with its own linked servers). Plus you are dealing with distributed transactions through the linked server.

    So, my thought was to create two sprocs - the 'sink', which does all the work and would reside on the target server, and the caller, which resides on the source server and calls the sink. The code in the sink could be cleanly written and would no longer require any calls to sp_executesql. It can still use a transaction, but it would no longer be distributed (I think).

    The caller would take in a linked server name parameter, which it would concatenate onto the sink sproc's name before calling out to the sink on the linked server. Minimal code.

    Yes, you now have two sprocs instead of one, and they need to be installed on all servers that can call other servers, but it just seems more solid.

    Would this be a good practice? Has anyone dealt with distributed transactions over linked servers and can say what the best practice is?

    As I read this, the caller doesn't know what server to call until run time, so it has to build the command dynamically. Unless you hard-code all possible choices when you recode it, you will still be required to use some dynamic SQL. But this is certainly possible.

    One point I wanted to make is that any time you issue a statement that changes data through the linked server (I think even in cases where local data is NOT changed) it will be a distributed transaction. You should also keep in mind that the DB engine will instantiate a distributed transaction if it feels the need to, or escalate an existing transaction into a distributed transaction as needed.

    As far as simply calling a sproc on the remote end, I don't think that by itself will create a distributed transaction, but that is simple to test.
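
    If you want to test it from T-SQL, something along these lines should show it - run it from the calling server and check the transaction DMVs while the transaction is still open (the linked server and sproc names are just placeholders):

    -- Placeholder names; point these at your own linked server and sproc.
    BEGIN TRANSACTION;

    -- A remote call inside a local transaction; if the engine decides it needs to,
    -- it will promote this to a distributed transaction under MSDTC.
    EXEC [LinkedSrv].[TargetDb].[dbo].[usp_MoveData_Sink] @SourceId = 42;

    -- While the transaction is still open, check whether it is still local.
    -- is_local = 0 (or a non-NULL transaction_uow) means it was promoted to a DT.
    SELECT st.is_local, at.transaction_uow
    FROM   sys.dm_tran_session_transactions AS st
    JOIN   sys.dm_tran_active_transactions  AS at
           ON at.transaction_id = st.transaction_id
    WHERE  st.session_id = @@SPID;

    COMMIT TRANSACTION;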

    To be honest, I try REALLY hard not to use linked servers at all. Unless you need this to be easily called by a SQL process, I would probably wrap it up into an SSIS package and run it that way.

    CEWII

  • Elliott,

    Thanks for the reply!

    The caller sproc receives the name of the linked server plus some parameters to pass on to the sink, and the only concatenation done is building the sproc name:

    SET @sprocToCall = @linkedServerName + N'.[db].[owner].[sproc_name]'

    EXEC @sprocToCall @param1, @param2  -- parameters go to EXEC separately rather than being concatenated into the string

    That's it. The called sproc (the sink), which used to have all the dynamic SQL, would no longer have any - one of the main objectives of even considering this route.

    Thanks for the info on how SQL Server handles distributed transactions. I'm not that knowledgeable on this, which is why I'm asking. I tried out my idea and it executed fine, though I don't know how to check whether I've avoided any DTs or not.

    Anyway, I totally agree on NOT using linked servers, so with that, are there other choices? The basic operation here is to move some data from one database to another. It is user initiated, as only selected data would be moved. The way the system works now, a sproc is fed some Ids and the linked server name, and dynamic SQL runs the operations under a transaction.

    Obviously one option is to have the client handle loading the data and then saving it to another database. This involves a major refactor but could be done.

    If an SSIS package can do this more efficiently, then maybe that is an option I'll investigate.

    Thank you for taking some time to help me out.

    Steve

    Steve Kumbsky

  • Steve Kumbsky (1/26/2012)


    I tried out my idea and it executed fine, though I don't know how to check whether I've avoided any DTs or not.

    In Administrative Tools you'll find Component Services; open that. In Windows 7 I have three items on the list when I do, the first of which is Component Services; open that too. Expand Computers, then My Computer, then Distributed Transaction Coordinator, then Local DTC. You should now be at the bottom of the chain, with two items: Transaction List and Transaction Statistics. The Transaction List will show you any transaction in process RIGHT NOW. Transaction Statistics gives you counts accumulated since the last restart of the DTC. If you are watching that window when your process starts, you should see the Active count increment by 1; when it completes, the Committed or Aborted count will likely be incremented. If you know your process is doing something and you don't see any of that, it is probably NOT starting a DT.

    Anyway, I totally agree on NOT using linked servers, so with that, are there other choices? The basic operation here is to move some data from one database to another. It is user initiated, as only selected data would be moved. The way the system works now, a sproc is fed some Ids and the linked server name, and dynamic SQL runs the operations under a transaction.

    Obviously one option is to have the client handle loading the data and then saving it to another database. This involves a major refactor but could be done.

    If an SSIS package can do this more efficiently, then maybe that is an option I'll investigate.

    What is the allowable latency in this process?

    You could certainly write the ids and other needed metadata into a table and then call an SSIS package, which connects to the right place(s) and transfers the data needed. You also might consider Service Broker if it isn't a lot of data. I don't think I would trust the client app to handle the data movement; it "feels" too far from the data. I can't empirically say why, but I don't like that option.
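
    Roughly what I had in mind - write the ids to a work table and then kick off an Agent job that runs the package (the table, sproc, and job names are made up):

    -- Made-up names; the SSIS package would read its work from this table.
    CREATE TABLE dbo.DataMoveRequest
    (
        RequestId        INT IDENTITY(1,1) PRIMARY KEY,
        SourceId         INT      NOT NULL,
        TargetServerName SYSNAME  NOT NULL,
        RequestedAt      DATETIME NOT NULL DEFAULT (GETDATE()),
        ProcessedAt      DATETIME NULL
    );
    GO

    CREATE PROCEDURE dbo.usp_QueueDataMove
        @SourceId         INT,
        @TargetServerName SYSNAME
    AS
    BEGIN
        SET NOCOUNT ON;

        INSERT INTO dbo.DataMoveRequest (SourceId, TargetServerName)
        VALUES (@SourceId, @TargetServerName);

        -- Start the Agent job that runs the SSIS package; sp_start_job returns
        -- as soon as the job is started, it does not wait for the package to finish.
        EXEC msdb.dbo.sp_start_job @job_name = N'MoveSelectedData';
    END;
    GO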

    CEWII
