Peer-to-peer replication questions

  • Hi everyone,

    We are in the early stages of designing a solution for the following setup:

    1. We have 3 servers, one in-house, two in the cloud.

    2. We need to replicate some tables from the in-house server to the cloud servers.

    3. All servers will have to be able to read/write to the tables and share the changes with each other.

    Based on the requirements, it looks like peer-to-peer replication is the solution for us. I have some questions, though, as I don't have any experience with replication:

    1. Is it possible to replicate some objects instead of the whole database in peer-to-peer replication?

    2. Considering that there will be 3 servers that will all have to share changes with each other, would you consider this a difficult design to set up and maintain?

    3. How good is the SQL 2012 conflict detection? I'm worried about a lot of conflicts and problems as a result of that.

    Thanks in advance!

  • Hi shahgols, 😀

    I'm not an expert on replication, but I'll try to give you an answer:

    1) Yes, you can replicate a subset of objects; you choose which tables (articles) go into the publication (see the sketch below).

    2) Not sure how to respond to this, as I never had to set up this kind of scenario.

    3) I think you need to set up merge replication between your servers, instead of peer-to-peer replication, in order to resolve possible conflicts.
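
    For the first point, a minimal sketch of adding a single table to a transactional publication (the publication and table names here are placeholders, and the publication is assumed to already exist):

        -- Run at the publisher, in the published database.
        -- Only the tables you add as articles are replicated;
        -- everything else in the database is left alone.
        EXEC sp_addarticle
            @publication   = N'P2P_Pub',   -- hypothetical publication name
            @article       = N'Orders',
            @source_owner  = N'dbo',
            @source_object = N'Orders';    -- hypothetical table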

    Hope it helps. 😎

    Jonathan Bernardez Bernardez
    ___________________________________________________________
    DBD. MCSA SQL Server 2012

  • Thank you Jonathan!

    Would love to hear back from someone who has experience with peer-to-peer. Thanks.

    We currently use P2P replication between two AWS availability zones. We also use normal transactional replication to scale out within each AZ to provide enough power to meet our read workload. (Think of each P2P node as the hub of a wheel and P2P replication as the axle joining the wheels; the spokes on each wheel are our scale-out replication.)
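
    For context, a P2P publication is just a transactional publication with the peer-to-peer option switched on. A minimal sketch of creating one (the database and publication names are placeholders, not our real setup):

        -- Run at each node, in the database to be published.
        EXEC sp_replicationdboption
            @dbname  = N'SalesDB',         -- hypothetical database name
            @optname = N'publish',
            @value   = N'true';

        EXEC sp_addpublication
            @publication                  = N'P2P_Pub', -- hypothetical name
            @enabled_for_p2p              = N'true',    -- makes it peer-to-peer
            @p2p_conflictdetection        = N'true',    -- detect conflicting changes
            @p2p_continue_onconflict      = N'false',   -- stop rather than lose data
            @allow_initialize_from_backup = N'true',    -- P2P nodes initialise from backup
            @status                       = N'active';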

    Last year we migrated from traditional data centres to AWS and had P2P running between all locations. Our old setup used log shipping to provide a DR capability, so all our applications wrote to a single server, and we carried this concept forward when we changed to P2P.

    One of the biggest problems with any type of P2P or merge replication is dealing with conflicts. There are two main approaches to resolving this:

    a) Design the possibility of conflicts happening out of the system

    b) Write a load of custom code to deal with the conflicts you expect, and hope that nothing unexpected happens

    The simplest way to design conflicts out of the system is to direct all writes to a single node within P2P. We use a DNS vanity name in all our connection strings to identify which P2P node should be written to. This allows us to change the 'master write' server very quickly, and we now have a policy of doing this every 3 months as part of our DR testing.

    You can reserve identity column ranges for each P2P node, which is another way to avoid conflicts. Dedicated identity ranges also let you send writes to any P2P node. However, this approach requires skillful management of the ranges to keep them from overlapping.
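
    As a rough example (the table name and range sizes are invented), you can reseed each node's copy of a table so that each node hands out values from its own band:

        -- Give each node a band of a billion values: node 1 starts at 1,
        -- node 2 at 1,000,000,000, node 3 at 2,000,000,000.
        -- Run once on each node against its local copy of the table.
        DBCC CHECKIDENT (N'dbo.Orders', RESEED, 1000000000);   -- on node 2

        -- NORESEED just reports the current identity value, so you can
        -- monitor how close each node is getting to the edge of its band.
        DBCC CHECKIDENT (N'dbo.Orders', NORESEED);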

    A third approach is to always use a composite key as the PK of your tables, with one column holding a value unique to each P2P node. This avoids identity range management, but the applications need to be designed to cope with the composite key.
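
    As an illustration (the table and column names are invented), the node identifier becomes part of the key, so two nodes can never generate the same PK value:

        CREATE TABLE dbo.Orders (
            NodeId    tinyint   NOT NULL,   -- fixed per P2P node, e.g. 1, 2 or 3
            -- NOT FOR REPLICATION lets replicated inserts keep their original
            -- OrderId values without bumping the local identity seed.
            OrderId   int       IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
            OrderDate datetime2 NOT NULL,
            CONSTRAINT PK_Orders PRIMARY KEY (NodeId, OrderId)
        );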

    You need to assess the risks of each approach to see what is best for you. For us the first option came out as lowest risk and lowest complexity.

    You also need to decide how you will do schema changes. If you always initiate a schema change from the same P2P node, you do not have to worry about quiescing the system between schema changes. If you fire off schema changes from different nodes, you must make sure each change has propagated to all nodes before initiating the next change from another node.
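
    If you want to verify that the topology is quiesced before the next change, there is a built-in request/response mechanism you can script; a sketch, reusing the hypothetical publication name from above:

        -- Run at the node that just made the schema change,
        -- in the publication database:
        DECLARE @id int;
        EXEC sp_requestpeerresponse
            @publication = N'P2P_Pub',     -- hypothetical publication name
            @description = N'quiesce check before next DDL',
            @request_id  = @id OUTPUT;

        -- Once the distribution agents have run, list the peers that replied;
        -- when every node has responded, the earlier changes have been delivered.
        EXEC sp_helppeerresponses @id;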

    Conflict resolution within P2P is simple: either fail the replication attempt when a conflict occurs, or log the conflict and let the change from the node with the highest originator ID win. You can then look at the conflict log tables and decide what you want to do with the values that were lost.
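
    The detected conflicts end up in per-table conflict tables in the publication database, and there is a system procedure that reports the detection settings; a sketch, again with the publication and table names assumed:

        -- Show the conflict detection settings and status for the publication:
        EXEC sp_help_peerconflictdetection @publication = N'P2P_Pub';

        -- The losing row versions are kept in tables named
        -- conflict_<publication>_<table>, which you can query directly:
        SELECT * FROM dbo.conflict_P2P_Pub_Orders;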

    With merge replication you can write your own logic to deal with conflicts. This obviously gives more options, but it cannot prevent every situation where data loss can occur.

    My advice is to design conflicts out of the system, rather than expecting conflicts and hoping you can deal with them.

    Finally, our experience in running P2P is that it is just as reliable as normal SQL transactional replication. In other words, it normally works very well, but one or two times a year something unexpected happens that takes a lot of work and skill to fix.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • Wow, thank you very much Ed, very helpful!
