Transactional Replication over WAN

  • Hi,

    I have a SQL2K server that needs to replicate about 3 million transactions daily to a remote reporting server (SQL 2005). The publisher is only updated once a day: an overnight ETL process updates the local server every night with the daily transactions from a mainframe OLTP system, and we then need to apply those changes to the remote server.

    Currently there is a slow WAN connection that will need to be upgraded. Before we upgrade the line, I'm wondering if 3 million transactions in one shot is even doable over a WAN. Is compression an option I can configure within SQL or do I need a separate routine to do this? Also, is it better if the distributor is on the local or remote server? It must be a pull subscription. The client will not allow us to push the data to their network.
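    As a back-of-envelope feasibility check (a sketch with illustrative numbers, not measurements from this setup), transfer time is roughly file size over effective link speed, discounted for protocol overhead:

```python
def wan_transfer_hours(size_gb, link_mbps, efficiency=0.8):
    """Rough estimate of hours to move a file over a WAN link.

    efficiency is an assumed discount for protocol overhead and
    contention; all figures here are illustrative, not measured.
    """
    bits = size_gb * 1e9 * 8                     # file size in bits
    effective_bps = link_mbps * 1e6 * efficiency # usable throughput
    return bits / effective_bps / 3600

# e.g. a 1.5 GB change file over a 1.544 Mbps T1 works out to
# roughly 2.7 hours; a 10 Mbps link brings that under half an hour.
```

    Numbers like these make it easy to decide how much line upgrade is needed before committing to a nightly window.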

    Thanks in advance,

    Dave

  • The Distributor should be as close as possible to the Publisher.

    As for 3 million transactions - consider testing to measure latency. The most efficient way with replication is to let a snapshot deliver that many rows, since the snapshot is applied via bcp, which breaks the load into batches; otherwise 3 million individual inserts must travel over the wire. You can adjust the snapshot batch sizes to a level appropriate for your WAN link.

    You may also want to consider another tool - DTS/SSIS/FTP.

    ChrisB MCDBA

    MSSQLConsulting.com

    Chris Becker bcsdata.net

  • Chris Becker (11/29/2007)


    Chris,

    Thanks for the reply. It's been awhile since I checked this forum. I just wanted to give an update and see if anyone has other suggestions.

    I've been doing some testing on this. The uncompressed transaction file is typically about 1.5 GB per day, which takes too long over the current WAN link. We cannot do a daily snapshot, BCP, or DTS load: our client says it's important that they receive only the incremental changes, and for whatever reason they cannot have the tables truncated and reloaded every night as part of this process.

    We've been experimenting with Red Gate's Data Compare to create the transaction file. The mainframe updates one database at night, and we run a compare against a copy of the previous night's database. Last night the change script was 1.65 GB. We can compress it and FTP it to the client. The problem is that isql cannot commit a script file this big, so we need to find another method. Red Gate itself can perform the synchronization against a local server quite quickly, but it's a different story when you use Red Gate to create the script and another tool to apply it.

    Thanks,

    Dave

