How do you set up transactional replication with more than 500 tables in a database?

  • Which is best: one big publication with all the tables,

    a few publications with many tables each (75-125, for example), or

    many small publications with a few tables each (5-15, for example)?

    Thanks.

    Luis.

  • Hi Luis:

    I found some ideas relating to your question in an article titled "Snapshot Replication for the Beginner". The author suggests breaking the larger tables out into separate publications.

    Bill

  • That's true. Think about a situation where your subscribers get out of sync: you don't need to generate a snapshot for all the tables when you know exactly which publications contain the affected articles. Consider grouping them by functionality. Finally, this approach improves performance.
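    If only one functional group falls out of sync, you can then regenerate the snapshot for just that publication. On versions that support sp_startpublication_snapshot it is a single call (the publication name below is made up):

        -- Regenerate the snapshot for one publication only (hypothetical name)
        EXEC sp_startpublication_snapshot
            @publication = N'OrdersPub';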

  • OK, Chrislis.

    I'll do some tests, grouping by:

    1: size of tables

    2: functionality

    3: updates

    or a mix of 1, 2 and 3.

    Thanks, everybody.

    I'll write back when everything is working OK.

  • A couple of tips for this many tables:

    - Don't let your snapshot agent initialize your subscribers. It will cause table locks and performance issues. In tests on our 50 GB, 400-table database, the snapshot agent took 8 hours to run over a gigabit backbone. Try backup/restore instead (see the sketch after these tips).

    - Script your publication. You don't want to have to rebuild it with the GUI; that is way too time-consuming...
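    A minimal sketch of the backup/restore approach, assuming a version of SQL Server where sp_addsubscription supports @sync_type = N'initialize with backup' (on older versions you would restore the backup yourself and add the subscription with @sync_type = N'none'). The publication, subscriber, and path names are made up:

        -- On the publisher: allow subscriptions to be initialized from a backup
        EXEC sp_changepublication
            @publication = N'MyBigPub',                       -- hypothetical publication name
            @property    = N'allow_initialize_from_backup',
            @value       = N'true';

        -- Back up the published database, restore it on the subscriber,
        -- then create the subscription pointing at that backup
        EXEC sp_addsubscription
            @publication      = N'MyBigPub',
            @subscriber       = N'SUBSCRIBER01',              -- hypothetical subscriber server
            @destination_db   = N'MyDB',
            @sync_type        = N'initialize with backup',
            @backupdevicetype = N'disk',
            @backupdevicename = N'\\fileserver\backups\MyDB.bak';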

    Best of Luck,

    Chris Becker MCDBA OCP MyDatabaseAdmin.com

    Chris Becker bcsdata.net

  • Ok Chris.

    I've always imported the subscriber tables myself, so the initial snapshot performance

    with transactional replication is optimal.

    After that, only the updates are sent between the servers.

    Sorry, but what does "script the publication" mean?

    Is it about rewriting the stored procedures' code to tune them?

    Thanks.

    Luis.

  • Right-click on the publication and choose "Generate SQL Script".
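    The script it produces is just T-SQL calls to the replication stored procedures, so you can keep it in source control and re-run it instead of rebuilding through the wizard. It has roughly this shape (the database, publication, and table names here are illustrative, and the exact options the wizard emits vary by version):

        -- Enable the database for publishing
        EXEC sp_replicationdboption
            @dbname  = N'MyDB',
            @optname = N'publish',
            @value   = N'true';

        -- Create the transactional publication and its snapshot agent
        EXEC sp_addpublication
            @publication = N'MyBigPub',
            @repl_freq   = N'continuous',
            @status      = N'active';

        EXEC sp_addpublication_snapshot
            @publication = N'MyBigPub';

        -- One sp_addarticle call per table; with 500 tables there will be a lot of these
        EXEC sp_addarticle
            @publication   = N'MyBigPub',
            @article       = N'Customers',
            @source_owner  = N'dbo',
            @source_object = N'Customers';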

    Chris Becker bcsdata.net

  • By the way, how big is your database (MB or GB)? How long does it take to deliver the snapshot?

    Chris

    Chris Becker bcsdata.net

  • Ok Chris.

    I'll do the tests with all this information next week.

    I've spoken with the person responsible for the project and

    we have decided to make 8 groups classified by functionality.

    The groups have different numbers of tables, from 200 down to 20 tables each.

    A last group includes the rest of the tables (approx. 100).

    I'll report on the tests in the coming weeks.

    I have other database administration work to do first.

    (I'd like to do more database administration work, but ...)

    Thanks.

  • About 160 MB without data.

    With data it can reach 1 GB at most, but I can't say for sure;

    I have to visit the site and study all the databases.

    The idea is to replicate a database to a server, but there are

    many sites with that database, and the real idea is to have many

    replications (one to one) from those databases to the same server.

    Some databases can have more data and others less; it depends on the workload.

    OK?

  • Well, that explains the snapshot performance: many tables, not much data.

    I would put all 500 tables in 1 publication to eliminate administrative overhead.
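    If you do put everything in one publication, you don't have to add 500 articles by hand in the GUI. A rough sketch, assuming a made-up publication name 'AllTablesPub' (and remember that transactional replication needs a primary key on every published table):

        -- Add every user table in the database as an article of one big publication
        DECLARE @schema sysname, @table sysname;

        DECLARE tbl_cursor CURSOR FOR
            SELECT TABLE_SCHEMA, TABLE_NAME
            FROM INFORMATION_SCHEMA.TABLES
            WHERE TABLE_TYPE = 'BASE TABLE';

        OPEN tbl_cursor;
        FETCH NEXT FROM tbl_cursor INTO @schema, @table;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC sp_addarticle
                @publication   = N'AllTablesPub',   -- hypothetical name
                @article       = @table,
                @source_owner  = @schema,
                @source_object = @table;

            FETCH NEXT FROM tbl_cursor INTO @schema, @table;
        END

        CLOSE tbl_cursor;
        DEALLOCATE tbl_cursor;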

    You will find that all the publications use the same log reader agent, plus a distribution agent for each subscriber. I don't see the benefit of multiple publications... The only difference is that the snapshots can be broken up?? And how often are snapshots run in transactional replication anyway...? (Not much.)
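    You can check that yourself in the distribution database (assuming it is named 'distribution'); something like:

        USE distribution;

        -- One log reader agent per published database
        SELECT name, publisher_db
        FROM dbo.MSlogreader_agents;

        -- Distribution agents, typically one per subscriber database
        SELECT name, publisher_db, publication, subscriber_db
        FROM dbo.MSdistribution_agents;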

    Best of Luck,

    Chris

    Chris Becker bcsdata.net

  • Well, I'll do 2 tests:

    1: with 9 publications (as I explained in an earlier post).

    If there is a problem I can identify where it is, and it does not affect all

    the published tables.

    2: with a single big publication containing all the tables.

    That eliminates the administrative overhead, but I must test whether it works

    and whether performance is good.

    Luis.
