Improving performance when moving data

  • Hi

    I need some advice on a tricky situation. I am extracting a hell of a lot of data into a new database I created for a conversion. The scripts move data from one server to a new database on another server. I cannot create temp tables on the source database, and I can't move each table on its own because each script involves a number of joins.

    Because the full data dump takes so long, I first populate an initial table that holds the policy number and the version number of each policy (short-term insurance). I then run all the detailed data dumps, using this "temp table" on the new database and joining it back to the other tables on the original database. That way every extract is based on the same version of each policy, even though the full dump runs for 3 to 4 hours and users are constantly making changes on the source database. I only run these dumps once a week until the final data conversion takes place. A rough sketch of the pattern follows at the end of this post.

    The situation is not ideal, but I have to make do with what I have. Any suggestions? Would indexing help at all?
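
    Roughly, this is the pattern I am describing. The table, column, and linked-server names below are made up for illustration; SOURCESRV stands for a hypothetical linked server pointing at the source database.

    -- Step 1: snapshot the policy keys and their current version numbers
    -- into a staging table on the new (destination) database.
    CREATE TABLE dbo.PolicyKeys
    (
        PolicyNumber  VARCHAR(20) NOT NULL,
        VersionNumber INT         NOT NULL,
        CONSTRAINT PK_PolicyKeys PRIMARY KEY (PolicyNumber, VersionNumber)
    );

    INSERT INTO dbo.PolicyKeys (PolicyNumber, VersionNumber)
    SELECT p.PolicyNumber, p.VersionNumber
    FROM SOURCESRV.SourceDb.dbo.Policy AS p;   -- four-part linked-server name

    -- Step 2: each detailed dump joins back to the snapshot, so every extract
    -- uses the same policy versions even if the whole run takes hours.
    SELECT c.PolicyNumber, c.VersionNumber, c.CoverCode, c.SumInsured
    INTO dbo.PolicyCoverDetail                 -- created on the destination
    FROM SOURCESRV.SourceDb.dbo.PolicyCover AS c
    INNER JOIN dbo.PolicyKeys AS k
        ON  k.PolicyNumber  = c.PolicyNumber
        AND k.VersionNumber = c.VersionNumber;

    If indexing helps anywhere, it is most likely on the join columns of the staging table, which the primary key above already covers.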

  • I do not fully understand your situation.

    Are you trying to improve the performance of this data backup/copy?

    Could you do a bcp out and a bcp in of the data from the source to the destination? (A sketch follows at the end of this post.)

    Or maybe you could set up replication or mirroring.

    These are just some thoughts I am laying out for you, since I do not understand the full context of what you are trying to do.
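
    For example, something along these lines (the server, database, and file names are only placeholders):

    rem Native-format export of a table from the source server, using a trusted connection
    bcp SourceDb.dbo.Policy out Policy.dat -S SourceServer -T -n

    rem Or export the result of a joined query instead of a whole table
    bcp "SELECT p.PolicyNumber, p.VersionNumber FROM SourceDb.dbo.Policy AS p" queryout PolicyKeys.dat -S SourceServer -T -n

    rem Load the exported file into the conversion database on the new server
    bcp ConversionDb.dbo.Policy in Policy.dat -S NewServer -T -n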
