Application Constantly Dropping and Re-Creating Tables

  • I have a general question that I haven't found any information about on the internet: constantly dropping and re-creating non-TEMPDB tables.

    We use table Extended Properties to keep track of changes made to "lookup/reference" tables. We have a central database that contains the "master" tables. When a user starts one of our apps, the application is supposed to compare the "master" database against the local database for changes, using the Extended Properties. However, we recently discovered that the methodology used by the developers was not working correctly, so changes to various lookup tables were not being processed.
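    For context, a minimal sketch of that kind of Extended Property bookkeeping (the property name "LastSyncVersion" and table name "MyLookup" are made up for illustration):

    -- Stamp a version onto the master copy of a lookup table:
    EXEC sp_addextendedproperty
        @name = N'LastSyncVersion', @value = N'42',
        @level0type = N'SCHEMA', @level0name = N'dbo',
        @level1type = N'TABLE', @level1name = N'MyLookup';

    -- At startup, read the value on each side and compare:
    SELECT CONVERT(VARCHAR(50), value)
    FROM sys.extended_properties
    WHERE class = 1                              -- object-level property
        AND major_id = OBJECT_ID(N'dbo.MyLookup')
        AND minor_id = 0                         -- the table itself, not a column
        AND name = N'LastSyncVersion';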

    The new proposal is to set a "flag" for the tables that need to be synchronized. If the flag is set to 1, the local table would be dropped and re-created each time the application is run.
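    In rough terms (the control table and column names below are hypothetical), the startup check would be something like:

    -- Hypothetical control table: SyncControl(TableName sysname, ForceRefresh bit)
    IF EXISTS (SELECT * FROM MasterDB.dbo.SyncControl
               WHERE TableName = N'MyLookup' AND ForceRefresh = 1)
    BEGIN
        DROP TABLE dbo.MyLookup;
        -- Re-create and re-populate from the master copy:
        SELECT * INTO dbo.MyLookup FROM MasterDB.dbo.MyLookup;
    END;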

    FYI, in general, the application(s) are only run once a week, but there are cases where multiple copies of an application can exist on one SQL Server instance, all sharing the same lookup database/tables.

    I am not looking for suggestions on the methodology used to sync the tables, but I would like to know if anyone has thoughts on having the same set of tables dropped and re-created on a weekly basis: the impact on the log file, possible fragmentation, etc. The tables themselves are small (typically 2-5 columns and anywhere from 5-100 records each), and only 3-4 tables would be affected.

    Thanks! 🙂

    Argue for your limitations, and sure enough they're yours (Richard Bach, Illusions)

  • The only issue I've found with this is that the object_id of the tables will change (the same as if you DROP a proc rather than ALTER it).

    If you rely on object_id for any reason (such as code comparison using OBJECT_DEFINITION(id)), then you might hit a speed bump.
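    A quick way to see it (the demo table name is arbitrary):

    CREATE TABLE dbo.DemoLookup (Id INT);
    SELECT OBJECT_ID(N'dbo.DemoLookup');  -- note the value

    DROP TABLE dbo.DemoLookup;
    CREATE TABLE dbo.DemoLookup (Id INT);
    SELECT OBJECT_ID(N'dbo.DemoLookup');  -- different value; anything keyed on the old id now misses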

    MVDBA

  • ...the impact on the log file, possible fragmentation, etc. The tables themselves are small (typically 2-5 columns and anywhere from 5 - 100 records each) and we would be looking at only 3-4 tables that would be affected.

    Unless those columns are filled with massive nvarchar(max)/xml/BLOB data, what you describe is less than what the average tempdb goes through every second.

    Dropping five tables that collectively consume less than 50 data pages, creating five tables with a couple indexes each, then writing a handful of rows to each of them really isn't much work.

    Creating a table writes about 20 transaction log records, plus 10 more for each index. For each row inserted, write one t-log record. If the page fills, write 10-20 more records depending on index count. Dropping the table deletes the metadata written during creation (20 log records) plus deallocation (a few more log records per allocated page). Other than the transaction log writes, all the data I/O is deferred unless you are running in the bulk-logged recovery model. This is really just a blip on the radar.
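    If you want to watch the counts yourself, the undocumented fn_dblog function will show them. A rough sketch (run it in a scratch database in SIMPLE recovery, since any other activity inflates the numbers):

    CHECKPOINT;  -- lets the log clear so the before/after delta is readable
    SELECT COUNT(*) AS LogRecsBefore FROM fn_dblog(NULL, NULL);

    CREATE TABLE dbo.LogTest (Id INT PRIMARY KEY, Val VARCHAR(50));
    INSERT INTO dbo.LogTest (Id, Val) VALUES (1, 'a');
    INSERT INTO dbo.LogTest (Id, Val) VALUES (2, 'b');
    DROP TABLE dbo.LogTest;

    SELECT COUNT(*) AS LogRecsAfter FROM fn_dblog(NULL, NULL);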

    If the rows for a table fit on a single 8KiB data page, fragmentation isn't even possible. If those tables are filled with BLOB data, then dropping the tables and rebuilding them instead of deleting the rows and refilling them is better for controlling BLOB fragmentation, because SQL2005 does a messy job of cleaning up after BLOBs.
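    To confirm how small the tables actually are, sys.dm_db_index_physical_stats reports page counts and fragmentation (the table name here is a placeholder); an index whose page_count is 1 cannot be fragmented:

    SELECT OBJECT_NAME(object_id) AS TableName,
           index_id, page_count, avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
           DB_ID(), OBJECT_ID(N'dbo.MyLookup'), NULL, NULL, 'LIMITED');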

    Have fun with the sync work.

    -Eddie

    Eddie Wuerch
    MCM: SQL

  • By dropping the tables each time, you could break something dependency-related.

    For example, suppose you had to grant permission on a table for whatever reason. When you drop the table, that permission will be dropped too, and it will not be re-created when you re-create the table.
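    An illustration (the table and role names are made up):

    CREATE TABLE dbo.MyLookup (Id INT PRIMARY KEY, Descr VARCHAR(50));
    GRANT SELECT ON dbo.MyLookup TO SomeAppRole;

    DROP TABLE dbo.MyLookup;
    CREATE TABLE dbo.MyLookup (Id INT PRIMARY KEY, Descr VARCHAR(50));

    -- The GRANT did not survive the drop; this now returns no rows:
    SELECT permission_name, state_desc
    FROM sys.database_permissions
    WHERE class = 1 AND major_id = OBJECT_ID(N'dbo.MyLookup');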



    Alvin Ramard
    Memphis PASS Chapter

    All my SSC forum answers come with a money back guarantee. If you didn't like the answer then I'll gladly refund what you paid for it.

    For best practices on asking questions, please read the following article: Forum Etiquette: How to post data/code on a forum to get the best help

  • Thanks for all of the inputs so far.

    FYI, since this is a somewhat ambiguous question...

    • Almost all of the tables have INT/VARCHAR datatypes. Definitely no BLOB, text, ntext, image, etc. datatypes.

    • In any case where we might have to deal with OBJECT_ID, we convert to OBJECT_NAME (or otherwise go by object name) to avoid any issues, e.g.:

    IF NOT EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[MyTable]') AND .....)

    IF EXISTS (SELECT * FROM sys.objects WHERE name = 'MyTable' AND ...)

    IF NOT EXISTS (SELECT * FROM sys.sql_modules WHERE definition LIKE '%(18)%change table joins. ME 7/9/15.%' AND object_id = OBJECT_ID('usp_MyStoredProcedure'))

    • Permissions are at the database level, not at the table level for our applications.

    I do agree the overall impact on the log file(s) should be minimal, but something about a table being dropped and re-created automatically when there is nothing wrong with it just rubs me the wrong way, and not merely because it deviates from the "standard". But I also imagine a lot of the users do *not* take any backups on their SQL Server [Express] instances, and we currently do not provide that capability.

    Argue for your limitations, and sure enough they're yours (Richard Bach, Illusions)
