Hundreds of unique tables

  • First off, let me say I am relatively new to SQL Server, so I am a Microsoft SQL Server idiot and it WILL show in my questions.

    That being said:

    I have been given the task of pulling over 200 unique lists (customer files from different companies) into the DB and putting up a web UI that allows the data processors here to combine and extract names from any combination of these (what are now) tables.

    On top of that they want to be able to omit past requests from some of the queries they will be pulling.

    I am dealing with Standard Edition 2005.

    Thus far I have created a tool that formats each unique list into the same 20 standard columns, BUT each list also has to keep some unique columns/data of its own.

    I have created a reference file that standardizes the names of these tables for ease of working with them, and I have pulled each table into the DB.

    I have no idea how to even approach the problem of the processors being able to omit past requests.

    Any ideas for me here would be VERY appreciated.

  • If you put an identity column or primary key on your search table, you could create another table that you write that primary key / identity value to when records get accessed (based on user or whatever criteria they give you).

    Then left join to that to return only unaccessed records:

    SELECT ST.*
    FROM MySearchTable ST
    LEFT JOIN ExcludeTheseKeys EX
        ON ST.STKey = EX.STKey
        AND EX.PossiblyMoreCriteria
    WHERE EX.EXKey IS NULL

    If your needs are simpler, you could instead add a new column to the search table (a PreviouslyAccessed bit or something) and update it when a record gets reviewed.

    A few more details on the requirements might lend to a better solution.
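    A minimal sketch of that exclusion-table setup. All names here (ExcludeTheseKeys, MySearchTable, STKey, AccessedBy) are hypothetical placeholders, not anything from the poster's schema:

    ```sql
    -- Sketch only; table and column names are illustrative placeholders.
    CREATE TABLE ExcludeTheseKeys (
        STKey      INT          NOT NULL,  -- key of the accessed search row
        AccessedBy VARCHAR(50)  NOT NULL,  -- user or other criteria, as needed
        AccessedOn DATETIME     NOT NULL DEFAULT GETDATE()
    );

    -- Whenever rows are handed out, record their keys so the
    -- LEFT JOIN above filters them from future pulls:
    INSERT INTO ExcludeTheseKeys (STKey, AccessedBy)
    SELECT STKey, 'some_user'
    FROM MySearchTable
    WHERE STKey IN (12345, 12346);  -- whatever criteria selected the rows
    ```

    The LEFT JOIN / IS NULL pattern then returns only rows with no matching entry in the exclusion table.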

  • The problem I see with that (and maybe I just don't understand) is that we will process about 50 requests per month on these tables. Each request needs to be unique, as only that certain request will need to be omitted.

    For instance:

    I will need to pull data from tables 1, 2, and 3 and omit past orders 20356 and 14589.

    It seems a unique identifier (either in a table or a column) would cause redundant writes and thus throw off the accuracy of my omits.

    Am I wrong?

  • Are you talking about something like this?

    SELECT new.*
    INTO #x
    FROM (
        SELECT orderid, * FROM table1
        UNION
        SELECT orderid, * FROM table2
        UNION
        SELECT orderid, * FROM table3
    ) new
    LEFT OUTER JOIN processed
        ON processed.orderid = new.orderid
    WHERE processed.orderid IS NULL

    INSERT INTO processed (orderid)
    SELECT orderid FROM #x

    SELECT * FROM #x

    So you select all the orders that are not in the processed table, then add the records you just selected to the processed table, and finally return the items to process...
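    To address the per-request concern above (omitting only certain past requests rather than everything ever pulled), one variation is to key the processed table by request as well. This is a sketch under assumed names; the table and columns are illustrative, and it treats 20356 and 14589 as past request IDs:

    ```sql
    -- Sketch only; names are illustrative, not the poster's schema.
    CREATE TABLE processed (
        requestid INT NOT NULL,  -- which extraction request pulled this order
        orderid   INT NOT NULL,
        PRIMARY KEY (requestid, orderid)
    );

    -- Pull orders, omitting only those used by specific past requests:
    SELECT new.*
    FROM (
        SELECT orderid, name FROM table1
        UNION
        SELECT orderid, name FROM table2
        UNION
        SELECT orderid, name FROM table3
    ) new
    LEFT OUTER JOIN processed p
        ON p.orderid = new.orderid
        AND p.requestid IN (20356, 14589)  -- only these requests are omitted
    WHERE p.orderid IS NULL;
    ```

    Because the join is restricted to the chosen request IDs, other past requests do not affect the result, which avoids the redundant-write worry raised above.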
