Max partitions in a single table

  • What is the max number of partitions that a single table can have?

  • I found an article that mentions I can have 1,000 partitions in a table.

    I have a range of records that I need to partition. My range is from 360000000 to 380000000, so I could have 1,000 partitions of 20,000 records each...

    Any advice?

  • Yes, 1,000 is the maximum number of partitions you can have on a table or index. One thing to consider in partitioning is how many rows you want to end up in each partition. At some point, you'll want to do some predictive analysis on your data to forecast growth and row counts per partition so that you end up with the optimal number of rows per partition. 20,000 would perform great, but you'd have to balance that against managing 1,000 partitions. Continuing from our conversation in your other thread, this would be a case where you'd want to manage your filegroups a bit differently so as to avoid dealing with 1,000 files.
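    To see how rows actually land per partition, a quick sanity check against the catalog views can help (a sketch; `dbo.MyTable` is a placeholder for your table name, not from this thread):

```sql
-- Row counts per partition for a table's heap or clustered index.
-- dbo.MyTable is a placeholder; index_id 0 = heap, 1 = clustered index.
SELECT p.partition_number,
       p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.MyTable')
  AND p.index_id IN (0, 1)
ORDER BY p.partition_number;
```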

    John Rowan

    ======================================================
    ======================================================
    Forum Etiquette: How to post data/code on a forum to get the best help - by Jeff Moden

  • I thought so. I don't want to deal with 1,000 files.

    What do you think about this?

    The range of IDs is big, and right now the records only fall within one of the partition ranges. These ranges will grow slowly over the years. This RangeID will be part of my queries in many of the reports that I have. My clustered index has 2 fields (RangeID, BoxID); BoxID is part of the query join as well, and it has a smaller range of values.

    I mean, one RangeID can have one or many BoxIDs. Now, I'm not sure whether the partition will be better on RangeID or BoxID. BoxID is a FK in this table and part of the clustered index as well. I have 102 million records in this table and it's growing.

    I appreciate the advice.

    Thank you

  • I just checked the range with the people who use it, and it covers over 25 years. So I don't need to partition the whole range. The ranges I need are as follows:

    360000000 to 360120000

    370000000 to 370140000

    so I will only create 156 partitions with ranges of 1,000 for this case.

    Thank you 🙂

    I will create 2 filegroups and 2 files for this case.

  • Is this for the same database as your other partitioning work? If so, you may consider adding the date column to this table and partitioning everything by date. This ensures that all of your data for a given date range lives inside the same partition. Just something to consider. Yes, you'd have to add the date column to the table if it does not already exist, but if your data can be arranged by date, it makes sense to keep the data together within a partition.

    If you choose to go this route, you'd add the date column and use the same partition function/scheme as you did for your other table.
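    If the date route is taken, the function/scheme pair might look something like this (a hedged sketch only; the object names, boundary dates, and filegroup are illustrative placeholders, not the ones from the other thread):

```sql
-- Hypothetical monthly RANGE RIGHT partitioning on a datetime column.
-- With RANGE RIGHT, each boundary value becomes the first value of the
-- partition to its right, so '2009-02-01' starts the February partition.
CREATE PARTITION FUNCTION pf_MonthlyDate (datetime)
AS RANGE RIGHT FOR VALUES ('2009-01-01', '2009-02-01', '2009-03-01');

-- Mapping every partition to a single filegroup avoids managing many files.
CREATE PARTITION SCHEME ps_MonthlyDate
AS PARTITION pf_MonthlyDate ALL TO ([PRIMARY]);
```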

    John Rowan


  • Yes, it is the same database, and it has a datetime column. But this table is joined only on the ID; we don't do any kind of reporting by date from this table. That's why I was planning to do a key partition (int) on one of the clustered index columns.

    If I partition on the datetime column, will it still work for my queries when the join is on RangeID and BoxID?

  • Before you actually start using partitions, you really need to read the technical docs about them!

    If you don't keep the guidelines in mind, you may end up with poorly performing objects that don't serve you as expected, bad recovery options, ...

    Books Online and TechNet have good info!

    Start here:

    - http://technet.microsoft.com/en-us/library/ms188706%28SQL.90%29.aspx

    - http://technet.microsoft.com/en-us/library/cc966412.aspx

    Johan

    Learn to play, play to learn !

    Dont drive faster than your guardian angel can fly ...
    but keeping both feet on the ground wont get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help[/url]

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

    Need a bit of Powershell? How about this

    Who am I ? Sometimes this is me but most of the time this is me

  • Now I'm confused.

  • This is what I did:

    ALTER DATABASE [DATABASE] ADD FILEGROUP [RANGEID];

    ALTER DATABASE [DATABASE]
    ADD FILE
    (
        NAME = 'RANGEID',
        FILENAME = 'F:\MSSQL.1\MSSQL\Data\RANGEID.ndf',
        SIZE = 20000 KB,
        FILEGROWTH = 10%
    ) TO FILEGROUP [RANGEID];

    CREATE PARTITION FUNCTION [function] (int)
    AS RANGE RIGHT
    FOR VALUES
    (
     360000000, 360001000, 360002000, 360003000, 360004000, 360005000, 360006000, 360007000, 360008000,
     360009000, 360010000, 360011000, 360012000, 360013000, 360014000, 370000000, 370001000, 370002000,
     370003000, 370004000, 370005000, 370006000, 370007000, 370008000, 370009000, 370010000, 370011000,
     370012000, 370013000, 370014000, 370015000, 370016000, 370017000, 370018000, 370019000, 370020000,
     370021000, 370022000, 370023000, 370024000, 370025000, 370026000, 370027000, 370028000, 370029000,
     370030000, 370031000, 370032000, 370033000, 370034000, 370035000, 370036000, 370037000, 370038000,
     370039000, 370040000, 370041000, 370042000, 370043000, 370044000, 370045000, 370046000, 370047000,
     370048000, 370049000, 370050000, 370051000, 370052000, 370053000, 370054000, 370055000, 370056000,
     370057000, 370058000, 370059000, 370060000, 370061000, 370062000, 370063000, 370064000, 370065000,
     370066000, 370067000, 370068000, 370069000, 370070000, 370071000, 370072000, 370073000, 370074000,
     370075000, 370076000, 370077000, 370078000, 370079000, 370080000, 370081000, 370082000, 370083000,
     370084000, 370085000, 370086000, 370087000, 370088000, 370089000, 370090000, 370091000, 370092000,
     370093000, 370094000, 370095000, 370096000, 370097000, 370098000, 370099000, 370100000, 370101000,
     370102000, 370103000, 370104000, 370105000, 370106000, 370107000, 370108000, 370109000, 370110000,
     370111000, 370112000, 370113000, 370114000, 370115000, 370116000, 370117000, 370118000, 370119000,
     370120000, 370121000, 370122000, 370123000, 370124000, 370125000, 370126000, 370127000, 370128000,
     370129000, 370130000, 370131000, 370132000, 370133000, 370134000, 370135000, 370136000, 370137000,
     370138000, 370139000, 370140000
    );

    CREATE PARTITION SCHEME [scheme]
    AS PARTITION [function] ALL TO ([RANGEID]);

    ALTER TABLE dbo.[table]
    ADD CONSTRAINT PK_pk PRIMARY KEY (RangeID, BoxID, Position)
    WITH (IGNORE_DUP_KEY = ON) ON [scheme] (RangeID);
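    Once the function and scheme exist, it's worth verifying which partition a given key maps to and how the rows are distributed. A sketch, assuming the object names from the script above (`[function]`, `dbo.[table]`); the sample value is illustrative:

```sql
-- Which partition does a sample RangeID value fall into?
SELECT $PARTITION.[function](360005500) AS partition_number;

-- Row counts per partition for the table's clustered index.
SELECT partition_number, rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('dbo.[table]')
  AND index_id = 1
ORDER BY partition_number;
```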

  • MTY-1082557 (11/9/2009)


    now, I'm confused

    On what?

    I think what ALZDBA is getting at is: don't get too partition-happy. In the other thread that you and I worked on, partitioning seemed to be a good solution to keep you from having to move older data off to another DB. It also gives you the added benefit of better query performance (when querying with the partition key) and some possible import improvements.

    So partitioning worked well for that scenario; it solved some problems you had with your current design. That does not mean partitioning is the answer every time. There are many things to consider when working with table partitioning, such as maintaining the partition functions/schemes, working with additional files/filegroups (if configured that way), backup/restore implications, etc.

    There is give and take with using partitioning, so be warned and fully consider the use before jumping in with both feet.

    John Rowan


  • That's indeed what I intended, John! Thank you for stating it more clearly!

    BTW, I wasn't aware you guys had already worked on the subject in another thread.

    To OP:

    - Pre-size your DB files to the best of your knowledge!

    - If you need autogrow, grow in reasonable amounts of MB, not in percentages, to avoid OS-level file fragmentation!

    - Implement as many best practices as you can!

    Why would you partition on a 20MB file anyway ?
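    Those settings can be changed on an existing file, for example (a sketch only; the database/file names mirror the script earlier in the thread, and the sizes are illustrative, not a recommendation for this system):

```sql
-- Pre-size the file and switch autogrow from 10% to a fixed MB increment.
ALTER DATABASE [DATABASE]
MODIFY FILE (NAME = 'RANGEID', SIZE = 8192MB, FILEGROWTH = 512MB);
```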

    Johan


  • Why would you partition on a 20MB file anyway ?


    Actually, I made a mistake in the file size. After I created my partitions and moved the data into them, the file grew to 8 GB. Each partition has approximately 1 million records. As you can see, I'm new to working with this stuff, but I'm really having fun learning how to do it.

    I tried a couple of my queries and I haven't seen much performance improvement. I can see in the execution plan that the query takes the information from the partition.

    I needed to partition this table because it's growing fast and I can archive my data this way. Plus, it could improve performance for some of the queries, since the partition key (an int column) appears in all of them.

    Any other advice?

  • ALZDBA (11/10/2009)


    btw I wasn't aware you guys had already worked on the subject in another thread.

    Yea, we discussed using partitioning in place of an archive process that moved data out of a table into a separate database. Adding partitioning by date allowed all of the data, historical and current, to live in one DB without affecting query performance. Here's the thread if you want the background: http://qa.sqlservercentral.com/Forums/Topic811677-146-3.aspx#bm815198

    This thread is apparently for another table, I assume in the same DB. That's why I recommended partitioning by date again if possible, especially to simplify files/filegroups.

    John Rowan


  • This thread is apparently for another table, I assume in the same DB. That's why I recommended partitioning by date again if possible. Especially to simplify files/filegroups.

    Yes, this table is in the same database. After partitioning by int, I checked my queries and the performance gain is very minimal. The execution plan shows the partition being selected, but it doesn't improve performance.

    My queries join more than 3 tables. As I mentioned before, the only join between the partitioned table and the rest of the tables is on RangeID, so I'm not seeing a performance benefit for my queries.

    I want to know whether, if I change the partition to date, I will see a performance improvement.
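    For what it's worth, the optimizer can generally only eliminate partitions when the partitioning column is constrained directly in a predicate, not merely used as a join key on other columns. A sketch of the kind of query shape that allows elimination (`dbo.Boxes` and `BoxName` are hypothetical names for illustration):

```sql
-- Partition elimination can occur here because RangeID (the partitioning
-- column) is constrained directly in the WHERE clause, so only the
-- partitions covering that range need to be scanned.
SELECT t.RangeID, t.BoxID, b.BoxName
FROM dbo.[table] AS t
JOIN dbo.Boxes AS b
  ON b.BoxID = t.BoxID
WHERE t.RangeID BETWEEN 360000000 AND 360000999;
```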
