Insignificant change in table size after adding two columns

  • Dear Experts,

    Before adding two columns (one bit column and one numeric column), the database size was 189 GB. After adding the bit column with a value of true and the numeric column with NULL values (the table has 547,173,777 rows), the database size increased to 249 GB. I don't understand why.

    According to my projection, adding a bit column to a 100 GB table with 547,173,777 rows should not increase the size by more than 1-2 GB.

    But it increased by 60 GB.

    I tried rebuilding the index and also shrinking the database, but there was no significant change.

    I don't understand why it increased that much.

    Can you please help me reclaim the table space?

    Thanks

    Rajat Jaiswal

    http://www.indiandotnet.wordpress.com

  • Hi,

    I don't know why the database size increased so drastically, but I later recovered the space.

    Please see the link below.

    http://indiandotnet.wordpress.com/2011/04/27/space-issue-of-sql-server-database-how-to-claim-free-space-of-sql-server/

    Thanks & Regards

    Rajat Jaiswal

    http://www.indiandotnet.wordpress.com

  • Hi Rajat,

    My guess is that your table did a lot of page splits, resulting in a lot of unused space. When you rebuild your clustered index, the table size drops down to the absolute minimum.

    If your table is about 100 GB and you have about 500 million rows, your average row size must be around 200 bytes, so you can fit roughly 40 rows on an 8 KB page. When those pages split, each one is left about half empty, and across millions of pages this adds up to a lot of unused space.

    You can rebuild or re-organize the clustered index to fix the issue. It will come back as you start to fill the new numeric field, but maybe not as bad as what you have seen.
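    To see how much of the problem is page-split fragmentation before and after a rebuild, something like this can help (a sketch; the database, table, and index names are placeholders):

```sql
-- Check fragmentation and page fullness for the table's indexes.
-- 'YourDatabase' and 'dbo.YourBigTable' are placeholder names.
SELECT index_id,
       avg_fragmentation_in_percent,
       avg_page_space_used_in_percent,   -- low values mean lots of unused space
       page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID('YourDatabase'),
         OBJECT_ID('dbo.YourBigTable'),
         NULL, NULL, 'SAMPLED');

-- Rebuild the indexes to compact the pages and reclaim the unused space.
ALTER INDEX ALL ON dbo.YourBigTable REBUILD;
```

    A low `avg_page_space_used_in_percent` combined with a high `page_count` is the signature of the half-empty pages left behind by page splits.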

    HIH,

    Henrik Staun Poulsen

    Stovi Software

  • henrik staun poulsen (4/28/2011)



    Hi Mr. Henrik Staun Poulsen,

    Thanks for the response.

    Although I tried different options to regain the space, INDEXDEFRAG gave the best result.

    I have one more issue, related to wait types: while observing the status of running procedures, I found that statements were often in a suspended state with the wait types PAGEIOLATCH_EX and PAGEIOLATCH_SH.

    I need to update a numeric field in the same database, and I am a bit afraid because it takes too much time. Is there any solution for the PAGEIOLATCH_EX issue?

    Thanks

    Rajat Jaiswal

    http://www.indiandotnet.wordpress.com

  • Hi Rajat,

    According to dbtuna, you are just waiting for the hard disk to respond. Other than getting a faster disk (an SSD?), I have no suggestions.
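    For reference, you can confirm that I/O latch waits dominate without a third-party tool by querying the wait statistics directly (a sketch; the excluded wait types are just common idle waits, not an exhaustive list):

```sql
-- Top waits on the instance since the last restart; PAGEIOLATCH_* near
-- the top confirms the server is mostly waiting on data-file I/O.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```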

    I have just been through a session of adding a date field to the six largest tables in our database, a total of 6 billion rows. I ended up exporting each table into a new table (with the new field), which could be done online. When I had the majority of rows copied over, I took the system off-line, copied the remaining rows, and renamed the new table to the old table name. So we were only off-line for about 3 hours, which was fine.

    It cost a bit of disk space, but my manager paid for that.
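    The copy-and-rename approach above can be sketched roughly like this (the table, column, and cut-off names are placeholders, and a real run would also need to recreate indexes, constraints, and permissions on the new table):

```sql
-- 1. While the system is online, create the new table with the extra
--    column and copy the bulk of the rows ('dbo.BigTable' is a placeholder).
SELECT *, CAST(NULL AS date) AS NewDateField
INTO dbo.BigTable_New
FROM dbo.BigTable
WHERE Id <= 500000000;   -- some stable cut-off point

-- 2. During the maintenance window, copy the remaining rows...
INSERT INTO dbo.BigTable_New
SELECT *, CAST(NULL AS date)
FROM dbo.BigTable
WHERE Id > 500000000;

-- 3. ...and swap the names so the new table takes over.
EXEC sp_rename 'dbo.BigTable', 'dbo.BigTable_Old';
EXEC sp_rename 'dbo.BigTable_New', 'dbo.BigTable';
```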

    Best regards,

    Henrik

  • henrik staun poulsen (4/29/2011)



    Hi Mr. Henrik,

    Thanks for the reply.

    Your idea certainly sounds good to me, but in my current project's situation I cannot apply it. Any other solution?

    Thanks & Best Regards

    Rajat Jaiswal

    http://www.indiandotnet.wordpress.com

  • Hi Rajat,

    Depending on your needs, you could add the new column as nullable. This allows the column to be added to the table very quickly. Then update the values in smaller batches (e.g. 1,000 rows at a time). If need be, you can back up your transaction log every N batches.
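    A batched update like the one described above might look like this (a sketch; the table name, column name, batch size, and new value are placeholders, and the loop assumes unprocessed rows can be identified by the new column still being NULL):

```sql
-- Update the new nullable column in batches of 1000 rows so each
-- transaction stays small and the log does not balloon.
DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (1000) dbo.YourBigTable
    SET NewNumericField = 0          -- placeholder value/expression
    WHERE NewNumericField IS NULL;

    SET @rows = @@ROWCOUNT;         -- 0 once every row has been updated

    -- Optionally run BACKUP LOG here every N batches to keep the log small.
END;
```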
