_DBREINDEX and MDF file size

  • I am running a _dbreindex database job in production that keeps failing with the following error: "Could not allocate space for object in database '*' because the 'PRIMARY' filegroup is full."

    Size of drive: 49.9 GB (the mdf file is the only thing on the drive)

    Size of data file prior to running the job: 26.7 GB

    I checked the size after the job failed and it was 41 GB.

    I have tried shrinking the database first to reduce the file size and then reindexing, but nothing seems to work. I would like to add additional space only as a last resort. Is there a workaround or patch?

    DBCC SHRINKDATABASE (databasename, 10)
    go
    exec _dbreindex databasename
    go

    Any suggestions? 

    Catherine

  • You could try reindexing the tables a few at a time by opening a cursor. The problem is that there is not enough space on the disk to reindex everything at once, but a few tables at a time can be reindexed. Once the reindexing is done, shrinking the database will get rid of all the unused space in the file.
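
    A rough sketch of that cursor approach (this assumes the standard SQL 2000 DBCC DBREINDEX command, which accepts a table name; try it on a non-production copy first):

        DECLARE @tbl sysname
        DECLARE tbl_cur CURSOR FOR
            SELECT table_name FROM information_schema.tables
            WHERE table_type = 'BASE TABLE'
        OPEN tbl_cur
        FETCH NEXT FROM tbl_cur INTO @tbl
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- rebuild all indexes on this table, one table at a time
            DBCC DBREINDEX (@tbl)
            FETCH NEXT FROM tbl_cur INTO @tbl
        END
        CLOSE tbl_cur
        DEALLOCATE tbl_cur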

  • The first time I tried reindexing all of my tables in SQL 2000 I ran into exactly the same issue. I just had to keep expanding the database until it eventually finished; I don't like to let databases grow automatically. The difference is that I had plenty of disk space to let the reindexing job finish. The other post makes sense for you: just break up the reindexing by doing your tables in chunks. Reindexing needs considerably more space than your database currently has allocated, and I don't think there is any way around that other than adding more disk space or reindexing in chunks of tables.

  • Since I have a limited amount of space, would you suggest reindexing a table, doing a shrinkfile, then reindexing another table and shrinking again?

    Thanks,

    Catherine

  • The space used for the reindexing just becomes part of the free space when you look at your database afterwards. If you watch your used space, it should not get larger each time you reindex; reindexing simply needs the extra room while it works. For example, I have a 32 GB database but need close to 45 GB to reindex the whole thing, and after the weekly reindex the used space goes back to 32 GB. I would expand your database as far as you can (assuming you don't need the extra space for something else) and then reindex a group of tables at a time. After doing this for a bit you will get a good feel for how many tables you can reindex before hitting an insufficient-space error, and then you can move on to the next chunk...

  • You don't have to do that one table at a time. You can place the names of the tables in a table, then select a few at a time (maybe the top 10 names) and reindex those in a loop. Once a batch completes, DO NOT shrink the file until you finish the entire reindexing: after a reindex the empty space sits toward the end of the file, and that same space gets reused by the next reindex. If you shrink the file each time, the file just has to grow again when the next batch starts. So perform the reindex of all your tables in batches and shrink the file once at the end.

    Hope that helps.

    Manju
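
    As a sketch of that batching approach, assuming the table names have already been loaded into a work table (the name reindex_list here is just an example):

        -- Run this per batch: reindex the next 10 tables, then drop them from the list
        DECLARE @tbl sysname
        DECLARE batch_cur CURSOR STATIC FOR
            SELECT TOP 10 table_name FROM reindex_list
        OPEN batch_cur
        FETCH NEXT FROM batch_cur INTO @tbl
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DBCC DBREINDEX (@tbl)
            DELETE FROM reindex_list WHERE table_name = @tbl
            FETCH NEXT FROM batch_cur INTO @tbl
        END
        CLOSE batch_cur
        DEALLOCATE batch_cur

        -- Only once reindex_list is empty:
        -- DBCC SHRINKDATABASE (databasename, 10)

    A STATIC cursor is used here so that deleting rows from reindex_list inside the loop does not disturb the cursor's result set.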

  • I was able to reindex two of my larger tables using dbcc reindex on the table name. This weekend I will run the reindex in batches to test how many tables I can do without running out of space. Currently I am hard-coding the table names and running them in a job. What is the best way to get the table names for a specific database into a table? I have 700 tables in this database.

    Thanks,

    Catherine

  • use <database>
    insert into <table_name>
    SELECT table_name FROM information_schema.tables
    WHERE table_type = 'BASE TABLE'