Will T-Log disc damage cause DB crash?

  • Hi,

    We are going to migrate our Oracle database to SS2k and are designing the disk allocation. We need full recovery and want to put the t-log on a separate disk. My question is: if the t-log disk fails, will it cause the database to stop running or crash? And how can we recover the data to a point in time?

    Thanks in advance.

  • 1. If the TLog disk is damaged, SQL Server won't know where to write transactions. So I believe the answer is yes, a failure of the TLog disk will take the database down.

    2. You do point-in-time recovery using TLog backups. If your TLog disk fails, you would have to restore your TLog backups from tape (or other disk, depending on where you are backing up to). You would restore the TLogs to another drive (RESTORE LOG dbname WITH MOVE).

    Hope that answered your questions. If not let us know.

    -SQLBill
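
    A minimal sketch of the restore sequence described above, for SQL Server 2000. All database names, logical file names, and paths here are hypothetical examples, not values from the original poster's system:

    ```sql
    -- If the damaged log is still partially readable, back up its tail first
    BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_tail.trn' WITH NO_TRUNCATE;

    -- Restore the last full backup, relocating the log file to a healthy drive
    RESTORE DATABASE MyDB FROM DISK = 'D:\Backups\MyDB_full.bak'
        WITH NORECOVERY,
             MOVE 'MyDB_Log' TO 'E:\NewLogDisk\MyDB_log.ldf';

    -- Apply the chain of log backups, stopping at the desired point in time
    RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_log1.trn' WITH NORECOVERY;
    RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_tail.trn'
        WITH STOPAT = '2003-06-01 14:30:00', RECOVERY;
    ```

    Every log backup between the full backup and the target time must be applied in order; the STOPAT time must fall within the last log backup restored.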

  • SQLBill is right.

    If any of your disks (of course I mean disk arrays here - RAID 1, RAID 5...) hosting the data, indexes, log, system databases, or tempdb fails, you won't be able to use your system.

    Bye

    Gabor

  • Thank you. That is different from Oracle, which uses redo log files.

    Another question:

    Our database is about 60 GB, a mix of data warehouse content and moderately heavy transaction activity. How big a t-log file should I allocate? Is there any rule for sizing the data and t-log files? Can I split the t-log across two files?

    Thank you.

  • How big the tlog gets depends on how big your transactions are and how often you do a full or incremental backup. If you ran one transaction that modified 100M rows by updating large varchars, the log would be many GB (probably a bad idea anyway, because it would lock your table for a long time). If you ran very many small transactions per day and backed up once a month (a bad idea), your log would also grow large.

    If, on the other hand, you do a log backup every 15 minutes (maybe for log shipping, or you are just paranoid), I don't think it will grow very big.

    If disk space is not at a premium and you do not intend to detach and copy the database files around, set it to 10 GB and let it grow (when it grows, it slows SQL Server).
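
    Pre-sizing the log as suggested above could look like this. The database and logical file names are just examples:

    ```sql
    -- Pre-size the log to 10 GB so SQL Server doesn't stall on autogrow
    ALTER DATABASE MyDB
        MODIFY FILE (NAME = MyDB_Log, SIZE = 10GB);

    -- Keep autogrow as a safety net, but with a fixed increment
    ALTER DATABASE MyDB
        MODIFY FILE (NAME = MyDB_Log, FILEGROWTH = 512MB);
    ```

    A fixed growth increment is generally preferable to percentage growth on a large log, since each growth event pauses transactions while the new space is initialized.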

  • Hi Jennifer,

    Just to give you a general overview: we have a 90 GB database with occasionally rather high activity (updates or inserts of 300,000+ rows in one transaction, etc.) and our log size is over 20 GB. Since we have enough disk space, we never try to shrink it, although most of the time it is almost empty - performance is much better if the system does not need to autogrow the log file.

    We back up the log every 15 minutes during the day and once an hour during the night. Log backup files are generally between 1-10 MB, but after a massive update a log backup can be several GB.
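
    The 15-minute log backup described here is a single statement you schedule as a job; the names and path below are illustrative only:

    ```sql
    -- Periodic log backup; run this from a scheduled job every 15 minutes
    BACKUP LOG MyDB
        TO DISK = 'D:\Backups\MyDB_log.trn'
        WITH INIT;  -- overwrite; use NOINIT (or unique file names) to keep a chain on disk
    ```

    In practice you would generate a unique file name per run (or use NOINIT to append), since every backup in the chain is needed for point-in-time recovery.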

    I think that aberezin is right, 10GB is about the right size to start with - you'll soon see whether it is enough or not. There is no other way than to try; each system has its specifics.

  • When I can, I put log files on a disk by themselves, to limit the impact of a failure.
