
Recovery Models and Backup Strategies for SQL Server 2008


I had a question via e-mail from one of my new students at University of Denver, University College this morning, asking why the transaction log on one of his SQL Server databases seemed to fill up so quickly, and then grow in size.  This is probably one of the most common problems that novice and “accidental” DBAs run into, so I thought I would try to address it here.

SQL Server databases can use one of three different recovery models: Full, Bulk-Logged, or Simple. Depending on which edition (Express, Standard, Developer, or Enterprise) of SQL Server you are using, SQL Server will set the recovery model for a newly created database to either Simple or Full by default. Regardless of what SQL Server does by default, you should be aware of what recovery model each of your databases is using, and make sure that it is the correct choice to help you meet your business requirements for things like recovery time objective (RTO) and recovery point objective (RPO).
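
For example, this simple query shows the recovery model for every database on the instance:

SELECT name, recovery_model_desc
FROM sys.databases;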

In most production situations, you will probably want to be running with the Full recovery model. If you are using the Full recovery model, you have to periodically back up your transaction log. By default, a log backup truncates the inactive portion of the log, which allows the space inside the log file to be reused. It’s like draining the water out of a bathtub. If you don’t do this, the transaction log will eventually fill up completely. If autogrowth is enabled for the transaction log file, it will then grow, and continue to grow each time the log file fills up, until you run out of disk space or you hit the file growth limit for that file. At that point, the database is effectively read-only until you correct the issue.
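
A routine log backup can be as simple as this (the database name and backup path are placeholders; adjust them for your environment):

BACKUP LOG [YourDatabase] TO DISK = N'C:\SQLBackups\YourDatabase.trn';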

I have seen this scenario so many times! Someone has a small, 100MB database that is running in the Full recovery model, and they never take any transaction log backups. After a few weeks or months, they discover that their transaction log file is 200GB in size, and they have a horde of angry users who want to know why they are getting errors about the transaction log being full and unable to grow. This is an easy situation to prevent if you implement an effective backup strategy. Depending on the RTO and RPO requirements for the business, you need to create a schedule of when to take Full, Differential, and Transaction Log backups. You also need to consider how much your data changes, so you have an idea of how large to make your transaction log file, so that it does not fill up completely in between transaction log backups.
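
To get a feel for how quickly your log fills up between backups, you can check the current size and percentage used of every transaction log on the instance:

DBCC SQLPERF(LOGSPACE);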

Don’t forget to consider things like index creation and maintenance (which generate lots of transaction log activity) as you are deciding how large to make your transaction log file. I typically size my transaction log files to be large enough to withstand at least 12 hours of normal write activity, so that I have a nice cushion and autogrow should never need to kick in. It is much better to manually manage your transaction log file size, leaving autogrow enabled only as a safety measure.
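
As a sketch, presizing a log file looks like this (the 4GB size and 512MB growth increment are just illustrative numbers; base yours on your observed write activity):

ALTER DATABASE [YourDatabase] MODIFY FILE ( NAME = N'YourDatabase_log', SIZE = 4096MB, FILEGROWTH = 512MB );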

For example, depending on your RTO and RPO, you might decide to take a Full database backup every night at 1AM, Differential backups at 8AM, 12PM, and 6PM, and Transaction Log backups every thirty minutes. You would want to manually grow your transaction log file to be large enough to hold at least 12 hours of normal write activity. Remember, Windows instant file initialization does not work with transaction log files, so it will take some time to create or grow a large transaction log file.
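
That schedule matters because of how the backups chain together at restore time. To recover to, say, 2:15PM, you would restore the 1AM Full backup and the 12PM Differential with NORECOVERY, and then roll forward the Transaction Log backups taken since noon. A hypothetical restore sequence (all file names are placeholders) might look like this:

RESTORE DATABASE [YourDatabase] FROM DISK = N'C:\SQLBackups\YourDatabaseFull.bak' WITH NORECOVERY;
RESTORE DATABASE [YourDatabase] FROM DISK = N'C:\SQLBackups\YourDatabaseDiff_12PM.bak' WITH NORECOVERY;
RESTORE LOG [YourDatabase] FROM DISK = N'C:\SQLBackups\YourDatabase_1230PM.trn' WITH NORECOVERY;
RESTORE LOG [YourDatabase] FROM DISK = N'C:\SQLBackups\YourDatabase_0100PM.trn' WITH NORECOVERY;
-- ...continue with each log backup in sequence, then stop at the point in time you need
RESTORE LOG [YourDatabase] FROM DISK = N'C:\SQLBackups\YourDatabase_0230PM.trn' WITH STOPAT = N'2010-03-15T14:15:00', RECOVERY;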

The worst scenario is to start off with a very small transaction log file that has autogrow enabled with a very small file growth increment. This will cause the transaction log file to grow often, in small increments, which will cause your virtual log file (VLF) count to be very high. A very high VLF count can cause lots of problems, such as slow database startup, slow restores, and slow transaction log backups. You can run the DBCC LOGINFO command against your database to find out how many VLFs are in the transaction log file; the fewer, the better.
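
If you want an actual count instead of eyeballing the output, you can capture DBCC LOGINFO into a temporary table and count the rows. This column layout matches SQL Server 2008 (SQL Server 2012 and later add a RecoveryUnitId column):

CREATE TABLE #VLFInfo
(FileID int, FileSize bigint, StartOffset bigint,
 FSeqNo int, [Status] int, Parity int, CreateLSN numeric(38));
INSERT INTO #VLFInfo
EXEC ('DBCC LOGINFO');
SELECT COUNT(*) AS [VLF Count] FROM #VLFInfo;
DROP TABLE #VLFInfo;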

The script below illustrates some of these concepts. It is meant for use with SQL Server 2008 Enterprise or Developer Edition, since the backup compression it uses is an Enterprise-only feature in SQL Server 2008.

-- Adventures with SQL Server 2008 Backup commands
-- Glenn Berry 
-- March 2010
-- http://glennberrysqlperformance.spaces.live.com/
-- Twitter: GlennAlanBerry
-- Make sure you have these directories:
-- C:\SQLData
-- C:\SQLLogs
-- C:\SQLBackups
-- These are some basic backup related commands
USE [master];
GO
-- Check recovery model for user databases
SELECT name, recovery_model_desc, log_reuse_wait_desc, database_id
FROM sys.databases
WHERE database_id > 4;
-- Change recovery model for AdventureWorks
ALTER DATABASE [AdventureWorks] SET RECOVERY SIMPLE WITH NO_WAIT;
GO
ALTER DATABASE [AdventureWorks] SET RECOVERY BULK_LOGGED WITH NO_WAIT;
GO
ALTER DATABASE [AdventureWorks] SET RECOVERY FULL WITH NO_WAIT;
GO
-- Take full backup of AdventureWorks with backup compression
BACKUP DATABASE [AdventureWorks] TO  DISK = N'C:\SQLBackups\AdventureWorksFullCompressed.bak' WITH NOFORMAT, INIT,  
NAME = N'AdventureWorks-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION,  STATS = 10;
GO
-- Take full backup of AdventureWorks without backup compression
BACKUP DATABASE [AdventureWorks] TO  DISK = N'C:\SQLBackups\AdventureWorksFull.bak' WITH NOFORMAT, INIT,  
NAME = N'AdventureWorks-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, NO_COMPRESSION,  STATS = 10;
GO
-- Take differential backup of AdventureWorks with backup compression
BACKUP DATABASE [AdventureWorks] TO  DISK = N'C:\SQLBackups\AdventureWorksDifferential.bak'
WITH  DIFFERENTIAL , NOFORMAT, INIT,  NAME = N'AdventureWorks-Differential Database Backup', 
SKIP, NOREWIND, NOUNLOAD, COMPRESSION,  STATS = 10;
GO
-- Take a transaction log backup of AdventureWorks with backup compression
BACKUP LOG [AdventureWorks] TO  DISK = N'C:\SQLBackups\AdventureWorks.trn' WITH NOFORMAT, INIT,  
NAME = N'AdventureWorks-Transaction Log  Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION,  STATS = 10;
GO
-- Create a small database to play with
-- Drop database if it exists
IF  EXISTS (SELECT name FROM sys.databases WHERE name = N'TestDB')
    BEGIN
        ALTER DATABASE [TestDB] SET  SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE [TestDB];
    END
GO
-- Create new database (change file paths if needed)
-- This purposely uses very small file and growth sizes
CREATE DATABASE [TestDB] ON  PRIMARY 
( NAME = N'TestDB', FILENAME = N'C:\SQLData\TestDB.mdf' , 
  SIZE = 3MB , FILEGROWTH = 1MB )
 LOG ON 
( NAME = N'TestDB_log', FILENAME = N'C:\SQLLogs\TestDB.ldf' , 
  SIZE = 1MB , FILEGROWTH = 1MB)
GO
-- Make sure database is using Full recovery model
ALTER DATABASE [TestDB] SET RECOVERY FULL WITH NO_WAIT;
GO
-- Take full backup of TestDB with backup compression
BACKUP DATABASE [TestDB] TO  DISK = N'C:\SQLBackups\TestDBFullCompressed.bak' WITH NOFORMAT, INIT,  
NAME = N'TestDB-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION,  STATS = 10;
GO
-- Switch to new database
USE TestDB;
GO
-- Do some things that will grow the database and fragment indexes
-- Create a "bad" table 
-- (never use a UNIQUEIDENTIFIER for your PK, clustered index!)
IF  EXISTS (SELECT * FROM sys.objects 
            WHERE object_id = OBJECT_ID(N'[dbo].[TestBadTable]') 
            AND type in (N'U'))
DROP TABLE [dbo].[TestBadTable];
GO
CREATE TABLE [dbo].[TestBadTable](
    [myID] [uniqueidentifier] NOT NULL,
    [myColumn] [varchar](2000) NULL,
 CONSTRAINT [PK_TestBadTable] PRIMARY KEY CLUSTERED 
(
    [myID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, 
 ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY];
GO
-- Find out how many VLFs you have
DBCC LogInfo;
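-- Each row returned is one VLF; Status = 2 means that VLF is active and cannot be reused yet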
-- Insert some data to fragment the clustered table
SET NOCOUNT ON;
DECLARE @iteration INT = 0;
WHILE @iteration < 20000
BEGIN
    INSERT INTO dbo.TestBadTable(myID, myColumn)
    SELECT NEWID(), REPLICATE('A', 2000);
    
    SET @iteration = @iteration + 1;
END
GO
-- End of insert loop
-- Check Table size
EXEC sp_spaceused @objname = N'TestBadTable', @updateusage = N'true';
-- Delete all of the rows to generate more log activity
DELETE 
FROM dbo.TestBadTable;
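-- Unlike TRUNCATE TABLE, DELETE is fully logged row-by-row, so this generates lots of log records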
-- Check log reuse wait description for TestDB
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'TestDB';
-- Find out how many VLFs you have
DBCC LogInfo;
-- Individual File Sizes and space available for current database
-- Log file has grown...
SELECT name AS [File Name] , physical_name AS [Physical Name], size/128 AS [Total Size in MB],
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS [Available Space In MB]
FROM sys.database_files;
-- Take a transaction log backup of TestDB with backup compression
BACKUP LOG [TestDB] TO  DISK = N'C:\SQLBackups\TestDB.trn' WITH NOFORMAT, INIT,  
NAME = N'TestDB-Transaction Log  Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION,  STATS = 10;
GO
-- Check log reuse wait description for TestDB
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'TestDB';
-- Individual File Sizes and space available for current database
-- Transaction log file is same size, but mostly empty
SELECT name AS [File Name] , physical_name AS [Physical Name], size/128 AS [Total Size in MB],
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS [Available Space In MB]
FROM sys.database_files;
-- Shrink the transaction log file
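-- (the shrink can only remove inactive VLFs from the physical end of the log file)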
DBCC SHRINKFILE (N'TestDB_log' , 0, TRUNCATEONLY);
GO
-- Find out how many VLFs you have (should be lower)
DBCC LogInfo;
-- Take another transaction log backup of TestDB with backup compression
BACKUP LOG [TestDB] TO  DISK = N'C:\SQLBackups\TestDB.trn' WITH NOFORMAT, INIT,  
NAME = N'TestDB-Transaction Log  Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION,  STATS = 10;
GO
-- Shrink the transaction log file again
DBCC SHRINKFILE (N'TestDB_log' , 0, TRUNCATEONLY);
GO
-- Find out how many VLFs you have (should be even lower)
DBCC LogInfo;
-- Grow the transaction log file back to a reasonable size
-- with a reasonable file growth size
USE [master]
GO
ALTER DATABASE [TestDB] MODIFY FILE ( NAME = N'TestDB_log', SIZE = 500MB , FILEGROWTH = 500MB )
GO
-- Find out how many VLFs you have
DBCC LogInfo;
