
Procedure to Shrink the Database Transaction Log

This article will demonstrate the procedure to shrink the database transaction log.

Introduction

What is a database shrink? A shrink task is used to reclaim SQL Server disk space. This article will demonstrate a procedure to shrink the database transaction log.

Background

Occasionally, you might need to perform database maintenance, whether you run the pre-packaged MSDE or the full version of SQL Server. This maintenance should be done after you have deleted large amounts of data from the database (Example.mdf and Example_log.ldf), which usually happens when you delete a large number of events from the event log. Even after the data is removed, the .mdf and .ldf files need to be compacted (similar to an Outlook *.pst file after deleting emails) in order to recover disk space, increase efficiency, and reduce seek time.

More details are available from this link.
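
Before shrinking anything, it can help to confirm how much space the transaction log is actually using. A minimal check sketch follows; the database name Example is purely illustrative:

SQL
-- Report log file size and the percentage in use for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- List the physical and logical file names of the current database
USE Example;       -- hypothetical database name
EXEC sp_helpfile;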

Using the Code

Before we start using this code and look at how it works, I would like to share some basic information about the database shrink process.

The process has several control parameters. Most of the time, you only need to worry about the first four, as these are the big controls; the fifth is simply a fine-tuning control that rarely needs to come into play.

Parameter Description at a Glance

SQL
SET @LogicalFileName = 'Logical file name'; -- The logical name of the transaction log file (see sp_helpfile)
SET @MaxMinutes = 5;  -- Limit on time allowed to wrap the log, in minutes
SET @NewSize = 100;   -- Ideal size of the log file in MB
SET @Factor = 1.0;
/*
Factor determining the maximum number of pages to pad out,
based on the original number of pages in use (a single page = 8K).
Values in the range 1.0 - 0.8 seem to work well for many databases.
Increasing the number increases the maximum number of pages allowed
to be padded, which should force larger amounts of data to be dropped
before the process finishes. This often speeds up the shrinking of
very large databases that are still going through the process when
the timer runs out.
Decreasing the number decreases the maximum number of pages allowed
to be padded, which should force less work to be done. This often
helps force smaller databases to shrink to minimum size when larger
values were actually expanding them.
*/
DBCC SHRINKDATABASE

Shrinks the size of the data and log files in the specified database.

More details are available from this link.
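
For reference, here is a minimal sketch of calling DBCC SHRINKDATABASE directly; the database name Example and the 10 percent free-space target are assumptions for illustration only:

SQL
-- Shrink all data and log files of the Example database,
-- leaving roughly 10 percent free space in each file
DBCC SHRINKDATABASE (Example, 10);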

Using this code is very simple. Just follow the given procedure:

  1. Set the name of your database.

    Example: This is the database whose log will be shrunk. In USE [databasename], replace databasename with the name of your database.

  2. Set the logical file name.

    Example: Use sp_helpfile to identify the logical file name that you want to shrink.

    SQL
    SET @LogicalFileName = 'database_Log';

* Do the same here: replace database_Log with the logical name of your database's log file (see the sp_helpfile sketch below).

* Run the script. If it does not run, the most likely cause is that the logical file name is not correct.
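
A minimal sp_helpfile sketch for finding the logical log file name; the database name Example is hypothetical, and the log file's logical name is usually <database>_log:

SQL
USE Example;       -- hypothetical database name
EXEC sp_helpfile;  -- the "name" column lists the logical file names,
                   -- e.g. Example and Example_log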

Note: This has been tested successfully on SQL Server 7.0, 2000, and 2005 databases: 40 GB went down to under 1 GB, and 176 GB went down to 105 MB.

The code example is given below:

SQL
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

/*
EXEC dbo.spShrinSQLServerTransactionLog 'TempE'
*/

-- =============================================
-- Author:        <Author,,Md. Marufuzzaman>
-- Create date: <Create Date,,24/07/2009>
-- Description:    <Description,, 
-- How to Shrink the SQL Server Transaction Log Using T-SQL>
-- =============================================

CREATE PROCEDURE [dbo].[spShrinSQLServerTransactionLog] 
 @DB_NAME  VARCHAR(100)
AS
BEGIN

/*
Shrink a named transaction log file belonging to a database

Originally found at;

http://support.microsoft.com/support/kb/articles/q256/6/50.asp

Changes:
28.04.2004
Modified the inner loop so that it tests the elapsed time, so long overruns do not happen
Modified the inner loop so that it has a fixed minimum quantity
			so there is no skip-in/skip-out

25.07.2009
Modified the inner loop so that it has a dynamic minimum quantity to allow faster shrinkage

25.07.2009
Modified the varchar type data conversion

*/

SET NOCOUNT ON

DECLARE  @LogicalFileName SYSNAME
        ,@MaxMinutes INT
        ,@NewSize INT
        ,@Factor FLOAT

/*
The process has several control parameters. Most of the time you
only need to worry about the first four, as these are the big
controls; the fifth is simply a fine-tuning control which rarely
needs to come into play.
*/

--This is the name of the database for which the log will be shrunk.
--USE  @DB_NAME

--Use sp_helpfile to identify the logical file name that you want to shrink.
SET @LogicalFileName = @DB_NAME;
--Limit on time allowed to wrap log in minutes
SET @MaxMinutes = 5;
--Ideal size of logfile in MB
SET @NewSize =100;

/*
Factor determining maximum number of pages to pad out 
based on the original number of pages in use 
(single page = 8K).  Values in the range 1.0 - 0.8 seem to work well for many databases.

Increasing the number will increase the maximum number 
of pages allowed to be padded, which should
force larger amounts of data to be dropped before 
the process finishes.  Often speeds up shrinking
very large databases which are going through the process before the timer runs out.

Decreasing the number will decrease the maximum 
number of pages allowed to be padded, which should
force less work to be done.  Often aids with forcing 
smaller databases to shrink to minimum size
when larger values were actually expanding them.

*/
SET @Factor = 1.0;                        

/*
All code after this point is driven by these parameters 
and will not require editing unless you need to 
fix a bug in the padding/shrinking process itself.
*/

-- Setup / initialize
DECLARE @OriginalSize INT,
        @StringData VARCHAR(500)

SELECT @OriginalSize = size -- in 8K pages
FROM sysfiles
WHERE name = @LogicalFileName;

SELECT @StringData = 'Original Size of ' + db_name() + ' LOG is ' + 
    CONVERT(VARCHAR,@OriginalSize) + ' 8K pages or ' + 
    CONVERT(VARCHAR,(@OriginalSize*8/1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName;

PRINT @StringData;
PRINT ''

--Drop the temporary table if it already exists
IF ( OBJECT_ID('[dbo].[DummyTransaction]') IS NOT NULL )
  DROP TABLE [DummyTransaction]

CREATE TABLE [DummyTransaction]( [DummyColumn] CHAR(8000) NOT NULL );

DECLARE @Counter   INT,
        @MaxCount  INT,
        @StartTime DATETIME,
        @TruncLog  VARCHAR(500)

-- Try an initial shrink.
DBCC SHRINKFILE (@LogicalFileName, @NewSize)

SET @TruncLog = 'BACKUP LOG [' + db_name() + '] WITH TRUNCATE_ONLY';
EXEC (@TruncLog)

-- Configure limiter
IF @OriginalSize / @Factor > 50000
    SET @MaxCount = 50000
ELSE
    SET @MaxCount = @OriginalSize * @Factor

-- Attempt to shrink down the log file
PRINT 'Minimum Quantity : '+ CAST( @MaxCount AS VARCHAR(10) )
PRINT 'Maximum Time : '+ CAST( @MaxMinutes AS VARCHAR(10) )
	+' minutes ('+ CAST( @MaxMinutes*60 AS VARCHAR(10) )+' seconds)'
PRINT ''

SET @Counter = 0;
SET @StartTime = GETDATE();

--loop the padding code to reduce the log while
-- within time limit and 
-- log has not been shrunk enough
WHILE (
    (@MaxMinutes*60 > DATEDIFF(ss, @StartTime, GETDATE())) AND
    (@OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)) AND
    ((@OriginalSize * 8 / 1024) > @NewSize)
)
BEGIN --Outer loop.

    --pad out the logfile a page at a time while
    -- number of pages padded does not exceed our maximum page padding limit
    -- within time limit and 
    -- log has not been shrunk enough
    WHILE (
        (@Counter < @MaxCount) AND 
        (@MaxMinutes*60 > DATEDIFF(ss, @StartTime, GETDATE())) AND
        (@OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)) AND
        ((@OriginalSize * 8 / 1024) > @NewSize)
    )
    BEGIN --Inner loop
        
        INSERT INTO [DummyTransaction] VALUES ('Fill Log')  -- Because it is
					-- a CHAR(8000) field, each row inserts 8000 bytes.
        DELETE FROM [DummyTransaction]
        SELECT @Counter = @Counter + 1

        --Every 1,000 cycles tell the user what is going on
        IF ROUND( @Counter , -3 ) = @Counter
        BEGIN
            PRINT 'Padded '+ LTRIM( CAST( @Counter*8 AS VARCHAR(10) ) )
		+'K @ '+ LTRIM( CAST( DATEDIFF( ss, @StartTime, GETDATE() ) AS VARCHAR(10) ) )
		+' seconds';
        END
    END

    --See if a trunc of the log shrinks it.
    EXEC( @TruncLog )

END
PRINT ''

SELECT @StringData = 'Final Size of ' + db_name() + ' LOG is ' +
   CONVERT(VARCHAR(30),size) + ' 8K pages or ' + 
   CONVERT(VARCHAR(30),(size*8/1024)) + 'MB'
FROM sysfiles 
WHERE name = @LogicalFileName;

PRINT @StringData
PRINT ''

DROP TABLE [DummyTransaction];
PRINT '*** Perform a full database backup ***'

SET NOCOUNT OFF

END

GO
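
A minimal usage sketch is shown below, assuming the procedure has been created in the database whose log you want to shrink. The database name Example, the logical log file name Example_log, and the backup path are hypothetical; substitute your own values (use sp_helpfile to find the logical log file name).

SQL
USE Example;   -- hypothetical database whose log will be shrunk
GO

-- Pass the logical name of the transaction log file, not the physical file name
EXEC dbo.spShrinSQLServerTransactionLog 'Example_log';
GO

-- The procedure truncates the log, so take a full database backup afterwards
BACKUP DATABASE Example TO DISK = 'D:\Backup\Example.bak';   -- hypothetical path
GO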

Conclusion

I hope that this article might be helpful to you. Enjoy!

History

  • 25th July, 2009: Initial post

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


