How to determine SQL Server database transaction log usage



One crucial aspect of every database is the transaction log.  All transactions are written to the transaction log before the data is committed to the data file.  In some circumstances the transaction log can get quite large, and not knowing what is in it or how much space is being used can become a problem.  So how do you determine how much of the transaction log is being used and which portions are in use?


In most databases the transaction log is just one physical file (.ldf), but inside that file is a series of virtual log files (VLFs), as depicted below.

SQL Server virtual log files

source (SQL Server Books Online)

Each virtual log file is written to in turn, and once the data is committed and a checkpoint occurs the space becomes reusable, although this depends on your database recovery model, whether you are using replication, and your backup processing.  If no additional virtual logs are available, SQL Server will grow the transaction log, based on your database settings, to accommodate the additional space that is required. 

SQL Server vlfs

source (SQL Server Books Online)

How the file and its virtual logs are used depends on how the database is used and on other settings you have enabled in your database.  Whether you are publishing data from the database, and whether it is set to the Full or Bulk-Logged recovery model, will also affect whether the process loops back to the beginning of the file, uses the next available virtual log, or needs to grow the transaction log and create additional virtual logs.
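Before looking at the DBCC commands, it can help to see the log file's current size and autogrowth settings, since these control how the log grows when no virtual logs are free. A minimal sketch, assuming you run it in the context of the database you are checking (sizes are stored in 8 KB pages, hence the conversion to MB):

```sql
-- Sketch: inspect the transaction log file's size and autogrowth settings.
-- size, max_size and growth are stored in 8 KB pages (unless growth is a percent).
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE max_size
            WHEN -1 THEN 'unlimited'
            ELSE CAST(max_size * 8 / 1024 AS varchar(20)) + ' MB'
       END AS max_size,
       CASE is_percent_growth
            WHEN 1 THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(20)) + ' MB'
       END AS growth_setting
FROM sys.database_files
WHERE type_desc = 'LOG';
```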

Get space used by transaction logs using DBCC SQLPERF(logspace)

One command that is extremely helpful in understanding how much of the transaction log is being used is DBCC SQLPERF(logspace).  This one command will give you the current size of every database transaction log as well as the percent currently in use.  Running it on a periodic basis will give you a good idea of how the transaction logs are being used and how large they really need to be.  How big a transaction log should be is a question asked by a lot of people that use SQL Server, and as you run this command you will find there is no perfect answer; it all depends on criteria such as:

  • recovery model
  • size of the transactions
  • how large your tables are and therefore how much space is needed for index maintenance
  • how frequently you run transaction log backups
  • whether the database is published or not
  • etc.

To run this command issue the following in a query window:

DBCC SQLPERF(logspace)

This is sample output:

sample output

From here we can see the size of each transaction log as well as the percentage of log space currently in use.  If this percentage is high and the log is quite big, it is probably due to one of the items listed above. 
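If you want to trend this over time, one approach is to capture the DBCC SQLPERF(logspace) output into a table on a schedule. This is a sketch; the table name dbo.LogSpaceHistory is just an example:

```sql
-- Sketch: capture DBCC SQLPERF(logspace) output so log usage can be trended.
-- The table name is an example; schedule the INSERT via a SQL Agent job.
CREATE TABLE dbo.LogSpaceHistory (
    CaptureDate     datetime      NOT NULL DEFAULT GETDATE(),
    DatabaseName    sysname       NOT NULL,
    LogSizeMB       decimal(18,5) NOT NULL,
    LogSpaceUsedPct decimal(18,5) NOT NULL,
    Status          int           NOT NULL
);

INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF(logspace)');
```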

Getting information about SQL Server virtual logs using DBCC LOGINFO

The next command to look at is DBCC LOGINFO. This will give you information about the virtual log files inside your transaction log.  The primary thing to look at here is the Status column.  Since the file is written to sequentially and then wraps back to the beginning, you want to look at where the value of "2" appears in the output.  This tells you which portions of the log are in use and which are not (Status = 0).  Another thing to keep an eye on is the FSeqNo column. This is the virtual log sequence number, and the highest number is the most recently used virtual log.  If you keep running this command while issuing transactions, you will see these numbers keep changing.

To run this command issue the following in a query window:

DBCC LOGINFO
This is sample output:

sample output

If we now run a transaction log backup such as the following:

BACKUP LOG DBUtil TO DISK = 'C:\Backup\DBUtil.trn'

and then rerun the command you will see how the Status=2 has changed in the file. The last entry is still marked as in use, but the previous entries have been reset to 0.

sample output
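On SQL Server 2016 SP2 and later, similar VLF detail is available from documented DMVs instead of the undocumented DBCC LOGINFO, and sys.databases can tell you what is preventing log truncation when a backup does not reset the Status values as expected. A sketch, assuming the DBUtil database from the example above:

```sql
-- Sketch (SQL Server 2016 SP2+): VLF detail from a documented DMV
-- instead of DBCC LOGINFO.
SELECT file_id, vlf_sequence_number, vlf_active, vlf_status, vlf_size_mb
FROM sys.dm_db_log_info(DB_ID('DBUtil'));

-- If a log backup does not free up VLFs as expected, this column reports
-- what is preventing log truncation (e.g. LOG_BACKUP, ACTIVE_TRANSACTION,
-- REPLICATION).
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'DBUtil';
```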

Finding open SQL transactions using DBCC OPENTRAN

Another command to look at is DBCC OPENTRAN. This will show you whether there are any open transactions in your transaction log that have not completed or have not been committed.  These may be active transactions or transactions that for some reason never completed.  This can provide additional insight into why your transaction log is so big or why you may not be able to shrink it.  It will show both open transactions as well as any un-replicated transactions if the database is published.

To run this command issue the following in a query window:

DBCC OPENTRAN
This is sample output:

dbcc opentran output
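DBCC OPENTRAN also accepts a TABLERESULTS option, which is handy if you want to capture the output programmatically rather than read the message text. A minimal sketch:

```sql
-- Sketch: return DBCC OPENTRAN output as a result set instead of messages.
DBCC OPENTRAN WITH TABLERESULTS, NO_INFOMSGS;
```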

Now that you have an idea of how much of your transaction log is being used and what is being used, you can start making decisions about how large the transaction log should be.  Try to find the optimum size in order to eliminate having to shrink and grow the transaction log on a constant basis.  As with all database and server activity, it is best to minimize overhead as much as you can, and this is one area you can manage by creating and maintaining the optimum transaction log size.
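As a rough starting point for that sizing exercise, you can compare log size to data size across databases (a sketch; the 8 KB page to MB conversion is the same as in the earlier query, and any target ratio is up to you):

```sql
-- Sketch: compare data file size to log file size per database to spot
-- logs that are far larger than the data they protect.
SELECT DB_NAME(database_id) AS database_name,
       SUM(CASE WHEN type = 0 THEN size ELSE 0 END) * 8 / 1024 AS data_mb,
       SUM(CASE WHEN type = 1 THEN size ELSE 0 END) * 8 / 1024 AS log_mb
FROM sys.master_files
GROUP BY database_id;
```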

Next Steps
  • Make sure you are using the correct backup strategy based on your recovery model
  • If you need to shrink your transaction log file take a look at DBCC SHRINKFILE.
  • Here are some other tips regarding transaction log space.


About the author
Greg Robidoux is the President and founder of Edgewood Solutions, a technology services company delivering services and solutions for Microsoft SQL Server. He is also one of the co-founders of MSSQLTips.com. Greg has been working with SQL Server since 1999, has authored numerous database-related articles, and delivered several presentations related to SQL Server. Before SQL Server, he worked on many data platforms such as DB2, Oracle, Sybase, and Informix.

This author pledges the content of this article is based on professional experience and not AI generated.

View all my tips

Comments For This Article

Monday, December 19, 2022 - 2:59:43 AM - Lancyo Back To Top (90764)
I was wondering why StartOffset and FSeqNo do not have the same order. This reordering seems to happen when autogrow occurs. Do you have any information about that?
Is it possible to realign StartOffset and FSeqNo to make the log file more sequential?

Wednesday, December 8, 2021 - 10:10:48 AM - Greg Robidoux Back To Top (89548)
Hi Veggen,

If the database is in SIMPLE recovery model you don't need to do transaction log backups.

You could look at frequency of database CHECKPOINTs, see this article for more info.

Do you have large transactions that are running or are they all very small transactions?

Also, you mention the transaction log is several GBs. How large is it compared to the data files?


Wednesday, December 8, 2021 - 8:44:25 AM - Veggen Back To Top (89545)
The transaction log (ldf-file) contains virtual logs.
Each virtual log has a growing sequencenr.
Each virtual log has a status column. Status=0 (committed to db - truncated/available log_space) or Status=2 (in use)

Doing a backup of the transaction log changes the virtual logs with status=2 (in use) to status=0.
Only the virtual log having the latest/largest sequence number will still have status=2 (in use).

In a system we use Simple recovery model. I am not sure if the above regarding statuses only applies for full recovery model or is still relevant for Simple Model???

Problem: A few times we observe the logfiles become really big, even if the daily scheduled database and logfile backup has been performed.

Could someone please try to explain how the logfiles can grow this large (several Gb) when using a simple recovery model?

1. Is it just too many transaction being fired against the database in a short time,
causing the ldf to fill up faster than the transactions are being committed to the datafile?
I struggle to understand such load could happen for real.

Or could a single - for some reason "stuck" transaction cause this?

Would a more frequent backup of the transaction log (4 times a day) help, freeing up available log space? (not sure if this applies for Simple recovery model)

I want to create some sort of a Sql-query that could generate a warning or indicate something is currently wrong.

Regards Veggen

Friday, October 23, 2020 - 3:56:14 PM - Greg Robidoux Back To Top (86691)
Thanks Konstantin. I just updated the article and removed the NO_LOG backup. Probably not a good idea to include it, to make sure someone doesn't do something they didn't intend to do.


Friday, October 23, 2020 - 11:24:12 AM - Konstantin Momchilov Back To Top (86689)
Nice post as usual. Just one update: BACKUP LOG DBUtil WITH NO_LOG has not been in use since SQL 2005; these days backup to disk = 'NUL' is used.

Wednesday, May 15, 2019 - 6:21:31 AM - Tamil Selvan Back To Top (80083)

What is the Advantage and Disadvantage for determine the SQL server Database Transaction Log Usage?

Thank You,


Wednesday, November 21, 2018 - 9:08:42 PM - michele salerno Back To Top (78305)

 Good afternoon,

Excellent article.  Question:

when I run dbcc loginfo, I get this line

recovery unit =0 

FileId = 2 

Filesize = 39594229760 

Startoffset = 10737418240 

FSeqNo = 225 

Status = 2 

Parity = 128 

CreateLSN = 70000062929500020


how do I find the process that is holding this log in mssql?

In advance, thank you for your help in this matter



Thursday, June 9, 2016 - 10:56:02 AM - bala Back To Top (41652)


I've the same issue but its not resolved. If transaction log grows bigger,

My Scenario -

1. Daily rebuild index from maintenance plan.

2. Taking backup (.BAK) & transaction log (.trn) backups, both from the same maintenance plan. 


Now I've changed my maintenance plans and created two maintenance plans, one for the backup (.BAK) and the other for the transaction log (.trn),

it resolved my issue..




Wednesday, January 27, 2016 - 5:26:14 PM - Rose Back To Top (40517)


 Thanks. Very explanatory...

Tuesday, January 20, 2015 - 6:41:18 AM - pl80 Back To Top (35995)

This doesn't answer one key question: "which transaction has caused a high log usage?".  DBCC OPENTRAN is not the answer, as the longest running transaction might not be the one using up the most log space.  Is there a way to pinpoint the transaction responsible?  Tx.

Monday, September 16, 2013 - 2:08:07 PM - Greg Robidoux Back To Top (26821)

@Ed - take a look a this tip about VLFs and performance:

Thursday, August 22, 2013 - 1:42:12 PM - Ed Back To Top (26438)

Hi, I understand how db engine uses tran log, but I do not understand how would increased number of VLFs could affect transaction log backups or recovery performance?

Friday, August 9, 2013 - 11:12:04 AM - Greg Robidoux Back To Top (26186)

@Sara - it is hard to say because it depends on how large the rows are that are being inserted, so the transaction log may need a lot of space even if you only have one table.

A general rule is to make the transaction log around 20-25% of the data file, but again this all depends on the database and how large the largest table is that you have in your database.

Run the DBCC SQLPERF(logspace) command to see how much space is actually being used and also compare this to your overall database size.

You should also check the RECOVERY MODEL of the database.  It could be in FULL recovery, and if you are not doing transaction log backups the log will just keep growing.  See this for more info:

Friday, August 9, 2013 - 6:44:45 AM - sara Back To Top (26178)


Log Size 1008.242MB for a db witch is having a single table with approximately 30 individual insertion is reasonable?

Monday, June 10, 2013 - 11:29:37 PM - Special Back To Top (25382)

What are reason of Model database become full ?

Saturday, January 5, 2013 - 4:50:08 AM - hassan Back To Top (21269)

Issue with query execution speed!

Does this cause database execution to become slow?

When I execute "dbcc loginfo" for my database I get 307 rows! Does it mean something is wrong, and could it reduce execution speed?

Thursday, October 4, 2012 - 5:24:27 PM - Greg Robidoux Back To Top (19793)

@Chiranjib Saha Roy - based on your scenario, if transaction logs were missing and you only had full backups and transaction log backups, the best you will be able to do is restore your full backup and all of the transaction log backups you have that were created in sequence.

So if you have 12 transaction log backups and #6, 7, 8 are missing, the best you can do is restore #1, 2, 3, 4, 5 and recover only the data up to that point.

Hope this answers your question.

Thursday, October 4, 2012 - 2:49:58 PM - Chiranjib Saha Roy Back To Top (19791)

Issue with restoration

Suppose my database is configured with the full recovery model. A full backup is taken at 1 AM every night, and transaction log backups are created every 30 minutes after that.

Scenario: the database became corrupted at 10:01 AM. The DBA saw that the last transaction log backup was there, but 7 transaction log backups before it were missing. How does the DBA restore the database with no data loss?

any clarification require, please feel free to contact 

Tuesday, May 29, 2012 - 6:54:19 PM - Yana Gudev Back To Top (17709)

Thanks!  How do I go about checking how much space an individual transaction takes in the log?


Wednesday, April 18, 2012 - 9:39:16 PM - Sri Back To Top (16995)

Thank you for this post Greg!

Very straightforward and clear and not to mention helpful!


Thursday, April 5, 2012 - 8:10:50 AM - Bhupendra Back To Top (16788)

awesome.. it really helps.. great article
