
The transaction log for the database is full

I have a long running process that holds open a transaction for the full duration.

I have no control over the way this is executed.

Because a transaction is held open for the full duration, when the transaction log fills, SQL Server cannot increase the size of the log file.

So the process fails with the error "The transaction log for database 'xxx' is full".

I have attempted to prevent this by increasing the size of the transaction log file in the database properties, but I get the same error.

Not sure what I should try next. The process runs for several hours, so it's not easy to use trial and error.

Any ideas?

If anyone is interested, the process is an organisation import in Microsoft Dynamics CRM 4.0.

There is plenty of disk space, we have the log in simple logging mode and have backed up the log prior to kicking off the process.

-=-=-=-=- UPDATE -=-=-=-=-

Thanks all for the comments so far. The following is what led me to believe that the log would not grow due to the open transaction:

I am getting the following error...

Import Organization (Name=xxx, Id=560d04e7-98ed-e211-9759-0050569d6d39) failed with Exception:
System.Data.SqlClient.SqlException: The transaction log for database 'xxx' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

So following that advice I went to "log_reuse_wait_desc column in sys.databases" and it held the value "ACTIVE_TRANSACTION".
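For anyone wanting to run the same check, it's a straightforward query against sys.databases:

-- Shows why log space cannot currently be reused
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'xxx';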

According to Microsoft: http://msdn.microsoft.com/en-us/library/ms345414(v=sql.105).aspx

That means the following:

A transaction is active (all recovery models).

• A long-running transaction might exist at the start of the log backup. In this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions," later in this topic.

• A transaction is deferred (SQL Server 2005 Enterprise Edition and later versions only). A deferred transaction is effectively an active transaction whose rollback is blocked because of some unavailable resource. For information about the causes of deferred transactions and how to move them out of the deferred state, see Deferred Transactions.

Have I misunderstood something?

-=-=-=- UPDATE 2 -=-=-=-

Just kicked off the process with initial log file size set to 30GB. This will take a couple of hours to complete.

-=-=-=- Final UPDATE -=-=-=-

The issue was actually caused by the log file consuming all available disk space. In the last attempt I freed up 120GB and it still used all of it and ultimately failed.

I didn't realise this was happening previously because when the process was running overnight, it was rolling back on failure. This time I was able to check the log file size before the rollback.

Thanks all for your input.

re "...and have backed up the log"....if the database is in Simple mode, you wouldn't be able to backup up the log, log backups are not applicable for simple mode. Is it bulk-logged?
I backed up the entire DB and shrunk it, which resulted in the log shrinking to 1 MB. I then increased the size of the log file to 20 GB initially, and now 30 GB.

meh-uk

To fix this problem, change the Recovery Model to Simple, then shrink the log file:

1. Database Properties > Options > Recovery Model > Simple

2. Database Tasks > Shrink > Files > Log

Done.

Then check your DB log file size at Database Properties > Files > Database Files > Path.

To check the full SQL Server log, open the Log File Viewer at SSMS > Database > Management > SQL Server Logs > Current.
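If you prefer T-SQL to the GUI for the size check, a standard command (not part of the original steps) is:

-- Reports log size and percentage in use for every database
DBCC SQLPERF(LOGSPACE);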


No, that doesn't fix the problem. The problem was that the log file grew during a long-running process until it ran out of disk space. It was corrected by temporarily moving the log file to another drive that had 1TB of space available. You can't shrink the log file while a long-running process that is holding open a transaction is in progress. That process was solely responsible for the file growth.
As @Jimbo already said, this does not fix the OP's problem. It may free up some currently unused space, but as soon as a long transaction is running again, the space will be taken up again (and probably fail even earlier)
That didn't fix the problem. My log is only 500 bytes. I think this problem started after I did a backup yesterday.
This is definitely the fix if you have a couple megabytes left to spare on the full drive.
It might fix A problem. It does not fix the problem reported in the OP.
PretoriaCoder

I had this error once and it ended up being the server's hard drive that had run out of disk space.


Read the OP's updates. This turned out to be the issue.
That was the case for me as well. I found a completely unrelated program had created a ridiculously huge log file (over 400 GB, holy smoke!) and filled up all remaining hard drive space! Once I cleared that up, the issue went away.
Mike Henderson

Is this a one-time script, or a regularly occurring job?

In the past, for special projects that temporarily require lots of space for the log file, I created a second log file and made it huge. Once the project was complete, we removed the extra log file.
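A minimal sketch of that approach, assuming a database named YourDB and a second drive mounted at E:\ (both names are placeholders):

-- Add a large temporary log file on the bigger drive
ALTER DATABASE YourDB
ADD LOG FILE (
    NAME = YourDB_log2,
    FILENAME = 'E:\SQLLogs\YourDB_log2.ldf',
    SIZE = 50GB,
    FILEGROWTH = 4GB
);

-- Once the project is complete and the file no longer holds the
-- active portion of the log, it can be removed again
ALTER DATABASE YourDB REMOVE FILE YourDB_log2;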


I wouldn't say it's a one-time job, but it is rare that we have to do it. I didn't create a second log file, but I did increase the initial size of my current log file to 30GB. During my last run it was set to 20GB and it still failed.
Would having a second log file be better somehow than having one big one given that I only have one drive to work with?
As I recall now, the additional file mostly enabled us to access another, bigger drive.
How big is the data being imported? If you're importing 30 GB of data, your log file may need to be at least as big.
Log size is the key. The current task failed again and I couldn't believe my eyes when I saw the size of the log file at the point that it failed. It only processed half of the accounts and was already at 53GB. It looks like I'm going to have to clear somewhere in the vicinity of another 60-70GB to be able to complete this process.
Ross McNab

Do you have Enable Autogrowth and Unrestricted File Growth both enabled for the log file? You can edit these via SSMS in "Database Properties > Files"
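The same settings can be checked from T-SQL via the standard catalog view (max_size = -1 means unrestricted growth):

-- Run in the affected database; shows growth settings for the log file
SELECT name, growth, is_percent_growth, max_size
FROM sys.database_files
WHERE type_desc = 'LOG';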


Yes. It's set to autogrow 10%, unrestricted. The issue is that autogrow won't work while there is an open transaction.
Do you have any idea how big the transaction will be? Try setting the transaction log size bigger than that estimate. If disk allocation is not an issue, allocate plenty of space at the beginning, for data and log alike; it improves performance. Don't autogrow by 10%, grow by a few GB at a time, so performance will be good enough.
SQL Server will autogrow the log during a transaction if it needs more space to complete that transaction.
Hi Ross, I've provided my logic for thinking the open transaction is preventing the growth in an update to the question. Am I incorrect in my reasoning?
@Jimbo SQL Server doesn't require you to have it reserved. If you have autogrow, SQL Server does the autogrow during the transaction. Having it big enough can save a lot of time, but it shouldn't affect the process.
Brian

This is an old-school approach, but if you're performing an iterative update or insert operation in SQL, something that runs for a long time, it's a good idea to periodically (programmatically) call CHECKPOINT. Calling CHECKPOINT causes SQL Server to write all of those memory-only changes (dirty pages, they're called) to disk, and in the simple recovery model it allows committed transaction log records to be reused. This has the effect of cleaning out your transaction log periodically, thus preventing problems like the one described.
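A minimal sketch of that pattern, assuming a hypothetical dbo.Staging table and work that can be committed in batches (note that CHECKPOINT cannot free log space held by one transaction spanning the whole run):

WHILE 1 = 1
BEGIN
    -- Each batch is its own autocommitted transaction
    UPDATE TOP (10000) dbo.Staging
    SET Processed = 1
    WHERE Processed = 0;

    IF @@ROWCOUNT = 0 BREAK;

    CHECKPOINT; -- flush dirty pages; in SIMPLE recovery the inactive log becomes reusable
END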


Unfortunately I have no control over the way the process is performed. Dynamics CRM is a Microsoft application and the organisation import process is part of that application.
jww

The following will truncate the log.

USE [yourdbname] 
GO

-- TRUNCATE TRANSACTION LOG --
DBCC SHRINKFILE(yourdbname_log, 1)
BACKUP LOG yourdbname WITH TRUNCATE_ONLY
DBCC SHRINKFILE(yourdbname_log, 1)
GO

-- CHECK DATABASE HEALTH --
DBCC CHECKDB (N'yourdbname')
GO

Hey Pinal, this functionality was removed completely from SQL Server 2008 and above: brentozar.com/archive/2009/08/…
With later versions, try BACKUP LOG TO DISK=N'NUL:'
James K. Lowden

If your database recovery model is Full and you don't have a log backup maintenance plan, you will get this error because the transaction log becomes full due to LOG_BACKUP.

This will prevent any action on this database (e.g. shrink), and the SQL Server Database Engine will raise a 9002 error.

To overcome this behavior, I advise you to check the article The transaction log for database ‘SharePoint_Config’ is full due to LOG_BACKUP, which shows detailed steps to solve the issue.
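In short, when log_reuse_wait_desc shows LOG_BACKUP, taking a log backup frees the inactive portion of the log. A minimal sketch with a hypothetical database name and path:

-- Marks the inactive portion of the log as reusable
BACKUP LOG YourDB TO DISK = N'D:\Backups\YourDB_log.trn';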


Linh Dao

I met the error "The transaction log for database '...' is full due to 'ACTIVE_TRANSACTION'" while deleting old rows from tables of my database to free disk space. I realized that this error would occur if the number of rows to be deleted was bigger than 1000000 in my case. So instead of using one DELETE statement, I divided the delete task by using DELETE TOP (1000000).... statements.

For example:

instead of using this statement:

DELETE FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())

use the following statement repeatedly:

DELETE TOP(1000000) FROM Vt30 WHERE Rt < DATEADD(YEAR, -1, GETDATE())
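To run it repeatedly without manual intervention, the statement can be wrapped in a loop; this is a sketch using the table and column from the answer (each batch commits separately, so in the simple recovery model the log space can be reused between batches):

WHILE 1 = 1
BEGIN
    DELETE TOP (1000000) FROM Vt30
    WHERE Rt < DATEADD(YEAR, -1, GETDATE());

    IF @@ROWCOUNT = 0 BREAK; -- stop once nothing is left to delete
END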

Saadat

Try this:

USE YourDB;  
GO  
-- Truncate the log by changing the database recovery model to SIMPLE.  
ALTER DATABASE YourDB
SET RECOVERY SIMPLE;  
GO  
-- Shrink the truncated log file to 50 MB.  
DBCC SHRINKFILE (YourDB_log, 50);  
GO  
-- Reset the database recovery model.  
ALTER DATABASE YourDB
SET RECOVERY FULL;  
GO 

I hope it helps.


This was one of the first things attempted and it's even mentioned in the question text. This does not solve the problem of a single open transaction filling up the log and using all of the available disk space. This entire process took place in simple mode. However, you are one of many people who offered this exact answer having not read the question...
Elvin Aghammadzada

Adding to the answers above, I also want to mention that, if possible, you can free up space on the server to fix this issue. If the server is already full due to the database overflow, you can delete some unnecessary files from the SERVER where your DB is hosted. At least this temporarily fixes the issue and lets you query the DB.


True. In my case the server didn't have sufficient resources, so we had to plug in an external drive, which did the trick. Mostly, this was a difficult issue to investigate because we were surprised the log needed so much space, the process in question was a black box, and the whole thing rolled back before we had a chance to check on it.
Amir Astaneh

My problem was solved by executing limited deletes multiple times, like this:

Before

DELETE FROM TableName WHERE Condition

After

DELETE TOP(1000) FROM TableName WHERE Condition

Rohit Reddy

The answer to the question is not deleting the rows from a table; it is the tempDB space that is being taken up by an active transaction. This happens mostly when a merge (upsert) is being run, where we try to insert, update, and delete in one transaction. The only option is to make sure the DB is set to the simple recovery model and also to increase the file to the maximum space (add another filegroup). Although this has its own advantages and disadvantages, these are the only options.

The other option you have is to split the merge (upsert) into two operations: one that does the insert and the other that does the update and delete.
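A rough sketch of that split, using hypothetical dbo.Target and dbo.Source tables with Id and Value columns:

-- Update existing rows
UPDATE t
SET t.Value = s.Value
FROM dbo.Target AS t
JOIN dbo.Source AS s ON s.Id = t.Id;

-- Delete rows no longer present in the source
DELETE t
FROM dbo.Target AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.Source AS s WHERE s.Id = t.Id);

-- Insert new rows
INSERT INTO dbo.Target (Id, Value)
SELECT s.Id, s.Value
FROM dbo.Source AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id);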


If you read the question you'd know that the DB was already in simple recovery mode as this was happening. This does not help when you have one long running open transaction. The file continues to grow until the transaction is committed or rolled back. Read the first line of the question "I have a long running process that holds open a transaction for the full duration."
narawit1601

Here's my hero code. I faced this problem and used this code to fix it.

USE master;

-- Check why log space cannot be reused and whether CDC/replication is involved
SELECT
    name, log_reuse_wait, log_reuse_wait_desc, is_cdc_enabled
FROM
    sys.databases
WHERE
    name = 'XX_System';

SELECT DATABASEPROPERTYEX('XX_System', 'IsPublished');

-- Mark all pending replicated transactions as distributed, then remove replication
USE XX_System;
EXEC sp_repldone null, null, 0, 0, 1;
EXEC sp_removedbreplication XX_System;

-- Verify: no open transactions remain, and check log space usage
DBCC OPENTRAN;
DBCC SQLPERF(LOGSPACE);
EXEC sp_replcounters;

DBCC SQLPERF(LOGSPACE);

Please always put your answer in context instead of just pasting code. See here for more details.
Community

Try this:

If possible, restart the MSSQLSERVER and SQLSERVERAGENT services.