Verbose SQL Server Agent Logging

By: Tim Ford   |   Comments (7)   |   Related: > SQL Server Agent


Problem

How often have you reviewed a SQL Server Agent job's history to determine why a job failed, only to be confronted with a minimal level of detail? When a job failure occurs, it is imperative to quickly identify the root cause in order to determine the correct remediation. So what are some of the options for capturing more detailed logging on a per-job basis, and how can I set up this functionality for my key SQL Server Agent jobs?

Solution

Before we get too far, let's start with an example to paint the picture of a backup job failure, then jump into the mechanics of how to capture the complete data for troubleshooting purposes.

Here is a sample SQL Server Agent Job Failure Notification.

[Screenshot: job step failure notification]

Here are the details of Log File Viewer for the Backup Job Failure.

[Screenshot: job step failure in Log File Viewer]

As you can see, neither of these gives much information about why it failed.
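
For reference, the same sparse detail is visible if you query the job history tables in msdb directly. This is just a sketch; the job name below is a hypothetical placeholder:

-- A sketch of pulling the (truncated) history straight from msdb.
-- The job name is a hypothetical placeholder.
SELECT TOP (10)
       j.name,
       h.step_id,
       h.run_status,   -- 0 = failed, 1 = succeeded
       h.message       -- note how little detail is captured here
FROM msdb.dbo.sysjobhistory AS h
INNER JOIN msdb.dbo.sysjobs AS j
    ON j.job_id = h.job_id
WHERE j.name = N'DB_Backup'   -- hypothetical job name
ORDER BY h.instance_id DESC;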

Configuring the SQL Server Agent Job to Capture More Information

For the sake of this tip I'll be using the SQL Server Management Studio (SSMS) GUI. Let's jump into how to configure the job step.

SQL Server Agent Advanced tab

Open up the SQL Agent job you want to add verbose logging to and navigate to the Steps page. If you have multiple steps, edit the first step; in particular, we are going to make changes on the Advanced tab of the step.

[Screenshot: SQL Agent job step]

Specify the Output File

In the Output file field, enter an existing directory path (I tend to use the SQL Server log directory so I remember where these log files are located), then append the name of the file to use for the verbose logging; the file itself does not have to exist yet. If the file does not exist, one will be created for you the first time the job is run:

[Screenshot: SQL Agent job step]
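
If you prefer scripting to the GUI, the same setting can be applied with msdb.dbo.sp_update_jobstep. This is only a sketch; the job name and file path are hypothetical placeholders for your environment:

-- A sketch: set the job step's output file via msdb instead of SSMS.
-- Job name and path are hypothetical placeholders.
EXEC msdb.dbo.sp_update_jobstep
    @job_name         = N'DB_Backup',
    @step_id          = 1,
    @output_file_name = N'C:\SQLLogs\DB_Backup_Log.txt';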

Output File Contents

Based upon the screenshot above, these are the contents of the DB_Backup_Log.txt file after I re-execute the job:

[Screenshot: SQL Agent job output]

Troubleshooting

Using the verbose output file, I was able to determine the reason the job failed: the database name in step 1 was incorrect. Let's assume I correct the name of the database in step 1 and also change the On Success action to "Go To Next Step".
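
For what it's worth, the On Success action can also be changed by script with msdb.dbo.sp_update_jobstep, where an on_success_action of 3 means "Go to the next step" (a sketch; the job name is a hypothetical placeholder):

-- A sketch: 3 = "Go to the next step" on success.
-- Job name is a hypothetical placeholder.
EXEC msdb.dbo.sp_update_jobstep
    @job_name          = N'DB_Backup',
    @step_id           = 1,
    @on_success_action = 3;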

Adding a second job step

I will now add a second step to the job that restores a copy of the database to a different database name if step 1 succeeds. Here is what the job step screens look like for that step.

[Screenshots: SQL Agent job step]
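
For context, here is a minimal sketch of the kind of command step 2 might run. The database name, backup path, and logical file names are all hypothetical; check the logical names in your own backup with RESTORE FILELISTONLY before using MOVE:

-- A sketch: restore a copy of the database under a new name.
-- All names and paths are hypothetical placeholders.
RESTORE DATABASE AdventureWorks_Copy
FROM DISK = N'C:\Backups\AdventureWorks.bak'
WITH MOVE N'AdventureWorks_Data' TO N'C:\Data\AdventureWorks_Copy.mdf',
     MOVE N'AdventureWorks_Log'  TO N'C:\Data\AdventureWorks_Copy_log.ldf';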

Final Output

As you can see, I checked the Append output to existing file checkbox on the second step above. Doing so appends the results of the second step to the same file that contains the results of the first step.

Now, when I execute the job, the verbose log looks like this, showing both the backup and restore operations.

[Screenshot: verbose logging for multiple steps]

Notes about Append output to existing file checkbox

  • If you want to maintain a historical record of job executions, check this box for each step. The file will not be overwritten each time the job executes.
  • If you are only interested in keeping a log of the last execution of the job, leave the checkbox unchecked.
  • When I have a multiple-step job (like the example I've provided you) I log all steps to a single file: I check the 'Append output to existing file' checkbox for all steps in the job that execute after the initial step. This results in a log file containing all of the job steps from the last job execution. Each time the first step executes (each time the job is kicked off) the file is overwritten, so I have a record of the last set of output; see the scripted equivalent below.
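
If you manage many jobs, note that the checkbox corresponds to the @flags parameter of msdb.dbo.sp_update_jobstep, where a value of 2 appends to the output file. A sketch, with a hypothetical job name:

-- A sketch: @flags = 2 appends step output to the existing file
-- (the scripted equivalent of the checkbox). Job name is hypothetical.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'DB_Backup',
    @step_id  = 2,
    @flags    = 2;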
Next Steps
  • There are more opportunities for utilizing this functionality than just debugging failed jobs. I have situations where customers want the output of queries periodically dropped to a network share for the purpose of mail merging with Microsoft Office documents or similar processes. Using the process outlined in this tip, I can easily set up a job that runs a query on a regular basis and stores the result set on a shared drive accessible to the users (see the sketch after this list). This is a very convenient, and often overlooked, aspect of SQL Server Agent.
  • This should work fine with SQL Server 2005 and later.  This was last tested on SQL Server 2017.
  • Check out all of the valuable SQL Server Agent tips!
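
As a rough sketch of the network-share idea from the first bullet above (every name below, including the share path, is hypothetical), a T-SQL job step can simply point its output file at a UNC path. Keep in mind the SQL Server Agent service account needs write permission on that share:

-- A sketch: a job step that runs a query and drops the result set
-- on a network share via the output file. All names are hypothetical.
EXEC msdb.dbo.sp_add_jobstep
    @job_name         = N'Export_Customer_List',   -- job must already exist
    @step_name        = N'Run export query',
    @subsystem        = N'TSQL',
    @database_name    = N'SalesDB',
    @command          = N'SELECT CustomerID, CustomerName FROM dbo.Customers;',
    @output_file_name = N'\\FileServer\Reports\CustomerList.txt';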





About the author
Tim Ford is a Senior Database Administrator with MindBody.

This author pledges the content of this article is based on professional experience and not AI generated.




Comments For This Article




Wednesday, September 9, 2020 - 5:56:24 AM - Thomas Franz Back To Top (86439)
If you want to delete the job output files, you could have a look at
https://www.mssqltips.com/sqlservertip/1618/using-the-forfiles-command-to-delete-sql-server-backups/
(if it can delete a backup file, it can delete a log file too).

Or you can create a command line job (step) with the following command (it deletes all *_*_*_*.txt files older than 30 days from your default log directory):
cmd /q /c "For /F "tokens=1 delims=" %v In ('ForFiles /P "$(ESCAPE_SQUOTE(SQLLOGDIR))" /m *_*_*_*.txt /d -30 2^>^&1') do if EXIST "$(ESCAPE_SQUOTE(SQLLOGDIR))"\%v echo del "$(ESCAPE_SQUOTE(SQLLOGDIR))"\%v& del "$(ESCAPE_SQUOTE(SQLLOGDIR))"\%v"

Tuesday, January 29, 2019 - 4:39:55 AM - Thomas Franz Back To Top (78900)

The following statement sets the output file for every job step on your SQL Server. The files will be stored in the usual log directory, and the filename will be <JobName>_Step_<StepID>_<StartDate>_<StartTime>.txt (e.g. Log_DatabaseBackup_Step_1_20190129_103700.txt or Index_Optimize_Step_4_20190112_100000.txt). The statement can be executed multiple times (e.g. after creating a new job or renaming an existing one).

-- Caution: output_file_name is nvarchar(200); if the JobName (and/or the LogPath) is too long, the statement will fail
UPDATE sj
   SET output_file_name = c2.output_file_name
  FROM msdb.dbo.sysjobsteps sj
 INNER JOIN msdb.dbo.sysjobs AS s
    ON s.job_id = sj.job_id
 CROSS JOIN (SELECT CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(1024)) ErrorLogFileName,
                    ISNULL(CAST(SERVERPROPERTY('ProductMajorVersion') AS VARCHAR(10)), 0) ProductMajorVersion
            ) calc
 CROSS APPLY (SELECT CASE WHEN calc.ProductMajorVersion >= 11
                          THEN CAST('$(ESCAPE_SQUOTE(SQLLOGDIR))\' AS NVARCHAR(1024))
                          ELSE SUBSTRING(calc.ErrorLogFileName, 1, LEN(calc.ErrorLogFileName) - CHARINDEX('\', REVERSE(calc.ErrorLogFileName))) + '\'
                     END
                   + REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
                             CASE WHEN s.name LIKE 'DatabaseBackup - USER_DATABASES - LOG%'
                                  THEN 'Log_DatabaseBackup'
                                  ELSE s.name
                             END, ' ', '_'), '/', '-'), '\', '-'), ':', '_'), '?', '_'), '*', '_')
                   + '_Step_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(STRTDT))_$(ESCAPE_SQUOTE(STRTTM)).txt' AS output_file_name
               ) c2
 WHERE sj.output_file_name <> c2.output_file_name
    OR sj.output_file_name IS NULL


Wednesday, January 28, 2015 - 10:13:26 PM - DVP Rao Back To Top (36094)

Hi:

This is a useful tip. Thank you.

But how do we do this for DB backup jobs created as a maintenance plan? For native SQL backups?

It looks like the output is getting truncated ...


Thursday, February 7, 2013 - 4:28:24 AM - zambig Back To Top (21964)

Do you think this can be adapted to scripts or SPs as well?

I have the same need (to save messages) but I'm not using a Job.

Thanks


Wednesday, January 23, 2013 - 6:35:21 PM - Ludwig Guevara Back To Top (21665)

Hi,

Not too often. When a job fails, it's better to receive an email with the issue instead of going to check the history of the failed job every time. I add an extra "if the job fails" step to most of my jobs to monitor them: create a stored procedure with the code below, which looks for the last known job issue, parses it to remove SQL noise (see the REPLACE calls), and sends an email.

SELECT TOP 1 CONVERT( varchar(MAX),
            REPLACE( REPLACE( REPLACE( REPLACE( REPLACE( REPLACE( REPLACE(
            REPLACE( REPLACE( jh.Message, '--','' ), '[SQLSTATE 01000]', '[SQL]' ), 'Progress', 'Prg' )
                    , 'Started', 'Sta' ), 'End Prg', 'EPrg' ), 'Executing query', 'Eqry' )
                    , 'Source', 'Src' ), '(Message 0)', '' ), '[SQLSTATE 40001] (Error 1205)', 'FStp' ) )
        FROM msdb.dbo.sysjobs j
        INNER JOIN msdb.dbo.sysjobhistory jh ON j.job_id = jh.job_id
        WHERE jh.run_status = 0  -- failed executions only
            AND jh.run_date = CONVERT(varchar, getdate(), 112)
            AND jh.run_time > REPLACE( CONVERT(varchar, DATEADD( n, -@Minutes, getdate() ), 108), ':', '')
            AND j.job_id = @JobID
        ORDER BY jh.instance_id DESC  -- most recent entry first


Monday, January 14, 2008 - 8:02:42 PM - timmer26 Back To Top (219)

110!  Wow, I thought I had it bad being the only DBA in charge of 800+ databases on 70 instances!  I am probably in the same situation you find yourself in - too much information and not enough time to read up on it all.  I've not even cracked the shrink wrap on any of the CTPs, nor have I sat in on any 2008 sessions at PASS, TechEd, or Connections.  I have run across a work-around for cleanup when it comes to deleting backups created by a custom DB backup job; similar to the "Delete Files Older Than XX hours/days/etc." from SQL 7.0 and 2000.  If you're interested I can throw that into a tip for February as it is a little too much info to throw into a forum.

Unfortunately, the relationship between Job and Output File is 1:1. Short of rescripting the job prior to each execution I do not see a way to maintain separate log files for each time the job is run. You've raised a good point and it may force me to unwrap that latest CTP. I'll post back here if I come up with anything for 2005 or 2008.

 Good luck with those 110 instances.

 - Tim


Monday, January 14, 2008 - 11:25:13 AM - greganson Back To Top (217)

Very nice article, nice formatting, etc.

The only thing missing from 2005 and all related articles regarding output is the ability to clean up files after a job or job step completes. For instance, I would like to enable verbose logging for each job, for each time the job runs (a new file with a timestamp is created), and then have the job/step delete anything older than 3 weeks. And I would like to be able to configure this within the same GUI (or drill down) that is in the maintenance plan wizard. This still seems to be a very inconvenient function missing from 2005...and something that you shouldn't have to do with another job with a cleanup step, etc. I want to build jobs, configure what I want to see out of them, for the time that I want to see them, and then let the job maintain space on my drives...set it and forget it mentality...just like we had in SQL 2000.

 Do you know if this is better in SQL 2008?  I haven't had a chance to load the CTP code yet.  I'm too busy monitoring space on my other 110 SQL instances!!














