Verbose SQL Server Agent Logging


By: Tim Ford   |   Read Comments (5)   |   Related Tips: More > SQL Server Agent



Problem

How often have you reviewed a SQL Server Agent job's history to determine why a job failed, only to be confronted with a minimal level of detail?  When a job failure occurs, it is imperative to quickly identify the root cause in order to determine the correct remediation.  So what are some of the options for consolidating logging on a per-job basis?  How can I set up this functionality for my key SQL Server Agent jobs?

Solution

Before we get too far, let's start with an example to paint the picture of a backup job failure, then jump into the mechanics of how to capture the complete data for troubleshooting purposes.  Here is a sample failure:

[Screenshot: SQL Server Agent Job Failure Notification]

[Screenshot: Log File Viewer - Backup Job Failure]

Configuring the SQL Server Agent Job

For the sake of this tip I'll be using the SQL Server Management Studio (SSMS) GUI.  I'll then show you the pertinent T-SQL.  Let's jump into how to configure the job step.

SQL Server Agent Advanced tab - Open the SQL Agent job you want to add verbose logging to and navigate to the Steps page.  If you have multiple steps, edit the first step; in particular, we are going to make changes on the Advanced tab of the step.

 
Specifying the output file - Using the Output file field, enter an existing directory path (I tend to use the SQL Server log directory so I remember where these log files are located) and append the name of an existing or new file to use for the verbose logging.  If the file does not exist, it will be created for you the first time the job runs.
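If you prefer to apply the same setting via T-SQL, the Output file field maps to the @output_file_name parameter of msdb.dbo.sp_update_jobstep.  A minimal sketch, assuming a job named DB Backup and a default-instance log directory (substitute your own job name, step id, and path):

    -- Point step 1 of the job at a verbose log file.
    -- Job name and file path below are examples only.
    EXEC msdb.dbo.sp_update_jobstep
        @job_name = N'DB Backup',
        @step_id = 1,
        @output_file_name = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\DB_Backup.Log';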

 
Output file contents - Based upon the configuration above, here are the contents of the DB_Backup.Log file after I re-execute the job:

 
Troubleshooting - From the verbose output I was able to determine that the job failed because the dump device did not exist; the device name was incorrect.  Let's assume I correct the name of the backup device in step 1 and also change the On success action to "Go To Next Step".
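For context, the corrected step 1 command simply points the backup at a valid file or device.  A hedged example, with an illustrative database name and backup path:

    -- Corrected step 1: back up to a path that actually exists.
    -- Database name and backup location are illustrative.
    BACKUP DATABASE AdventureWorks
    TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\AdventureWorks.bak'
    WITH INIT;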
 
Adding a second step - I will now add a second step to the job that restores a copy of the database to a different database name if step 1 succeeds.  Here is what the job step screens look like for that step.
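In T-SQL terms, the restore command behind that second step would resemble the following sketch.  The logical file names and target paths are assumptions; verify yours with RESTORE FILELISTONLY before using anything like this:

    -- Step 2: restore the fresh backup as a copy under a new name.
    -- Logical file names and target paths are assumptions.
    RESTORE DATABASE AdventureWorks_Copy
    FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\AdventureWorks.bak'
    WITH MOVE N'AdventureWorks_Data' TO N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_Copy.mdf',
         MOVE N'AdventureWorks_Log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_Copy_log.ldf',
         REPLACE;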



 
Final output - As you can see, I checked the Append output to existing file checkbox on the second step.  Doing so appends the results of the second step to the same file that holds the results of the first step.  Now, when I execute the job, the verbose log looks like this:

 

A few notes about the Append output to existing file checkbox

  • If you want to maintain a historical record of job executions, check this box for each step.  The file will not be overwritten each time the job executes.
  • If you are only interested in keeping a log of the last execution of the job, leave the checkbox unchecked.
  • When I have a multi-step job (like the example I've provided), I log all steps to a single file: I check the Append output to existing file checkbox for every step that executes after the initial step.  This results in a log file containing all of the job steps from the last execution.  Each time the first step executes (each time the job is kicked off) the file is overwritten, so I always have a record of the last set of output.  The equivalent T-SQL is sketched just after this list.
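As promised, here is the pertinent T-SQL for the checkbox.  It maps to the @flags parameter of msdb.dbo.sp_add_jobstep and sp_update_jobstep, where a value of 2 means append to the output file.  A minimal sketch, again with placeholder names:

    -- Step 2 logs to the same file as step 1, in append mode.
    -- @flags = 2 corresponds to the 'Append output to existing file' checkbox.
    EXEC msdb.dbo.sp_update_jobstep
        @job_name = N'DB Backup',
        @step_id = 2,
        @output_file_name = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\DB_Backup.Log',
        @flags = 2;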
Next Steps
  • There are more opportunities for using this functionality than just debugging failed jobs.  I have situations where customers want query output periodically dropped to a network share for mail merging with Microsoft Office documents or similar processes.  Using the process outlined in this tip, I can easily set up a job that runs a query on a regular basis and stores the result set on a shared drive accessible to the users; a sketch of such a job follows this list.  This is a very convenient, and often overlooked, aspect of SQL Server Agent.
  • Check out all of the valuable SQL Server Agent tips!
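For the first bullet above, here is a sketch of a complete job that drops query results on a share.  Every name, the query, and the UNC path are hypothetical, and the SQL Server Agent service account needs write access to the share:

    -- Create a job whose single step writes its result set to a network share.
    EXEC msdb.dbo.sp_add_job
        @job_name = N'Export - Customer List';

    EXEC msdb.dbo.sp_add_jobstep
        @job_name = N'Export - Customer List',
        @step_name = N'Run query',
        @subsystem = N'TSQL',
        @database_name = N'SalesDB',
        @command = N'SELECT CustomerID, CustomerName FROM dbo.Customer;',
        @output_file_name = N'\\FileServer\Reports\CustomerList.txt';

    -- Schedule: daily at 6:00 AM.
    EXEC msdb.dbo.sp_add_jobschedule
        @job_name = N'Export - Customer List',
        @name = N'Daily 6 AM',
        @freq_type = 4,              -- daily
        @freq_interval = 1,          -- every 1 day
        @active_start_time = 060000; -- HHMMSS

    EXEC msdb.dbo.sp_add_jobserver
        @job_name = N'Export - Customer List',
        @server_name = N'(LOCAL)';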





About the author
Tim Ford is a Senior Database Administrator and SQL Server MVP. He has been working with SQL Server since 1999.


Wednesday, January 28, 2015 - 10:13:26 PM - DVP Rao

Hi:

This is a useful tip.  Thank you.

But how do we do this for DB backup jobs created as a maintenance plan?  And for native SQL backups?

It looks like the output is getting truncated ...


Thursday, February 07, 2013 - 4:28:24 AM - zambig

Do you think this can be adapted to scripts or SPs as well?

I have the same need (to save messages) but I'm not using a Job.

Thanks


Wednesday, January 23, 2013 - 6:35:21 PM - Ludwig Guevara

Hi,

Not too often.  When a job fails, it's better to receive an email with the issue instead of going to check the failed job's history every time.  I add an extra "if the job fails" step to most of my jobs to monitor them: create a stored procedure with the code below, which looks for the last known job issue, parses it to remove SQL noise (see the REPLACE calls), and sends an email.

SELECT TOP 1 CONVERT( varchar(MAX),
            REPLACE( REPLACE( REPLACE( REPLACE( REPLACE( REPLACE( REPLACE(
            REPLACE( REPLACE( jh.message, '--', '' ), '[SQLSTATE 01000]', '[SQL]' ), 'Progress', 'Prg' )
                    , 'Started', 'Sta' ), 'End Prg', 'EPrg' ), 'Executing query', 'Eqry' )
                    , 'Source', 'Src' ), '(Message 0)', '' ), '[SQLSTATE 40001] (Error 1205)', 'FStp' ) )
        FROM msdb.dbo.sysjobs j
        INNER JOIN msdb.dbo.sysjobhistory jh ON j.job_id = jh.job_id
        WHERE jh.run_status = 0  -- 0 = failed
            AND jh.run_date = CONVERT(varchar, GETDATE(), 112)  -- today, as YYYYMMDD
            AND jh.run_time > REPLACE( CONVERT(varchar, DATEADD( n, -@Minutes, GETDATE() ), 108 ), ':', '' )  -- within the last @Minutes
            AND j.job_id = @JobID
        ORDER BY jh.instance_id DESC  -- most recent history row first
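Put together, the stored procedure might look something like this sketch (procedure name, mail profile, and recipient are placeholders, and Database Mail must already be configured):

    -- Sketch of the monitoring step's procedure; names are placeholders.
    CREATE PROCEDURE dbo.usp_EmailLastJobFailure
        @JobID   uniqueidentifier,
        @Minutes int
    AS
    BEGIN
        DECLARE @msg varchar(MAX);

        -- Grab the most recent failure message for this job within the window.
        -- The REPLACE chain from the query above can be wrapped around jh.message.
        SELECT TOP 1 @msg = CONVERT( varchar(MAX), jh.message )
        FROM msdb.dbo.sysjobs j
        INNER JOIN msdb.dbo.sysjobhistory jh ON j.job_id = jh.job_id
        WHERE jh.run_status = 0
            AND jh.run_date = CONVERT(varchar, GETDATE(), 112)
            AND jh.run_time > REPLACE( CONVERT(varchar, DATEADD( n, -@Minutes, GETDATE() ), 108 ), ':', '' )
            AND j.job_id = @JobID
        ORDER BY jh.instance_id DESC;

        IF @msg IS NOT NULL
            EXEC msdb.dbo.sp_send_dbmail
                @profile_name = 'DBA Mail',          -- hypothetical Database Mail profile
                @recipients   = 'dba@example.com',   -- hypothetical recipient
                @subject      = 'SQL Agent job failure',
                @body         = @msg;
    END;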


Monday, January 14, 2008 - 8:02:42 PM - timmer26

110!  Wow, I thought I had it bad being the only DBA in charge of 800+ databases on 70 instances!  I am probably in the same situation you find yourself in - too much information and not enough time to read up on it all.  I've not even cracked the shrink wrap on the CTPs, nor have I sat in on any 2008 sessions at PASS, TechEd, or Connections.  I have run across a work-around for cleanup when it comes to deleting backups created by a custom DB backup job, similar to the "Delete Files Older Than XX hours/days/etc." option from SQL 7.0 and 2000.  If you're interested I can write that up as a tip for February, as it is a little too much info to throw into a forum post.

Unfortunately, the relationship between job and output file is 1:1.  Short of rescripting the job prior to each execution, I do not see a way to maintain a separate log file for each time the job is run.  You've raised a good point, and it may force me to unwrap that latest CTP.  I'll post back here if I come up with anything for 2005 or 2008.

 Good luck with those 110 instances.

 - Tim


Monday, January 14, 2008 - 11:25:13 AM - greganson

Very nice article, nice formatting, etc.

The only thing missing from 2005 and all related articles regarding output is the ability to clean up files after a job or job step completes.  For instance, I would like to enable verbose logging for each job, for each time the job runs (a new file with a timestamp is created), and then have the job/step delete anything older than 3 weeks.  And I would like to be able to configure this within the same GUI (or drill-down) that is in the maintenance plan wizard.  This still seems to be a very inconvenient function missing from 2005...and something that you shouldn't have to do with another job with a cleanup step, etc.  I want to build jobs, configure what I want to see out of them and for how long I want to see it, and then let the job maintain space on my drives - set-it-and-forget-it mentality, just like we had in SQL 2000.

 Do you know if this is better in SQL 2008?  I haven't had a chance to load the CTP code yet.  I'm too busy monitoring space on my other 110 SQL instances!!

