Stop job after specified run time

  • I'm testing a long running job and want to stop it after it has been running for, say, 2 hours. How can I do this automatically, i.e. without being around to manually stop it?

  • Should have added. The job is simply executing a stored procedure.

There's no way to do this within the query, so what you have to do instead is set up a monitoring job. Start it at the same time as the existing job, have it check the status, and then kill the process (I assume you mean a kill; nothing else I can think of) when it exceeds the time limit. Or, have a second job start 2 hours later, check whether the first is still running, and kill it if it is. In fact, I like that second idea better; a rough sketch of it is below.
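A minimal sketch of that second approach, assuming the long-running job is named 'MyLongJob' (a placeholder); note the later reply's point about joining to syssessions, since sysjobactivity can hold stale rows from earlier Agent sessions:

-- Single step of a watchdog job scheduled to start ~2 hours after the main job.
-- 'MyLongJob' is a placeholder name.
DECLARE @job_id uniqueidentifier =
    (SELECT job_id FROM msdb.dbo.sysjobs WHERE name = N'MyLongJob');

IF EXISTS (SELECT 1
           FROM msdb.dbo.sysjobactivity
           WHERE job_id = @job_id
             AND run_requested_date IS NOT NULL
             AND stop_execution_date IS NULL)
BEGIN
    EXEC msdb.dbo.sp_stop_job @job_name = N'MyLongJob';
END;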

----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
Product Evangelist for Red Gate Software

  • Thanks Grant. I presume there are sys tables for jobs and job activity. Not sure if I then do sp_stop_job or KILL process.

Create another job that runs every N minutes to monitor it, and execute msdb.dbo.sp_stop_job if it's been running longer than the desired time.

    The msdb.dbo.agent_datetime(run_date,run_time) function can be used to convert the integer run_date & run_time columns to datetime.
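For example, a quick illustration that lists recent job runs with proper datetimes:

SELECT j.name,
       msdb.dbo.agent_datetime(h.run_date, h.run_time) AS run_datetime
FROM msdb.dbo.sysjobhistory AS h
INNER JOIN msdb.dbo.sysjobs AS j
    ON j.job_id = h.job_id
ORDER BY run_datetime DESC;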

You could probably modify the following function (it determines whether a job is running) to take the job's start time into account and decide whether it has been running too long, or use a job_id parameter instead of job_name if you prefer.

CREATE FUNCTION [dba].[IsJobRunning]
(
    @JobName SYSNAME
)
RETURNS BIT
AS
BEGIN

    DECLARE @job_id uniqueidentifier = (SELECT job_id FROM msdb.dbo.sysjobs WHERE name = @JobName);
    DECLARE @isRunning BIT = 0;

    IF EXISTS ( SELECT *
                FROM msdb.dbo.sysjobs_view
                INNER JOIN msdb.dbo.sysjobactivity ON
                    sysjobs_view.job_id = sysjobactivity.job_id
                INNER JOIN msdb.dbo.syssessions ON
                    syssessions.session_id = sysjobactivity.session_id
                INNER JOIN (SELECT MAX(agent_start_date) AS max_agent_start_date
                            FROM msdb.dbo.syssessions
                           ) sess_max ON
                    syssessions.agent_start_date = sess_max.max_agent_start_date
                WHERE
                    sysjobs_view.job_id = @job_id AND
                    sysjobactivity.run_requested_date IS NOT NULL AND
                    sysjobactivity.stop_execution_date IS NULL
              )
    BEGIN
        SET @isRunning = 1;
    END

    RETURN @isRunning;

END

    It's important to join to syssessions: https://sqlstudies.com/2013/09/05/a-t-sql-query-to-get-current-job-activity/
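Putting it together, the monitoring job's step could look something like this sketch (the job name 'MyLongJob' and the 2-hour limit are placeholders):

DECLARE @JobName sysname = N'MyLongJob'; -- placeholder name
DECLARE @started datetime;

IF dba.IsJobRunning(@JobName) = 1
BEGIN
    -- Most recent start request for this job.
    SELECT @started = MAX(ja.run_requested_date)
    FROM msdb.dbo.sysjobactivity AS ja
    INNER JOIN msdb.dbo.sysjobs AS j
        ON j.job_id = ja.job_id
    WHERE j.name = @JobName;

    IF @started < DATEADD(HOUR, -2, GETDATE())
        EXEC msdb.dbo.sp_stop_job @job_name = @JobName;
END;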

  • kuopaz wrote:

    Thanks Grant. I presume there are sys tables for jobs and job activity. Not sure if I then do sp_stop_job or KILL process.

    I suspect either will work. Testing is your best buddy.

----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
Product Evangelist for Red Gate Software

I have this situation in a few potentially long-running cleanup jobs. I did not create a second job; instead, I modified the proc to run in a loop and test the start time against the current time. If it hits the time limit, the proc exits.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Great, thanks both. Should be able to do it now. Want to avoid looking in on Monday morning and finding the job is still running!

That would actually be easy for me to implement, as it so happens the proc is already a loop for batch processing, with each batch in its own transaction. As long as each batch isn't itself too large and doesn't take ages.

  • kuopaz wrote:

    Thanks Grant. I presume there are sys tables for jobs and job activity. Not sure if I then do sp_stop_job or KILL process.

If you stop the job and the proc's work is a single 2-hour transaction, you'll get a rollback when the job is killed.

Like Michael John suggests, break up the proc (one of the few good reasons to use a WHILE loop) and have it test the time at the beginning of each section. If there's not enough time for the next section to complete, simply exit the job. You should probably have a log in place or some output from the proc that tells you how far it got. The proc itself should have a parameter that identifies the quitting date/time so a job can pass that in without a change to the proc if that time changes. It would also allow you to let it run with no quitting time by passing '9999' as the quitting date/time (01 Jan 9999). A rough sketch of that pattern follows.
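A minimal sketch of that pattern (the table dbo.SomeBigTable, the batch size, and the 90-day retention filter are all hypothetical):

CREATE PROCEDURE dbo.CleanupInBatches
    @QuitAfter datetime = '99990101' -- 01 Jan 9999 = effectively no time limit
AS
BEGIN
    DECLARE @rows int = 1;

    WHILE @rows > 0
    BEGIN
        -- Check the clock before starting each batch.
        IF GETDATE() >= @QuitAfter
        BEGIN
            PRINT 'Quitting time reached; exiting cleanly.'; -- or log to a table
            RETURN;
        END;

        BEGIN TRANSACTION;
            DELETE TOP (10000) FROM dbo.SomeBigTable -- hypothetical table and batch size
            WHERE CreatedDate < DATEADD(DAY, -90, GETDATE());
            SET @rows = @@ROWCOUNT;
        COMMIT TRANSACTION;
    END;
END;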


    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions

I posed this question to the resident Microsoft DSE years ago where I work. The solution I was given was to run the SP in a PowerShell job step with a timeout setting that stops the SP execution after the target number of seconds. Here's an example to demonstrate. Your SP doesn't have a WAITFOR in it, but the demo SP does, to ensure it takes at least the WAITFOR time so we can make it fail against the time limit in the job.

USE [myDb] /* change this to your database */
GO

CREATE PROCEDURE [dbo].[sp_TestTimeLimit]
AS
BEGIN
    WAITFOR DELAY '00:00:10'; /* 10 second delay */
    PRINT 'hello, world (after delay)';
END
GO

Create a job with a PowerShell step to run this SP, but set the command timeout to a lower value (5 seconds) than the wait (10 seconds) in the SP.

$Server = 'myServer' # change this to your server\instance
$Database = 'myDb'   # change to the database which contains the SP to execute

$Connection = New-Object System.Data.SQLClient.SQLConnection
$Connection.ConnectionString = "server=$Server;database=$Database;trusted_connection=true;ApplicationIntent=ReadOnly;Connect Timeout=120"
$Connection.Open()

$Command = New-Object System.Data.SQLClient.SQLCommand
$Command.Connection = $Connection
$Command.CommandText = 'EXECUTE myDb.dbo.sp_TestTimeLimit'
$Command.CommandTimeout = 5 # 5 second limit

$Reader = $Command.ExecuteReader()
$Datatable = New-Object System.Data.DataTable
$Datatable.Load($Reader)
$Datatable.Rows

$Connection.Close()

Then run the job that has the above PowerShell step. The example exceeds the time limit and fails with this error:

    A job step received an error at line 11 in a PowerShell script. The corresponding line is '$Reader = $Command.ExecuteReader() '. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Exception calling "ExecuteReader" with "0" argument(s): "Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding." '. Process Exit Code -1. The step failed.

Now, change the PowerShell job step line that sets the CommandTimeout to a reasonable value (say, 30 seconds), rerun the job, and it should succeed.
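That is, something like:

$Command.CommandTimeout = 30 # long enough for the 10-second WAITFOR to finish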
