Execute and Track T-SQL Script Executions with PowerShell



Problem

We maintain an environment with thousands of databases for different applications and internal functions, and we have recently faced challenges deploying some database changes to specific environments, or environment groups, without affecting other groups, while still allowing for some automation. Since multiple teams are involved - from developers to DBAs - and these changes do not affect the objects that our applications need (tables, views, etc.), we want this to be separate from our application deployment changes. We would like to dictate which database environments the scripts will be executed against and when they are executed, while keeping these records and script files for tracking purposes in case we need changes.

Solution

To start with a solution for this problem, let's consider three questions:

  1. What happens if an invalid change occurs? In the case of developers, do they write another script to revert the changes? Do the DBAs restore a transaction log backup? Was a backup table saved so the data can be reverted from it? If it's a DBA maintenance change, is reversion even needed?
  2. Who is in charge of reverting an erroneous change? Think of an example where a development team adds invalid configuration data to application-specific data within a database. Who reverts those data changes?
  3. How are our databases delineated? In the case of configuration data, is it stored in databases separate from databases with customer information? This matters because reverting a change by restoring a transaction log may complicate an application if a database stores both configuration and customer data.

This tip will show one way to dictate where scripts are executed by tracking the scripts and the time they're executed, and we should keep the above questions in mind, as they will determine what we do next from here. For example, if a script changes configuration data in a database that also holds customer data, as the DBA I would require the configuration data to be backed up before the change, because reverting through a transaction log restore is not compatible with that database's use case (a mix of configuration and customer data), unless the DBAs are willing to restore a copy and compare configuration data changes.
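As a concrete illustration of that backup-first requirement, the sketch below copies a configuration table to a dated backup table before any change runs. This is only a minimal sketch: tblAppConfig and the server and database names are hypothetical, and Invoke-SqlCmd is the same cmdlet used later in this tip.

# Minimal sketch: back up a configuration table before changing it.
# tblAppConfig, OurDevServer, and OurDevDB1 are hypothetical names.
$backupTable = "tblAppConfig_Backup_" + (Get-Date -Format "yyyyMMdd")
$backupQuery = "SELECT * INTO $backupTable FROM tblAppConfig"
Invoke-SqlCmd -ServerInstance "OurDevServer" -Database "OurDevDB1" -Query $backupQuery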

Create our table for scripts and environments

We'll start off by creating a table with a unique Id field (Id), server and database information, and a date field (ExecuteDate) that starts as NULL. The ExecuteDate field will only be populated once a script has been successfully executed. Storing server and database names as rows allows new servers to be added without changing the table structure, so we avoid a design that would require column expansion. Since the ExecuteDate and Id fields will be used for either finding new scripts to execute (ExecuteDate IS NULL) or updating the table after a script runs (Id), both fields will be indexed.

CREATE TABLE tbExecScript(
 Id INT IDENTITY(1,1),
 ServerName VARCHAR(100) NOT NULL,
 DatabaseName VARCHAR(100) NOT NULL,
 ScriptName VARCHAR(100) NOT NULL,
 ExecuteDate DATETIME NULL
)

ALTER TABLE tbExecScript 
ADD CONSTRAINT PK_tbExecScript_Id PRIMARY KEY CLUSTERED (Id);

CREATE NONCLUSTERED INDEX IX_tbExecScript_ExecuteDate ON tbExecScript (ExecuteDate)

INSERT INTO tbExecScript (ServerName,DatabaseName,ScriptName)
VALUES ('OurDevServer','OurDevDB1','permissions1')
 , ('OurQAServer','OurQADB1','permissions1')
 , ('OurDevServer','OurDevDB2','addindexdata')
 , ('OurPreProdServer','OurPreProdDB1','updatetable')

SELECT Id, ServerName, DatabaseName, 'C:\OurLocation\' + ScriptName + '.sql' AS Script FROM tbExecScript WHERE ExecuteDate IS NULL

Query result:

Id  ServerName        DatabaseName    Script
1   OurDevServer      OurDevDB1       C:\OurLocation\permissions1.sql
2   OurQAServer       OurQADB1        C:\OurLocation\permissions1.sql
3   OurDevServer      OurDevDB2       C:\OurLocation\addindexdata.sql
4   OurPreProdServer  OurPreProdDB1   C:\OurLocation\updatetable.sql

We will use the above query in our PowerShell function. The C:\OurLocation\ path is where the scripts are stored; this would change depending on where your scripts are kept. Since the files are SQL files, we append .sql to the end of each script name.

Create our build function to execute scripts

With the above query, we load the results into a DataTable and loop through each record, calling the Invoke-SqlCmd cmdlet and passing in the server, database, and file name. In the below example, we are using integrated security to connect to other instances. Once a script has been executed on the appropriate server, we update the timestamp in the table using the Id field.

Function Get-ExecuteScripts {
    Param(
        [Parameter(Mandatory=$true)][string]$server
        , [Parameter(Mandatory=$true)][string]$database
    )
    Process
    {
        $getscriptinfo = "
        SELECT Id, ServerName, DatabaseName, 'C:\OurLocation\' + ScriptName + '.sql' AS Script FROM tbExecScript WHERE ExecuteDate IS NULL
        "
        
        $scriptscon = New-Object System.Data.SqlClient.SqlConnection
        $scriptscon.ConnectionString = "Data Source=$server;Initial Catalog=$database;Integrated Security=true"

        $scriptcmd = New-Object System.Data.SqlClient.SqlCommand
        $scriptcmd.Connection = $scriptscon
        $scriptcmd.CommandText = $getscriptinfo
        $scriptcmd.CommandTimeout = 0

        try
        {
            $scriptscon.Open()
            $scriptdetails = $scriptcmd.ExecuteReader()

            $scriptdatatable = New-Object System.Data.DataTable
            $scriptdatatable.Load($scriptdetails)

            # Loop through each pending script and run it on its target server and database
            foreach ($scriptrow in $scriptdatatable.Rows)
            {
                $scriptid = $scriptrow["Id"]
                $scriptsrv = $scriptrow["ServerName"]
                $scriptdb = $scriptrow["DatabaseName"]
                $scriptfile = $scriptrow["Script"]

                Write-Host "Execute $scriptfile on server $scriptsrv.$scriptdb (id of $scriptid)"
                Invoke-SqlCmd -ServerInstance $scriptsrv -Database $scriptdb -InputFile $scriptfile
            }
        }
        catch [Exception]
        {
            Write-Warning "Get-ExecuteScripts (Connection: [$server].[$database])"
            Write-Warning $_.Exception.Message
            Write-Warning "Query: $getscriptinfo --Id $scriptid"
        }
        finally
        {
            $scriptscon.Dispose()
            $scriptcmd.Dispose()
        }
    }
}


Get-ExecuteScripts -server "" -database ""
Function return:

Execute C:\OurLocation\permissions1.sql on server OurDevServer.OurDevDB1 (id of 1)
Execute C:\OurLocation\permissions1.sql on server OurQAServer.OurQADB1 (id of 2)
Execute C:\OurLocation\addindexdata.sql on server OurDevServer.OurDevDB2 (id of 3)
Execute C:\OurLocation\updatetable.sql on server OurPreProdServer.OurPreProdDB1 (id of 4)
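Since the problem statement also calls for controlling when these scripts run, the function can be saved in a .ps1 file and scheduled. Below is a minimal sketch using the Windows ScheduledTasks module; the file path, task name, and schedule are assumptions, and a SQL Agent job would work just as well.

# Minimal sketch: run the script nightly at 2 AM via Task Scheduler.
# The .ps1 path and task name are hypothetical.
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-File C:\OurLocation\Invoke-PendingScripts.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "ExecutePendingScripts" -Action $action -Trigger $trigger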

Add our command to the script for updating values in our table

Since we already have an open connection to the table that tracks which scripts to execute on other servers, we will reuse this connection to update the table once each script has been executed. Under the existing command, we'll add our update command:

$scriptcmd = New-Object System.Data.SqlClient.SqlCommand
$scriptcmd.Connection = $scriptscon
$scriptcmd.CommandText = $getscriptinfo
$scriptcmd.CommandTimeout = 0

# Second command on the same open connection, used to stamp ExecuteDate
$scriptupdate = New-Object System.Data.SqlClient.SqlCommand
$scriptupdate.Connection = $scriptscon

Next, on each loop we'll add the command text that will update the id on each iteration:

$scriptupdate.CommandText = "UPDATE tbExecScript SET ExecuteDate = GETDATE() WHERE Id = $scriptid"

try
{
    $scriptupdate.ExecuteNonQuery() | Out-Null
}
catch [Exception]
{
    Write-Warning "Get-ExecuteScripts (Connection: [$server].[$database])"
    Write-Warning $_.Exception.Message
    Write-Warning "Query: UPDATE tbExecScript SET ExecuteDate = GETDATE() WHERE Id = $scriptid"
}

The above command will update the table once the script has been executed on the appropriate server and database. If the script execution raises an error, this step will be skipped, since the script was not successfully executed. Within a try block, the first failure stops execution and moves control to the catch; because we want to know whether a run stopped on the execution or on the update, we wrap the update in its own try-catch block. The code sample below demonstrates this behavior with multiple statements; notice how the first failure within a try block moves to the catch.

try
{
    Write-Host (10/0).ToString()
    Write-Host "Example one: trying ..."
    Write-Host (20/2).ToString()
}
catch [Exception]
{
    Write-Host $_.Exception.Message
}


Write-Host ([Environment]::NewLine)


try
{
    Write-Host (10/2).ToString()
    Write-Host "Example two: trying ..."
    Write-Host (20/0).ToString()
}
catch [Exception]
{
    Write-Host $_.Exception.Message
}


Write-Host ([Environment]::NewLine)


try
{
    Write-Host "Try 1"
    Write-Host (10/2).ToString()
    
    try
    {
        Write-Host (20/0).ToString()
    }
    catch [Exception]
    {
        Write-Host "Catch 2"
        Write-Host $_.Exception.Message
    }
}
catch [Exception]
{
    Write-Host $_.Exception.Message
}
Try-catch result:

Attempted to divide by zero.

5
Example two: trying ...
Attempted to divide by zero.

Try 1
5
Catch 2
Attempted to divide by zero.

The update can fail even when the script itself runs, which is why the update reports its own error separately. If a script execution fails, the function terminates at that point. In this example, we see the result of four script executions once each script has been executed on its appropriate server:

Script execution
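Putting the pieces together, the loop body inside Get-ExecuteScripts ends up looking like the sketch below, with the inner try-catch isolating update failures from execution failures (this only assembles the code already shown above):

foreach ($scriptrow in $scriptdatatable.Rows)
{
    $scriptid = $scriptrow["Id"]
    $scriptsrv = $scriptrow["ServerName"]
    $scriptdb = $scriptrow["DatabaseName"]
    $scriptfile = $scriptrow["Script"]

    # Run the pending script against its target server and database
    Write-Host "Execute $scriptfile on server $scriptsrv.$scriptdb (id of $scriptid)"
    Invoke-SqlCmd -ServerInstance $scriptsrv -Database $scriptdb -InputFile $scriptfile

    # Stamp ExecuteDate only after a successful execution; a failed
    # execution moves to the outer catch and never reaches this block
    $scriptupdate.CommandText = "UPDATE tbExecScript SET ExecuteDate = GETDATE() WHERE Id = $scriptid"
    try
    {
        $scriptupdate.ExecuteNonQuery() | Out-Null
    }
    catch [Exception]
    {
        Write-Warning "Get-ExecuteScripts (Connection: [$server].[$database])"
        Write-Warning $_.Exception.Message
        Write-Warning "Query: UPDATE tbExecScript SET ExecuteDate = GETDATE() WHERE Id = $scriptid"
    }
}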

Finally, in some environments, we will want to consider security practices as well when thinking about this design:

  • If our scripts execute from an administration server, consider separating development environments from production environments. In other words, we may have an administration server for lower environments and one for higher environments.
  • Only allow port connections to the SQL Servers where scripts will be run. Disable all other connections to and from these servers, with the exception of a port opened temporarily for moving script files and then disabled.
  • Add a validation step when a developer submits a script; this adds a layer of security by putting the DBA in charge of migrating the script and adds oversight.
Next Steps
  • Scripts involved in changing the design or architecture should follow a normal deployment process, where the change is deployed with the application and both are tested together. For adding, updating, and/or removing data, and possibly some DBA architectural changes (indexing, statistics, etc.), this is one way to help track these scripts.
  • If you group servers by function, application, or another pattern, you can create a table with the server groups and join to it from the script table; a minimal sketch follows this list. If your groupings are consistent, this reduces complexity; if they are inconsistent, it adds complexity.
  • Consider the most appropriate security design for your environment which allows flexibility for running scripts across multiple servers.
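As a sketch of that grouping idea, the queries below create a group table and narrow the pending-script query to one environment group. The table and group names (tbServerGroup, LowerEnvironments) are hypothetical; the join key mirrors the ServerName column already in tbExecScript.

# Hypothetical grouping table; ServerName ties back to tbExecScript
$createGroups = @"
CREATE TABLE tbServerGroup(
 GroupId INT IDENTITY(1,1) PRIMARY KEY,
 GroupName VARCHAR(100) NOT NULL,
 ServerName VARCHAR(100) NOT NULL
)
"@

# Pending scripts for a single environment group only
$getGroupScripts = @"
SELECT es.Id, es.ServerName, es.DatabaseName, 'C:\OurLocation\' + es.ScriptName + '.sql' AS Script
FROM tbExecScript es
INNER JOIN tbServerGroup sg ON sg.ServerName = es.ServerName
WHERE sg.GroupName = 'LowerEnvironments' AND es.ExecuteDate IS NULL
"@

The $getGroupScripts query could then replace $getscriptinfo inside Get-ExecuteScripts to restrict a run to one group.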



About the author
Tim Smith works as a DBA and developer and also teaches Automating ETL on Udemy.




Comments For This Article




Friday, May 15, 2020 - 9:54:52 PM - Erick Vounzi

Can I have the full script pls?


Wednesday, October 16, 2019 - 8:07:19 AM - Steve

Hi, thanks for this, really helpful, this is all new to me :)

So I understand the DB part and creating the PS function ExecuteScripts, but not the writing back and capturing the errors, if any.

How do I put this all in one script, including the calling of ExecuteScripts?

Can I add columns to the table and write the error to that? The query looks for nulls in the date but we are updating it; couldn't it write a history to another table?

thanks


Tuesday, January 16, 2018 - 9:57:15 AM - Tim

@Praveen

For data scripts (insert, update, delete), I still want to use T-SQL logic to catch errors or invalid T-SQL, as a rollback will be costly.  In the case of capturing this output, rather than relying on the standard output, I would add validation at the end of the T-SQL, such as ...

$server = "OurServer"
$database = "OurDatabase"
$query = "UPDATE OurTable SET ID = 1  SELECT @@ROWCOUNT RowsAffected"
$rowsaffected = Invoke-Sqlcmd -server $server -database $database -Query $query
$write = "Query: $query, Row(s) affected: " + $rowsaffected.RowsAffected
Add-Content "c:\example\log.txt" $write

OR:

$server = "OurServer"
$database = "OurDatabase"
$rowsaffected = Invoke-Sqlcmd -server $server -database $database -InputFile "C:\log\1.sql"
$write = "Row(s) affected: " + $rowsaffected.RowsAffected
Add-Content "c:\example\log.txt" $write

OR:

$server = "OurServer"
$database = "OurDatabase"
$query = "UPDATE tblCreateData 
SET ID = 5  

SELECT CAST(@@ROWCOUNT AS VARCHAR(1000)) + ' row(s) updated and message here that I want.' CustomMessage"
$custommessage = Invoke-Sqlcmd -server $server -database $database -Query $query
Add-Content "c:\example\log.txt" $custommessage.CustomMessage


Sunday, January 7, 2018 - 2:55:22 PM - Praveen Kumar

Hi Tim Smith,

I'm pretty new to powershell. Your scripts are very interesting, the above one as well as the number of DML operations in T-SQL scripts.

I have written powershell scripts to execute many T-SQL files, generating a log file for each T-SQL script execution, but in the log files I'm unable to get results like "command executed" or "7 rows affected" using the Invoke-Sqlcmd command. I tried to use Verbose to capture it but no luck.

Do you know any other options?

Thanks for your answer in advance.


Friday, December 29, 2017 - 9:21:05 AM - Alen

For any changes to stored procedures, I'd probably add some logic to create any affected procedure as a new one with the date appended to the name and then run the alter procedure statement on the real one. This way if there is some bug it's easy to revert.














