SQL Server Code Deployment from the File System with a Custom Stored Procedure



Problem

Our team wants to develop a source control process to apply SQL Server database code changes in various environments. As you can imagine, the changes can include creating new objects (tables, views, functions, stored procedures, etc.), altering existing objects, and so on. Moreover, it was also important to define a folder structure so that the scripts run in a particular order and dependencies are validated along the way. So how can we achieve these results?

Solution

Before we go into the details, let me lay out the folder structure we used, which basically mimics the object hierarchy in SQL Server Management Studio.

Here's how it looks:

[Figure: folder tree of the Release_1_1 working folder]

We use TFS in our company for source control, and for each release we export the T-SQL scripts (*.sql files) into this working folder structure.

In the exhibit above, the sample is for Release 1.1, which is represented by the folder "Release_1_1".

Now let's look at what goes in each of the folders (a sample layout follows the list):

  • 0_TABLES is numbered as such to be at the top of the hierarchy so that it serves as the first folder to be executed. It has several sub-folders, and we'll discuss what they should contain.
    • 1_Columns - CREATE TABLE, ALTER TABLE, etc. scripts.
    • 2_Keys - Definition for Primary and Foreign Keys.
    • 3_Constraints - Table constraints.
    • 4_Triggers - Table triggers.
    • 5_Indexes - Index definitions.
    • 6_Statistics - User defined statistics.
  • 1_VIEWS - Views on the base table(s) created in the above step.
  • 2_PROCS - CREATE PROCEDURE, ALTER PROCEDURE, etc. scripts.
  • 3_FUNCS - User defined functions.
  • 4_DBTRIGS - Database Triggers.
  • 5_TYPES - User Defined Data Types.
  • 6_RULES - User Defined Rules on columns for tables.
  • 7_DEFAULTS - Defaults bound to columns for tables.
  • 8_DML - DML statements to populate any Reference Table(s).
  • 9_JOBS - SQL Server Agent Job definitions.
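For illustration, a populated working folder for this release might look like the tree below. The file names are hypothetical; the numeric prefixes on the folders are what drive the execution order:

Release_1_1\
    0_TABLES\
        1_Columns\dbo.Customer.sql
        2_Keys\PK_Customer.sql
    1_VIEWS\dbo.vw_ActiveCustomers.sql
    2_PROCS\dbo.usp_GetCustomer.sql
    8_DML\Ref_Country_Load.sql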

Once the above folders contain the required *.sql files for each database object definition, the following stored procedure can then be put into place to execute the scripts in sequence. For our testing, I created a dummy database [DBATools] and created the stored procedure there.

The stored procedure takes these three parameters (a sample call follows the list):

  • @server - the Fully Qualified Server Name
  • @database - the Database name for the deployment
  • @release - the version number of the release
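To make this concrete, here is what a call for the release shown above might look like; the server and database names are placeholders:

EXEC sp_DeploymentManager 'SQLPROD01.corp.local', 'SalesDB', 'Release_1_1'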

Let me quickly explain the two flags used in the dynamic SQL for the DIR command:

  • /B - Uses bare format (no heading information or summary).
  • /S - Displays files in specified directory and all subdirectories.

I would recommend running the shell command beforehand to see the output.
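With the release folder from our example, the command and a few (hypothetical) lines of its output would look like this:

DIR "C:\SourceControl\Release_1_1" /B /S

C:\SourceControl\Release_1_1\0_TABLES\1_Columns\dbo.Customer.sql
C:\SourceControl\Release_1_1\0_TABLES\2_Keys\PK_Customer.sql
C:\SourceControl\Release_1_1\2_PROCS\dbo.usp_GetCustomer.sql
C:\SourceControl\Release_1_1\8_DML\Ref_Country_Load.sql

The bare, recursive format returns full paths, which is exactly what the procedure needs to hand off to SQLCMD.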

-- START of the procedure
USE [DBATools]
GO

/*######################################################################################
Author:       Prashant Shekhar
Create date:  Year 2015
Description:  Stored Procedure to serve as Deployment Manager for a given
              Release in TFS sprints
Sample Run:   EXEC sp_DeploymentManager 'SRVR_name', 'DB_name', 'Release_1_1'
Revision History:
================
Date        Author    Comments
----------  --------  --------
MM/DD/YYYY  PSHEKHAR  Initial Creation
#######################################################################################*/
CREATE PROCEDURE sp_DeploymentManager
     @server   VARCHAR(100)
    ,@database VARCHAR(100)
    ,@release  VARCHAR(100)
AS
SET NOCOUNT ON

-- Enable xp_cmdshell for the duration of the deployment
-- (assumes 'show advanced options' is already enabled on the instance)
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

DECLARE @root VARCHAR(100)
DECLARE @statement VARCHAR(300)
DECLARE @StagingFilePath TABLE (fpath VARCHAR(300))
DECLARE @fpath VARCHAR(300)
DECLARE @sqlcmd VARCHAR(800)

SET @root = 'C:\SourceControl\'

-- Recursively list every file under the release folder in bare format
SET @statement = 'EXEC master.sys.xp_cmdshell ''DIR "' + @root + @release + '" /B /S'''

INSERT INTO @StagingFilePath
EXEC (@statement)

-- Walk the *.sql files in path order; the numbered folders drive the sequence
DECLARE DeploymentManager CURSOR FOR
    SELECT fpath
    FROM @StagingFilePath
    WHERE fpath LIKE '%.sql'
    ORDER BY fpath

OPEN DeploymentManager
FETCH DeploymentManager INTO @fpath

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Execute each script via SQLCMD and append its output to the deployment log
    SET @sqlcmd = 'EXEC master.sys.xp_cmdshell ''SQLCMD -S"' + @server + '" -d"'
        + @database + '" -E -i"' + @fpath + '" >> "' + @root + @release + '\DeployLog.txt"'''
    EXEC (@sqlcmd)

    FETCH DeploymentManager INTO @fpath
END

CLOSE DeploymentManager
DEALLOCATE DeploymentManager

-- Turn xp_cmdshell back off once the deployment completes
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
GO
-- END of the procedure

Note that the output from each script execution is captured in the log file, "DeployLog.txt". This is helpful for troubleshooting any exceptions during deployment.
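If you want to review the log without leaving Management Studio, something like the following works. This is a minimal sketch; it assumes you temporarily re-enable xp_cmdshell, since the procedure turns it back off when it finishes:

EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
-- Path from our example release
EXEC master.sys.xp_cmdshell 'TYPE "C:\SourceControl\Release_1_1\DeployLog.txt"';
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;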

Next Steps
  • Set up a directory structure for your code deployments to match the image above.
  • Create some files to work through a test deployment in one of your Sandbox, Development or Test environments.
  • Review the output and results in your database post deployment.
  • Once you are comfortable with the process, consider options to automate your deployments with SQL Server Agent Jobs (a sketch follows this list).
  • Check out all the SQL Server Developer Tips.
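For that last automation step, a minimal sketch of an Agent job wrapping the procedure might look like this; the job name and parameter values are placeholders:

USE msdb;
GO
-- Create a job with a single T-SQL step that calls the deployment procedure
EXEC dbo.sp_add_job @job_name = N'Deploy_Release_1_1';
EXEC dbo.sp_add_jobstep
     @job_name      = N'Deploy_Release_1_1'
    ,@step_name     = N'Run DeploymentManager'
    ,@subsystem     = N'TSQL'
    ,@database_name = N'DBATools'
    ,@command       = N'EXEC sp_DeploymentManager ''SQLPROD01.corp.local'', ''SalesDB'', ''Release_1_1'';';
-- Register the job on the local server
EXEC dbo.sp_add_jobserver @job_name = N'Deploy_Release_1_1';
GO

From there you could attach a schedule with sp_add_jobschedule, or simply kick the job off on demand with sp_start_job.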



About the author
Prashant Shekhar is employed as a DBA with BioTelemetry, specializing in HA solutions comprising SQL Clustering, Replication, and Log Shipping.





Comments For This Article




Wednesday, March 25, 2015 - 8:26:09 AM - Dave

"Can I ask what the advantage is of doing this as opposed to using SSDT and including a database project in source control?"

  1. Not everyone uses Visual Studio.
  2. Even those that do may not like database projects because 
    1. it is so inflexible.  Short list of examples:  
      1. I may want some scripts applied uptime vs downtime 
      2. I may need control over table scripts to do custom data migration routines that aren't easily coded in the tables folder.  
    2. I like to see the SQL that is being run and do custom logging, error handling, etc.
    3. The post-deployment folder turns into a hodgepodge of junk that people use as a place to shoehorn scripts.  This makes searching for a migration script a hassle.  
  3. I may need to deploy to SQL Server Express, Postgres, or MySQL, and database projects do not support those alternate technologies.
  4. Overall this just makes more sense to people than the folder structure SSDT gives you.  

Tuesday, March 24, 2015 - 6:19:20 PM - Anon

Can I ask what the advantage is of doing this as opposed to using SSDT and including a database project in source control?



Tuesday, March 24, 2015 - 12:22:49 PM - Warren

Great post; I've been thinking about this kind of process and how to implement it. Have you considered using SQL FileTable for the file storage? Do you see any pitfalls/gotchas with that approach?


Tuesday, March 24, 2015 - 11:20:28 AM - johanflamand

I stumbled upon your post; it's very interesting. It reminded me of another great post on SQL Server database best practices.


Tuesday, March 24, 2015 - 9:21:55 AM - Dave

This is a solution I've advocated for years.  My solution is a bit more detailed (www.davewentzel.com/MD3)... I use PowerShell as the "script runner" and I support 'declarative deployments', which are much easier to code than complex DDL.

A couple of things you may consider:

You may find object dependency errors when your functions are run after your views and procs IF your views and procs reference the functions.  In this case I advocate having a file for each function in the 3_funcs folder with a note indicating that the source code has moved to 1_Views due to object dependency errors.  There are more elegant solutions as well, but this works for us.  

We also like to have subfolders under our, for instance, PROCS folder.  The subfolders are the names of either the schemas or modules.  When you have thousands of procs a little organization really helps.  

Next, your deployment is going to take forever because you are executing sqlcmd for every file.  That's a lot of context switching if you have hundreds of files.  What we do (again, see the link above) is to take every folder and concatenate the .sql files into one big .sql file with GOs between each concatenation.  Then we shell out to sqlcmd for that one big file.  This is at least 10x faster, maybe more, when you have hundreds or thousands of script files.  
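Something like this per folder gets you most of the way there; the path is hypothetical, and this assumes paths without spaces, run from a command prompt:

for %f in (C:\SourceControl\Release_1_1\2_PROCS\*.sql) do (type "%f" >> C:\SourceControl\Release_1_1\procs_all.sql & echo GO >> C:\SourceControl\Release_1_1\procs_all.sql)

Then a single sqlcmd invocation runs procs_all.sql instead of shelling out once per file.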

I also like to have a folder for lookup data/seed data/master/model data...whatever you call it.  Basically scripts to load data that doesn't change much but the system needs in order to actually work.  I like to load this near the end of the deployment.  

Likewise, I like to keep my foreign key scripts near the end, after the lookup data.  It's much easier (and safer) to apply fk constraints after you've loaded data.  

Finally, I like to have a folder for "bug fixes" and data migration where I can include scripts that change data values, load tables, refactor tables, etc.  This needs to be near the end.

Great solution you have.  Congratulations on a great tool and thanks for sharing.


