How to use GitHub Actions to Deploy Database Changes After a Commit

By: Eduardo Pivaral   |   Updated: 2024-04-30   |   Comments   |   Related: > DevOps


One integral part of modern Continuous Integration and Continuous Deployment (CI/CD) pipelines is the ability to create the underlying infrastructure, assets, and database objects as part of the pipeline in a consistent and repeatable manner.

While Infrastructure as Code (IaC) can create the underlying infrastructure, database objects, and scripts, database deployments remain one of the most challenging parts of a pipeline because of the constraints we must be aware of. A bad deployment or a poorly written script (for example, one that recreates a table and drops its contents) can damage or destroy valuable data.

If we are aware of those constraints and create robust scripts, we can automate deployments once we push changes to our code repository. How can we automate database deployments from script files stored on GitHub to an Azure SQL database?


If we keep our SQL scripts in a GitHub repository, we can use GitHub Actions to trigger pipelines, actions, or events after a commit to the repository. Actions allow us to deploy to different environments based on the branch we commit to.

For this tip, we will execute .sql scripts against an Azure SQL database when we commit directly to Main.

Microsoft already provides an action called sql-action, so we do not have to worry about coding our custom deployment script.

This tip creates a simple workflow to deploy .sql files using this action after a commit to the Main branch.


For this example, we need to set up the following:

  • Azure SQL database
    • You can use a free tier.
    • Be careful if you test in an environment where tables must not be dropped.
    • Make sure your scripts have an IF logic to determine whether the objects exist before dropping and recreating them.
    • Make sure your firewall access is configured correctly so the database can be accessed via a connection string.
    • Below, you can see I already have an Azure DB with a few tables:
Azure SQL db already in place
  • GitHub repository in place
    • You can see that this repo already has some tables as .sql files. (Assume we create them manually, and we will automate further changes.)
My GitHub Repo
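If you run scripts outside a database project, guard them with the IF logic mentioned above so they can be re-run safely. A minimal sketch, using a hypothetical dbo.Customers table:

```sql
-- Hypothetical guard: only create the table if it does not exist yet,
-- instead of dropping and recreating it (which would destroy its data).
IF OBJECT_ID(N'dbo.Customers', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.Customers
    (
        customer_id   INT           NOT NULL PRIMARY KEY,
        customer_name NVARCHAR(100) NOT NULL
    );
END;
```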

Configure a GitHub Secret

We will securely store our connection string via a GitHub Secret.

If you do not want to store credentials outside of Azure, you can modify the workflow to authenticate with Microsoft Entra ID (Azure Active Directory) instead of a connection string.

To obtain your Azure SQL database connection string, go to your database in the Azure Portal and select Connection strings:

Configuring GitHub secret, obtaining connection string
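The ADO.NET connection string copied from the portal contains placeholders for the credentials; it looks roughly like this (server and database names here are placeholders):

```
Server=tcp:<your-server>.database.windows.net,1433;Initial Catalog=<your-database>;Persist Security Info=False;User ID=<your-username>;Password=<your-password>;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
```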

Remember to update your username and password. Then, go to your GitHub Repository and click Settings.

Configuring GitHub secret, settings

Next, go to Secrets and variables and click Actions.

Configuring GitHub secret, locating secrets

Click New repository secret:

Configuring GitHub secret, new repo secret

Type the secret name and the connection string, then click Add secret:

Configuring GitHub secret, adding secret

The name of the secret is what you will use in the action to connect to the Azure SQL database.

Configuring GitHub secret, checking secret name

Setting Up GitHub Action

If you are new to GitHub Actions, you should know that workflows are set up by adding a folder named .github/workflows/ and declaring them via .yml files inside it.
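For example, after adding the sql-workflow.yml file we create later in this tip, the repository layout is simply:

```
.github/
└── workflows/
    └── sql-workflow.yml
```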

Multiple actions are available on the marketplace, or you can define your own. You can programmatically create the actions by just committing the .yml files to the .github/workflows/ folder (create it manually if it does not exist), or you can check and create actions via the GitHub web page.

Since this is a beginner tutorial, we will use the website. In your GitHub repository, go to Actions:

Setting up new action

You can see multiple actions already available on the marketplace. You can browse them on your own.

To use sql-action, we need to create an empty workflow. Click set up a workflow yourself.

Setting up new action, new workflow

On the next page, name your file sql-workflow.yml. In the code, add the following:

# .github/workflows/sql-workflow.yml
name: Deploy database
on: [push]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: azure/sql-action@v2
        with:
          connection-string: ${{ secrets.AZURE_SQL_CONNECTION_STRING }}
          path: './Database.sqlproj'
          action: 'publish'

To deploy multiple scripts, the action publishes a database project, so we will also need to add a .sqlproj file, as we will see in a moment. For now, just create the action.

Also, use the secret name created in the previous step.

Your action should look like this:

Setting up new action, sql workflow

Once you have validated that everything is OK, click Commit changes and commit directly to the Main branch:

Setting up new action, committing yml

The workflow triggers automatically after we commit the file. This first execution will fail because the database project file does not exist yet:

Setting up new action, first workflow fails

Following the action documentation, add the database project file Database.sqlproj, and commit directly to Main:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build">
  <Sdk Name="Microsoft.Build.Sql" Version="0.1.3-preview" />
  <PropertyGroup>
    <Name>Database</Name>
    <DSP>Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider</DSP>
    <ModelCollation>1033, CI</ModelCollation>
  </PropertyGroup>
</Project>

Your file should look like this. You can commit it from your local branch or via the webpage:

Setting up new action, creating database project file

After committing the file, the workflow will trigger again. If everything is set correctly, the workflow will succeed:

workflow execution succeeds

If you check the logs, you will see nothing was applied to the database because we have not committed any .sql file yet (new or modified).

Nothing changed on first execution

Note: The Azure SQL timeouts are because I am using a serverless database.

At this point, everything is set up, our action is up and running, and we are ready to evaluate it.

Evaluating the Action and Validating the Script Deployment

First, we will evaluate a new script. For this, you can add new tables or new files. I am adding new tables:

adding more tables
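In a database project, each script is the declarative CREATE statement for the object; the publish step computes the difference against the live database. A sketch of a new table script (file and table names are hypothetical):

```sql
-- Tables/Orders.sql (hypothetical): a declarative CREATE. The publish
-- step diffs this model against the database and creates the table
-- only if it does not exist yet.
CREATE TABLE [dbo].[Orders]
(
    [order_id]    INT  NOT NULL PRIMARY KEY,
    [customer_id] INT  NOT NULL,
    [order_date]  DATE NOT NULL
);
```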

We need to commit them first:

commit new files

And then push the changes:

push the changes
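The same commit-and-push flow can be done from the command line. A runnable sketch, where a local bare repository stands in for GitHub and the file and table names are hypothetical:

```shell
# Simulate the commit-and-push flow against a local bare "origin"
# (standing in for GitHub), so the sketch runs end to end.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/repo"
cd "$tmp/repo"
git config user.email "[email protected]"
git config user.name "demo"
mkdir -p Tables
printf 'CREATE TABLE [dbo].[Orders] ([order_id] INT NOT NULL PRIMARY KEY);\n' \
    > Tables/Orders.sql
git add Tables/Orders.sql
git commit -q -m "Add Orders table"
# On the real repository, this push to Main is what triggers the workflow.
git push -q origin HEAD:main
```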

When the push is successful, we will automatically see how the workflow triggers and the objects are deployed:

workflow run log, tables are created

We have validated that only new objects are deployed, not the existing ones.

We can double-check by browsing the tables directly in the Azure SQL database:

we validate in SQL that tables are created
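You can also confirm from a query window; sys.tables lists every user table along with its creation date:

```sql
-- List user tables, newest first, to confirm the new objects were deployed.
SELECT name, create_date
FROM sys.tables
ORDER BY create_date DESC;
```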

What if we want to modify an existing file? Are the changes deployed as well?

We can create a simple stored procedure (it can be anything you want) and then repeat the previous steps to commit it:

Adding a new SP
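A minimal sketch of such a procedure (object and column names are hypothetical):

```sql
-- Hypothetical procedure: any body works for this test.
CREATE PROCEDURE [dbo].[usp_GetCustomers]
AS
BEGIN
    SET NOCOUNT ON;
    SELECT customer_id, customer_name
    FROM dbo.Customers;
END;
```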

We validate that the workflow triggers automatically and creates it:

workflow creates SP

Now, return to the procedure and modify it. For example, you can add or remove a column in the code. Any modification works, as long as the procedure code changes:

We modify SP

Note: You can see that the procedure code still starts with CREATE PROCEDURE. But as you will see when we commit it, an ALTER is performed. This is one of the cool things about this deployment:

Modified SP is altered

OK, but what about tables? Will the table be dropped if we add a new column?

That is another cool feature. Even with the CREATE statement, the table is altered, and I can prove it. Let's modify a table that already has data; the Customers table has 15 rows:

validating data in a table

What if we add a new column to it? I have added a premium_customer column at the end:

modifying table
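In the project, the table script remains a CREATE statement; the new column is simply appended to it. A sketch, where the existing column names and types are assumptions:

```sql
-- Tables/Customers.sql: still a CREATE statement; the publish step
-- compares it with the live table and emits an ALTER, preserving data.
CREATE TABLE [dbo].[Customers]
(
    [customer_id]      INT           NOT NULL PRIMARY KEY,
    [customer_name]    NVARCHAR(100) NOT NULL,
    [premium_customer] BIT           NULL  -- newly added column
);
```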

Then, we commit the changes and push them to GitHub:

committing changes

We can see that the execution succeeds even when the table exists:

the workflow run automatically triggers.

If we check the log, we can see that an ALTER was performed:

We validated an ALTER table is performed
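The generated change is roughly equivalent to the following statement (the column type here is an assumption):

```sql
-- What the publish step applies instead of dropping the table.
ALTER TABLE [dbo].[Customers]
    ADD [premium_customer] BIT NULL;
```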

If we recheck the table, we can see that the data is still there, and the new column was added:

we validate the data is still there and the new column was added

About the author
Eduardo Pivaral is an MCSA, SQL Server Database Administrator and Developer with over 15 years of experience working in large environments.

This author pledges the content of this article is based on professional experience and not AI generated.
