Collecting Data with Elastic Database Jobs in SQL Azure

By: John Martin   |   Updated: 2018-10-02   |   Related: More > Azure


Collating information from many different Azure SQL Databases can be done in several ways. However, most of these require rearchitecting, or building external services that connect to the databases, extract the data, and store it in a central repository. All of this can be a problem when you are trying to set up and manage a simple deployment scenario.


As well as the ability to schedule and run tasks, Elastic Database Jobs have a facility that allows them to collect data from a target database and easily store it in a central location. Here we will explore how to collect data from user-defined objects with Elastic Database Jobs.

Collecting Data for Elastic Database Jobs

Elastic Database job steps, as well as running T-SQL on a schedule, can take the output of a query in the job step and pass it to another database to be stored in a table. To make use of this functionality we need a few additional elements beyond a job step that simply executes activity on the target database. Primarily, we need a target Azure SQL Database with the results tables in place, as well as a Database Scoped Credential that will be used to connect to that database and load the data we want to collect. In theory this could be the same Database Scoped Credential used to execute the job step; however, I would argue that from a security modularization perspective, using separate credentials is the better option.

In the example below I am going to store maintenance information in the job database as this is a logical place for it to reside given the nature of Elastic Database Jobs.

Collecting Maintenance Data to the Job Database

Note: This task builds on my earlier post Index Maintenance in Azure SQL Database with Ola’s Scripts and Elastic Database Jobs. I would recommend reviewing this post to provide greater context to this section.

One of the great things about Ola Hallengren’s maintenance solution is that it is highly versatile and can log the executed commands to a table if you set the appropriate parameter. In my previous post I placed this table into the target database, now we will look to collect that data and store it centrally in our job database, so we have one location to view all our index maintenance operations.

Before we create the job step to collect the data, we need to perform the following actions:

  • Create the results table that we are going to store data in
  • Create a login on the Azure SQL Server
  • Create a user for the login in our jobs database that will allow the job step to connect to the database and load the data
  • Grant the user permissions on the table to load the data
  • Create a Database Scoped Credential for the login/user so that we can pass this to the job step

Once all of these are in place we can then create our job step.

Create Results Table

There are two options available to us when it comes to storing the command log table data. One is to store the raw data in its native format. The other is to base the collection on the analysis script that Ola publishes for free, which extracts data from the command log for more detailed analysis.

Here I am going to pull back the raw data and store that; the reason being that we can then do subsequent analysis as needed and are not restricted to a subset of the data collected.

I will be creating a new schema to group all my objects together and provide more granular access so that I can have different types of user accessing the data or the job management system.

CREATE SCHEMA [Maintenance];

While we could just take the definition of the command log table and create it here, I want to look at adding some additional information into it to help us identify which server and database the data came from.

Note: The results table needs an additional column, internal_execution_id, which is part of the output that the job step returns along with the data.

CREATE TABLE [Maintenance].[CentralCommandLog]
(
    [internal_execution_id] [UNIQUEIDENTIFIER] NOT NULL, --// Additional column needed for data output in job step
    [ID] [INT] NOT NULL,
    [AzureSQLServer] [sysname] NOT NULL, --// Additional column for source server
    [DatabaseName] [sysname] NULL,
    [SchemaName] [sysname] NULL,
    [ObjectName] [sysname] NULL,
    [ObjectType] [CHAR](2) NULL,
    [IndexName] [sysname] NULL,
    [IndexType] [TINYINT] NULL,
    [StatisticsName] [sysname] NULL,
    [PartitionNumber] [INT] NULL,
    [ExtendedInfo] [XML] NULL,
    [Command] [NVARCHAR](MAX) NOT NULL,
    [CommandType] [NVARCHAR](60) NOT NULL,
    [StartTime] [DATETIME] NOT NULL,
    [EndTime] [DATETIME] NULL,
    [ErrorNumber] [INT] NULL,
    [ErrorMessage] [NVARCHAR](MAX) NULL
);

EXEC sys.sp_addextendedproperty @name = 'ReferenceSource',                                                                         -- sysname
                                @value = 'Based on the work of Ola Hallengren, maintenance solution :', -- sql_variant
                                @level0type = 'SCHEMA',                                                                            -- varchar(128)
                                @level0name = 'Maintenance',                                                                       -- sysname
                                @level1type = 'TABLE',                                                                             -- varchar(128)
                                @level1name = 'CentralCommandLog';

Now we have the table in which we are going to store our data.

Setup Security for Loading Data

Before we can load the data, we need to define the security principals that will be used to connect to the database. Here we will create a login, a user, and a database scoped credential in the jobs database.

Create the Login in the Master database on the Azure SQL Server.

--// This needs to be executed in Master on the Azure SQL Server that hosts the Elastic Database Jobs database.
CREATE LOGIN [LoadMaintData]
   WITH PASSWORD = N'[email protected]!';

Now create the user and add it to a database role. Presently, the user needs to be a member of db_owner to allow the insert.

--// This needs to be executed in the context of the Elastic Database Jobs database.
CREATE USER [LoadMaintData]
   FOR LOGIN [LoadMaintData];
ALTER ROLE db_owner ADD MEMBER [LoadMaintData];

Now we can create the database scoped credential that will be supplied to the job step to insert the data into our central logging table.

--// This needs to be executed in the context of the Elastic Database Jobs database.
--// The credential name here matches the login name for clarity.
CREATE DATABASE SCOPED CREDENTIAL [LoadMaintData]
WITH IDENTITY = 'LoadMaintData',
     SECRET = '[email protected]!';

Now we are all set to go and collect our data.

Collector Job Step

Assumption: For this to work as-is, I am assuming that you have read and set up the index maintenance routine as outlined in my earlier tip, “Index Maintenance in Azure SQL Database with Ola’s Scripts and Elastic Database Jobs”.

Add Collector Job Step

Job step to collect data from many Azure SQL Databases and return it to one location.

Here we are creating a job step that we will add to the index maintenance job, this will be the final step in the job. You can download the full script from here.

The key things to point out in the script are the additional parameters needed to handle the collection of the data and its output to the database of our choice. Additionally, the command copies the data to a staging table, purges it from the source table, and then selects it out as the final step, which means we do not collect duplicate data.

Note: If, for whatever reason, the write to the output target fails, the activities in the job step command will not be rolled back.
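The downloadable script above is the authoritative version; as a rough sketch of what such a step looks like, the hypothetical example below uses the documented output parameters of jobs.sp_add_jobstep. The job, step, credential, target group, server, and database names are all illustrative placeholders, not values from the original script.

```sql
--// Run in the Elastic Database Jobs database.
--// All names below are illustrative placeholders; align them with the
--// job and target group created in the earlier index maintenance tip.
EXEC jobs.sp_add_jobstep
    @job_name = N'IndexMaintenance',
    @step_name = N'CollectCommandLog',
    @credential_name = N'JobRun',
    @target_group_name = N'MaintenanceTargets',
    @command = N'
        --// Stage the rows, purge only what was staged, then return the staged rows.
        SELECT * INTO #StageCommandLog FROM dbo.CommandLog;
        DELETE cl FROM dbo.CommandLog AS cl
        WHERE EXISTS (SELECT 1 FROM #StageCommandLog AS s WHERE s.ID = cl.ID);
        SELECT @@SERVERNAME AS AzureSQLServer, * FROM #StageCommandLog;',
    @output_type = N'SqlDatabase',
    @output_credential_name = N'LoadMaintData',
    @output_server_name = N'<yourserver>.database.windows.net',
    @output_database_name = N'<jobs_database>',
    @output_schema_name = N'Maintenance',
    @output_table_name = N'CentralCommandLog';
```

The internal_execution_id column in the results table is populated by the job agent when the output is written, which is why it must exist in the destination table.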

When the job runs it will collect the data from the dbo.CommandLog tables on each database that it executes on and then store it in the central repository that we have specified. Now that we have the data in one place we can start to analyze it effectively. Ola publishes a script that extracts more detailed information from the command log for the index operations. Because we have made some changes in the way that we are now storing the data we need to update the script a little to account for the new table name and columns.

I am going to create this as a view in the job database so that it is easy to query and perform further analysis on if needed with tools such as Power BI.

--// Script based on original work by Ola Hallengren -
--// Original Script downloaded from
CREATE VIEW Maintenance.IndexFragmentationAnalysis
AS
SELECT AzureSQLServer,
       DatabaseName,
       SchemaName,
       ObjectName,
       CASE
           WHEN ObjectType = 'U' THEN 'Table'
           WHEN ObjectType = 'V' THEN 'View'
       END AS ObjectType,
       IndexName,
       CASE
           WHEN IndexType = 1 THEN 'Clustered'
           WHEN IndexType = 2 THEN 'NonClustered'
           WHEN IndexType = 3 THEN 'XML'
           WHEN IndexType = 4 THEN 'Spatial'
       END AS IndexType,
       PartitionNumber,
       ExtendedInfo.value('(ExtendedInfo/PageCount)[1]', 'int') AS [PageCount],
       ExtendedInfo.value('(ExtendedInfo/Fragmentation)[1]', 'float') AS Fragmentation,
       StartTime,
       EndTime,
       CASE
           WHEN DATEDIFF(ss, StartTime, EndTime) / (24 * 3600) > 0 THEN
               CAST(DATEDIFF(ss, StartTime, EndTime) / (24 * 3600) AS NVARCHAR) + '.'
           ELSE ''
       END + RIGHT(CONVERT(NVARCHAR, EndTime - StartTime, 121), 12) AS Duration
FROM Maintenance.CentralCommandLog
WHERE CommandType = 'ALTER_INDEX';
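With the view in place, a quick sanity check might look like the following; this is a purely illustrative query that surfaces the most fragmented indexes across the estate first.

```sql
--// Example: list the worst-fragmented indexes across all collected servers.
SELECT AzureSQLServer,
       ObjectType,
       IndexType,
       [PageCount],
       Fragmentation,
       Duration
FROM Maintenance.IndexFragmentationAnalysis
ORDER BY Fragmentation DESC;
```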

This will give us the output below when we query the view, showing the important details around fragmentation, size, and when the action took place.

Index Fragmentation View Output

Details of index fragmentation collated in one location.


Here we have shown how easy it can be to collect information from many Azure SQL Databases without the overhead of needing to set up and manage large numbers of external objects and Elastic Query. Pulling all the data into one central location reduces the footprint in the source databases and makes it easy to see index management information for your whole Azure SQL Database estate.

Next Steps

Review these related tips:


About the author
MSSQLTips author John Martin John Martin is a Data Platform Engineer working in the financial sector as well as Director-at-Large for the PASS organisation and currently a Microsoft Data Platform MVP.

