Azure Data Factory Mapping Data Flow for Data Warehouse ETL



Problem

The advent of Azure's numerous data services has triggered both excitement and confusion in the data warehousing space for several reasons. Customers are interested in the concept of a modern, cloud-based data warehouse, but may be overwhelmed by the overabundance of cloud-based services available on the market. They want to know how to get started with implementing traditional data warehousing concepts such as dimensional modeling, Slowly Changing Dimensions, star schemas, and data warehouse ETL or ELT in the cloud. SQL Server Integration Services (SSIS), a traditional on-premises Microsoft technology, is a mature data integration tool that has been prevalent for over a decade. As more cloud-based tools become available and customers look to either migrate their ETL to the cloud or simply get started with understanding data warehousing in the cloud, how can they begin and what tools might they use?

Solution

As more companies express interest in a cloud-based Modern Data Warehouse, Azure Data Factory continues to play a significant role in this data integration and ETL process. Microsoft continues to meet this need by expanding the service offerings within Azure Data Factory, recently adding Mapping Data Flows, which allow for visual, code-free data transformation logic that is executed as activities within Azure Data Factory pipelines on scaled-out Azure Databricks clusters. Essentially this offering, which is in preview at the time of this writing, brings Azure Data Factory one step closer to Microsoft's traditional SQL Server Integration Services, which has been used for data warehouse ETL for many years. Mapping Data Flows has tremendous potential in the data warehousing space for several data warehousing patterns, such as Slowly Changing Dimensions Type I and Type II and fact extraction, transformation, and loading.

In this article, I will discuss the typical data warehouse load pattern known as Slowly Changing Dimension Type I and demonstrate, through a practical example, how Azure Data Factory's Mapping Data Flow can be used to design this pattern.

Modern Data Warehouse

Azure Data Factory plays a key role in the Modern Data Warehouse landscape since it integrates well with structured, unstructured, and on-premises data. More recently, it has begun to integrate quite well with Azure Data Lake Storage Gen2 and Azure Databricks as well. The diagram below does a good job of depicting where Azure Data Factory fits in the Modern Data Warehouse landscape. As we can see from this diagram, Data Factory is a key integrator between source, target, and analytical systems. Additionally, by adding a code-free graphical user interface such as Mapping Data Flow, which utilizes Spark clusters under the hood, Azure Data Factory is sure to play a key role in the design and development of the Modern Data Warehouse.

Modern DW Architecture Diagram

Prepare Azure Resources

As a first step to exploring Mapping Data Flows, we will need to create a few Azure resources. Specifically, the following services will need to be up and running, and they can be created by using the Azure portal.

Prerequisites

1) Create a Data Factory: Refer to the following Microsoft document to create an Azure Data Factory. Remember to choose V2, which contains Mapping Data Flows (in preview at the time of this article): "Quickstart: Create a data factory by using the Azure Data Factory UI."

2) Create a logical SQL Server and two SQL Databases (OLTP_Source and OLAP_Target): Refer to the following Microsoft article to create a SQL Database: "Quickstart: Create a single database in Azure SQL Database using the Azure portal."

While creating my source SQL database, I will select the Sample AdventureWorksLT OLTP database which will contain sample tables with data that I can then use in my Data Factory pipeline.

Sample AdventureWorks SQL DB

After creating my SQL Server, I will make sure to add my Client IP to the firewall settings so that I can access the SQL Database from my local machine.

SQL Server Show Firewall Settings
Add Client IP Firewall Settings

After configuring my firewall settings, I will navigate to SQL Server Management Studio and log in with my credentials to verify that I can see my source and target databases. Additionally, my source database will contain the AdventureWorksLT tables, data, and schema.

SSMS view of source and target db
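
As a quick sanity check, a simple metadata query like the one below (just a convenience, not part of the solution) can be run against the source database to confirm that the AdventureWorksLT tables were deployed:

-- List the AdventureWorksLT tables in the source database
SELECT TABLE_SCHEMA, TABLE_NAME
FROM   INFORMATION_SCHEMA.TABLES
WHERE  TABLE_SCHEMA = 'SalesLT'
ORDER BY TABLE_NAME;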

As a last step in verifying that all the necessary resources have been created, I will navigate to my resource group in the Azure portal and confirm that I have a Data Factory, a logical SQL Server, a source database, and a target database.

List of all Azure Resources in RG for this solution

Select a Data Warehouse Design Pattern

Slowly Changing Dimension Type I

Slowly Changing Dimensions are a commonly used advanced technique for dimensional data warehouses, used to capture changing data within a dimension over time. While there are numerous types of slowly changing dimensions, Type I, which we will cover in this article, simply overwrites the existing data values with new values. The advantage of this method is that it makes updating the dimension easy and limits growth to new records only. The disadvantage is that historical data will be lost, since the dimension will only ever contain the current value for each attribute.

This demonstration will leverage Slowly Changing Dimension Type I within the Azure Data Factory pipeline.
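
For readers coming from a T-SQL background, the overwrite behavior of a Type I dimension can be summarized with a MERGE statement along the following lines. This is only a conceptual sketch of the pattern that the Data Flow will implement, not code used by the pipeline; it compares only three attributes, omits the remaining dimension columns, and assumes the source and dimension tables are reachable from a single query, which is not the case here since they live in separate Azure SQL databases (that gap is exactly what the Data Flow fills).

-- Conceptual SCD Type I: match on the business key, overwrite changed
-- attribute values in place, and insert rows that do not yet exist.
MERGE dbo.DimCustomer AS tgt
USING SalesLT.Customer AS src
    ON tgt.CustomerID = src.CustomerID
WHEN MATCHED AND (   ISNULL(tgt.FirstName, '')   <> ISNULL(src.FirstName, '')
                  OR ISNULL(tgt.LastName, '')    <> ISNULL(src.LastName, '')
                  OR ISNULL(tgt.CompanyName, '') <> ISNULL(src.CompanyName, '')) THEN
    UPDATE SET tgt.FirstName   = src.FirstName,
               tgt.LastName    = src.LastName,
               tgt.CompanyName = src.CompanyName,
               tgt.UpdatedDate = GETDATE()
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, FirstName, LastName, CompanyName, InsertedDate, UpdatedDate)
    VALUES (src.CustomerID, src.FirstName, src.LastName, src.CompanyName, GETDATE(), GETDATE());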

Create an Azure Data Factory Pipeline and Datasets

Now that my Data Factory (V2) is online, I can begin creating my Pipeline for a Slowly Changing Type I ETL Pattern.

I'll start by clicking 'Author & Monitor' to launch my Azure Data Factory console.

Author and Monitor ADF

I will then create a new pipeline by clicking 'Create Pipeline'.

Click here to create pipeline in ADF

For more information related to creating a pipeline and dataset, check out the tip Create Azure Data Factory Pipeline.

I will name my pipeline DW ETL which will contain the following two datasets:

1) AzureSqlCustomerTable: This is my OLTP Azure SQL Source database which contains my AdventureWorksLT tables. Specifically, I will use the Customer Table for my ETL.

2) AzureSqlDimCustomerTable: This is my OLAP Azure SQL Target database which contains my DimCustomer dimension table. This dimension table differs from the OLTP source table in that it contains fewer fields, plus InsertedDate, UpdatedDate, and HashKey columns, which I will use for my SCD Type I ETL pattern. A sample table definition is sketched below.
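
For reference, a minimal dbo.DimCustomer definition along these lines will work for this demonstration. The column names match those used in this tip, while the data types and lengths are assumptions based on the AdventureWorksLT SalesLT.Customer table, so adjust them to fit your environment.

-- Run against the target (OLAP) database
CREATE TABLE dbo.DimCustomer
(
    CustomerID   INT           NOT NULL PRIMARY KEY,  -- business key from SalesLT.Customer
    Title        NVARCHAR(8)   NULL,
    FirstName    NVARCHAR(50)  NOT NULL,
    MiddleName   NVARCHAR(50)  NULL,
    LastName     NVARCHAR(50)  NOT NULL,
    Suffix       NVARCHAR(10)  NULL,
    CompanyName  NVARCHAR(128) NULL,
    SalesPerson  NVARCHAR(256) NULL,
    EmailAddress NVARCHAR(50)  NULL,
    InsertedDate DATETIME      NULL,   -- set on the first load of the row
    UpdatedDate  DATETIME      NULL,   -- refreshed on every load
    HashKey      NVARCHAR(128) NULL    -- sha1 of FirstName, LastName, CompanyName
);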

Once the Pipeline and Datasets are created, the Factory Resources will appear as follows:

Image of ADF resources (datasets and pipelines)

Create Azure Data Factory Mapping Data Flow

Now that I have created my Pipeline and Datasets for my source and target, I am ready to create my Data Flow for my SCD Type I. For additional detailed information related to Data Flow, check out this excellent tip on "Configuring Azure Data Factory Data Flow."

I will start by dragging my Data Flow from Activities onto my Data Factory canvas.

Drag and drop data flow

I will give it an intuitive name like SCDTypeI.

Data Flow in the pipeline

Prior to creating the Data Flow, I will make sure to turn on 'Data Flow Debug' mode, since I will be testing my pipeline extensively prior to deploying it into production.

For more detail on Data Flow Debug mode, check out Microsoft's document: "Mapping Data Flow Debug Mode".

Image of how to select debug mode.

When I toggle on the Debug mode, I will be asked if I wish to proceed.


Step to turn on debug mode

The clusters will take a few minutes to get ready.


Debug Mode clusters will spin up

Once Data Flow Debug mode is turned on, a green dot will appear to the right of it. Now I am ready to begin creating my Slowly Changing Dimension Type I Data Flow.

Data Flow debug mode is ON

I will start the Data Flow by adding and configuring my two datasets as sources.

Image of setting up dataflow sources

Next, I will click the + icon next to AzureSqlCustomerTable to add a Derived column, which I will call CreateHash.

Step to create hash in data flow

I'll then select my hash columns and use the sha1 hash function. For my scenario, the derived column will use the following expression:

sha1(FirstName+LastName+CompanyName)
Step to setup Hash in derived column.
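
For spot-checking from the SQL side, a roughly equivalent hash can be computed with HASHBYTES. This is only an approximation for illustration: the value may not match the Data Flow's sha1() output byte-for-byte because of string encoding and NULL concatenation differences, but it shows how a change to any of the three columns produces a new hash.

-- Approximate T-SQL counterpart of sha1(FirstName+LastName+CompanyName)
SELECT  CustomerID,
        LOWER(CONVERT(VARCHAR(40),
              HASHBYTES('SHA1', CONCAT(FirstName, LastName, CompanyName)), 2)) AS HashKey
FROM    SalesLT.Customer;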

I will then add the following multiple-input, schema modifier, and row modifier transformations to my data flow.

View of scd type I in the dataflow

Exists

This will check if the hash key already exists by comparing the newly created source hash key to the target hash key.

View of Exists step in the DF
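
Expressed in T-SQL, the Exists transformation behaves roughly like the anti-join below. SourceWithHash is only a placeholder name for the source stream after the CreateHash step; the pipeline itself performs this comparison inside the Data Flow, since the source and target sit in separate databases.

-- Keep only rows whose hash is new or changed, i.e. not already in the dimension
SELECT  s.*
FROM    SourceWithHash AS s            -- placeholder: source rows plus the CreateHash column
WHERE   NOT EXISTS (SELECT 1
                    FROM   dbo.DimCustomer AS d
                    WHERE  d.HashKey = s.HashKey);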

LookupDates

This will join the source to the target on CustomerID so that the existing InsertedDate values (where present) are pulled in for records that already exist in the dimension.

Step to add insert and update dates.

SetAttributes

This will add two derived columns: InsertedDate, which is set to the current timestamp when the looked-up value is null (i.e., the record is new), and UpdatedDate, which is always set to the current timestamp.

Step to add derived columns for Insert and Update Dates.
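
Taken together, the LookupDates join and the SetAttributes derived columns behave roughly like the following T-SQL. It is shown only to clarify the logic; ChangedRows is a placeholder name for the stream coming out of the Exists transformation.

-- Carry forward the existing InsertedDate for customers already in the dimension,
-- stamp brand-new customers with the current timestamp, and always refresh UpdatedDate.
SELECT  s.CustomerID,
        s.FirstName,
        s.LastName,
        s.CompanyName,
        COALESCE(d.InsertedDate, GETDATE()) AS InsertedDate,
        GETDATE()                           AS UpdatedDate,
        s.HashKey
FROM    ChangedRows AS s                   -- placeholder: output of the Exists step
LEFT JOIN dbo.DimCustomer AS d
        ON d.CustomerID = s.CustomerID;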

AlterRows

This Alter Row transformation allows for upserts; with the condition set to true(), every row that passes through the flow will be marked to be updated or inserted.

Step to alter rows based on upsert true condition.

Sink1

The sink will write back to the DimCustomer table.

Step to setup sink details.

Note that within the settings tab, 'Allow upsert' is checked, with CustomerID specified as the Key column.

Step to configure Upsert settings in sink.

Lastly, I will ensure that the mapping is accurate. 'Auto Mapping' may need to be disabled to correctly map the newly created derived columns.

Step to view sink mappings in DF pipeline.

Once my Slowly Changing Dimension Type I Data Flow is complete, it will look like this.

Completed DF in SCD Type I ADF pipeline

I have now completed designing and configuring my SCD Type I Data Flow. Since I have switched on debug mode, I will simply click 'Debug' in the pipeline view and wait for the pipeline to complete running.

How to run Debug from the pipeline.

Once the pipeline completes running, I will see a green check mark to the right, along with a Succeeded pipeline status. Since there was no data in my DimCustomer table, this pipeline run will load all records to my DimCustomer table.

Image of successfully completed pipeline run.
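
A simple way to confirm the initial full load is to compare row counts. Since Azure SQL Database does not support cross-database queries out of the box, each statement is run against its own database:

-- Run against the source database
SELECT COUNT(*) AS SourceRowCount FROM SalesLT.Customer;

-- Run against the target database
SELECT COUNT(*) AS DimRowCount FROM dbo.DimCustomer;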

Updating a Record

To test the functionality of this pipeline, I will pick a record from my OLTP system and update the LastName, which is one of the columns in my HashKey.

SELECT * FROM [SalesLT].[Customer] WHERE CustomerID = 148
Image of Last Name record to update

I will then go ahead and update the LastName for CustomerID 148 from Brewer to Update.

UPDATE [SalesLT].[Customer] SET LastName = 'Update' WHERE CustomerID = 148;
SELECT * FROM [SalesLT].[Customer] WHERE CustomerID = 148
Image of Last Name updated

Prior to re-running the ADF pipeline, when I do a select on my DimCustomer Table, I can see that the LastName is still 'Brewer'.

SELECT * FROM [dbo].[DimCustomer] WHERE CustomerID = 148
Image of Last Name record to update in Dim Customer table.

After I re-run my ADF pipeline, I will see that LastName has been updated appropriately which confirms that the ADF pipeline accounts for updates.

Image of Last Name record updated in the DimCustomer table
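
To make the Type I behavior explicit, the following query against the target database should show the new LastName of 'Update', an UpdatedDate that reflects the latest pipeline run, and an InsertedDate that is unchanged from the initial load:

SELECT CustomerID, LastName, InsertedDate, UpdatedDate, HashKey
FROM   dbo.DimCustomer
WHERE  CustomerID = 148;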

Inserting a Record

I will also test inserting a new record into my OLTP system and then re-run my ADF pipeline to see if the insert is picked up.

Image of Insert record statement.
USE [mssqltipsOLTP_Source]
GO
 
INSERT INTO [SalesLT].[Customer]
          ([NameStyle]
          ,[Title]
          ,[FirstName]
          ,[MiddleName]
          ,[LastName]
          ,[Suffix]
          ,[CompanyName]
          ,[SalesPerson]
          ,[EmailAddress]
          ,[Phone]
          ,[PasswordHash]
          ,[PasswordSalt]
          ,[rowguid]
          ,[ModifiedDate])
    VALUES
         (1
          ,'Mr.'
          ,'John'
          ,'B'
          ,'Doe'
          ,NULL
          ,'MSSQLTips'
          ,NULL
          ,NULL
          ,NULL
          ,'L/Rlwxzp4w7RWmEgXX+/A7cXaePEPcp+KwQhl2fJL7d='
          ,'1KjXYs4='
          ,NEWID()  -- generate a value for the rowguid column
          ,'2005-08-01 00:00:00.000')
GO

My new record has been inserted into the [SalesLT].[Customer] Table.

SELECT * FROM [SalesLT].[Customer] WHERE CompanyName = 'MSSQLTips'
Image of Inserted record select statement.

After I re-run my ADF Pipeline, I can see that the new OLTP record has also been inserted into my [dbo].[DimCustomer] table, which confirms that the ADF pipeline accounts for inserts.

Image of Inserted record in DimCustomer table after pipeline run.
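
The same check can be made directly against the target database:

SELECT CustomerID, FirstName, LastName, CompanyName, InsertedDate, UpdatedDate
FROM   dbo.DimCustomer
WHERE  CompanyName = 'MSSQLTips';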

Remember to turn off 'Data Flow Debug' mode when finished to prevent unnecessary costs and idle cluster utilization within Azure Data Factory.

Also, remember to clean up and delete any unused resources in your resource group as needed.

Delete the resource group and its associated resources when finished with the solution.
Next Steps
  • In this article, we discussed the Modern Data Warehouse and Azure Data Factory's Mapping Data Flows and their role in this landscape. We also set up our source, target, and data factory resources to prepare for designing a Slowly Changing Dimension Type I ETL pattern by using Mapping Data Flows. Additionally, we designed and tested a Slowly Changing Dimension Type I Data Flow and pipeline within Azure Data Factory.
  • Check out Slowly Changing Dimensions for more information related to all SCD types, including the advantages and disadvantages of each type.
  • Check out Microsoft's article: Get started quickly using templates in Azure Data Factory, as there are templates being added for Slowly Changing Dimension Type II, data cleansing and more.
  • Read The Data Warehouse Toolkit by Ralph Kimball for a definitive guide to dimensional modeling.
  • Explore creating Azure Data Factory pipelines for Slowly Changing Dimensions Type II and Fact Loading.





About the author
Ron L'Esteve is a trusted information technology thought leader and professional author residing in Illinois. He brings over 20 years of IT experience and is well-known for his impactful books and article publications on Data & AI Architecture, Engineering, and Cloud Leadership. Ron completed his Master's in Business Administration and Finance from Loyola University in Chicago.

This author pledges the content of this article is based on professional experience and not AI generated.




Comments For This Article




Tuesday, May 4, 2021 - 9:03:24 PM - Steve Neumersky
What do you consider the best way to implement AuditPkgExecution, AuditTableProcessing, and DimAudit using Azure Data Factory and Azure Synapse Analytics?

Friday, February 19, 2021 - 4:58:21 AM - S. M. Mousavi Jahan
Hi Ron L'Esteve,

Very nice and helpful article for learning ADF.

I was able to follow all steps in the article almost without any problem.
I just faced one problem about defining the schema of dbo.DimCustomer (at the beginning of article), which took me several hours.
I suggest mentioning the following sql script in the article.

CREATE TABLE dbo.DimCustomer (
[CustomerID] INT NOT NULL,
[Title] NVARCHAR (8) NULL,
[FirstName] [dbo].[Name] NOT NULL,
[MiddleName] [dbo].[Name] NULL,
[LastName] [dbo].[Name] NOT NULL,
[Suffix] NVARCHAR (10) NULL,
[CompanyName] NVARCHAR (128) NULL,
[SalesPerson] NVARCHAR (256) NULL,
[EmailAddress] NVARCHAR (50) NULL,
[InsertedDate] DATETIME NULL,
[UpdatedDate] DATETIME NULL,
[HashKey] NVARCHAR (128) NOT NULL,
PRIMARY KEY (CustomerID)
);

Thanx a lot for this valuable article.

Wednesday, February 5, 2020 - 10:35:32 AM - Ron L'Esteve

Hello,

I have a new article which talks a little about metadata and parameter driven pipelines for multiple tables, which are excellent features of ADF and Mapping Data flows.

This article may provide some inspiration for Metadata Driven Ingestion frameworks that you may be able to leverage for your use case:

https://www.mssqltips.com/sqlservertip/6302/azure-data-factory-pipeline-to-fully-load-all-sql-server-objects-to-adls-gen2/

Thanks


Wednesday, February 5, 2020 - 8:10:56 AM - Balan

Hi Ron L'Esteve,

Excellent Article. 

Can we do a parameterized Data Flow with SCD Type 1 support for multiple tables? With the above article I was able to do SCD Type 1 for a single table.

I am working on building a Metadata Driven Data Ingestion Framework using Azure Data Factory with SCD (Type 1 & 2) capabilities.

Appreciate your support on this.


Monday, November 11, 2019 - 9:47:16 AM - Ron L'Esteve

Hi John: I am also excited about the Synapse's ability to seamlessly integrate multiple platforms and would think this process could be taken much further since Mapping Data Flows has recently become generally available. I would recommend following (Mark Kromer https://kromerbigdata.com/) to learn more about the capabilities of ADF Mapping Data Flows with practical examples.

 

Hi Frank: 'AzureSqlDimCustomerTableStage' is the alias that I gave to the Source activity in the data flow pipeline stream, so no additional datasets would be needed and the alias can be named as desired to match the dataset names.


Sunday, November 10, 2019 - 6:46:33 PM - John Broomfield

With the release of Synapse does this process change drastically if you are building a new datawarehouse (not lifting and shifting)? This is a very comprehensive demo and I was looking for a tutorial or book that will help me learn and use Azure Data Pipeline and Synapse. Any suggestions would be greatly appreciated.


Monday, September 16, 2019 - 11:33:58 AM - Frank Fan

Nice one!

A confusion is that only two data sets are prepared (AzureSqlCustomerTable; AzureSqlDimCustomerTable), but when adding Exists, its Right stream property is set to another new data set (AzureSqlDimCustomerTableStage). should this one be created as a separate dataset as well? thanks,














