Azure Data Factory: Mapping Data Flows
What are mapping data flows?
 Code-free data transformation at scale
 Serverless, scaled-out, ADF-managed
Apache Spark™ engine
 Resilient flows handle structured and
unstructured data
 Operationalized as an ADF pipeline activity
Code-free data transformation at scale
 Intuitive UX lets you focus on building transformation logic
 Data cleansing
 Data validation
 Data aggregation
 No knowledge of Spark, cluster management, Scala, Python, etc. required
Mapping Data Flows Service Architecture
Data Flow Designer UI
Data flow script
Data Flow execution plan monitor
Design, debug, and manage data
transformation logic visually in a
browser UI
The UI builds a data transformation script that contains the metadata
and logical intent from the user. The script payload is combined with
the ADF JSON pipeline definition. ADF spins up a JIT, on-demand
Spark cluster and submits the script as a Spark job for execution on
the scaled-out cluster.
ADF Azure IR configurations define the Spark
cluster size for the executor job. After executing
your job, ADF presents an execution plan
indicating partitioning, timings, data
distribution, and data lineage details
Modern Data Warehouse (MDW)
Diagram: on-premises, cloud, and SaaS data flows through INGEST, PREPARE, TRANSFORM/PREDICT & ENRICH, and SERVE stages (with STORE and VISUALIZE), under Data Pipeline Orchestration & Monitoring
Common Data Flow Customer Scenarios
Common Data Warehouse and Data Analytics Scenarios
w/Mapping Data Flows
• DW Scenarios
• Slowly changing dimensions
• Late arriving dimensions
• Fact table loading
• Data Analytics Scenarios
• Verify data types and lengths
• NULL handling
• Domain value constraints
• Lookups
• PII masking
• Data analytics / aggregations
Data Engineer and Data Scientist Scenarios
1. Data deduping
2. Descriptive data statistics (data profiling)
• Length, type, mean, median, average, stddev …
3. Frequency distribution
4. Missing values
5. Enumerations / Lookups
6. Value replacement
7. Metadata validation
Data Profiling
https://techcommunity.microsoft.com/t5/azure-data-factory/how-to-save-your-data-profiler-summary-stats-in-adf-data-flows/ba-p/1243251
Replacing Values
• iif(length(title) == 0, toString(null()), title)
Splitting Data Based on Values
Pattern Matching
Enumerations / Lookups
Data De-Duplication and Distinct Rows
https://docs.microsoft.com/en-us/azure/data-factory/how-to-data-flow-dedupe-nulls-snippets
Fuzzy Lookups
Slowly changing dimensions
Fact loading into data warehouse
Metadata Validation Rules
https://www.youtube.com/watch?v=E_UD3R-VpYE
https://mssqldude.wordpress.com/2020/08/31/adf-data-flow-metadata-functions-explained/
Execute data wrangling at scale with the embedded Power Query (PQ) experience in pipelines
Authoring mapping data flows
Dedicated development canvas
Building transformation logic
 Transformations: A ‘step’ in the data flow
 Engine intelligently groups them at runtime
 19 currently available
 Core logic of data flow
 Add/Remove/Alter Columns
 Join or lookup data from datasets
 Change number or order of rows
 Aggregate data
 Hierarchical to relational
Source transformation
 Define the data read by your
data flow
 Import projection vs generic
 Schema drift
 Connector specific properties and
optimizations
 Min: 1, Max: ∞
 Define in-line or use dataset
Source: In-line vs dataset
 Define all source properties within a data flow or use a separate
entity to store them
 Dataset:
 Reusable in other ADF activities such as Copy
 Not based in Spark -> some settings overridden
 In-line
 Useful when using flexible schemas, one-off source instances or parameterized sources
 Do not need “dummy” dataset object
 Based in Spark, properties native to data flow
Duplicating data streams
 Duplicate data stream from any
stage of your data flow
 Select ‘New branch’
 Operate on same data with
different transformation
requirements
 Self-joins
 Writing to different sinks
 Aggregating in one branch
Joining two data streams together
 Use Join transformation to append columns from incoming stream to
any stream in your data flow
 Join types: full outer, inner, left outer, right outer, cross
 SQL Join equivalent
 Match on computed columns or use non-equality conditions
 Broadcast small data streams to cache data and improve
performance
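A minimal data flow script (DFS) sketch of a join, assuming two incoming streams named OrdersSource and CustomersSource (all names and option values are illustrative):

    OrdersSource, CustomersSource join(
        OrdersSource@customerId == CustomersSource@customerId,
        joinType: 'inner',
        broadcast: 'auto'
    ) ~> JoinedOrders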
Lookup transformation
 Similar to left outer join, but with more functionality
 All incoming rows are passed through regardless of match
 Matching conditions same as a join
 Multi or single row lookup
 Match on all, first, last, or any row that meets join conditions
 isMatch() function can be used in downstream transformations to
verify output
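As a hedged DFS sketch, a lookup followed by a derived column that uses isMatch() to flag rows (stream, column, and option values are illustrative):

    Orders, Rates lookup(
        Orders@currency == Rates@currency,
        multiple: false,
        pickup: 'any',
        broadcast: 'auto'
    ) ~> LookupRates
    LookupRates derive(hasRate = isMatch()) ~> FlagMatches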
Exists transformation
 Check for existence of a value in another stream
 SQL Exists equivalent
 See if any row matches in a subquery, just like SQL
 Filter based on join matching conditions
 Choose Exist or Not Exist for your filter conditions
 Can specify a custom expression
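A DFS sketch of a Not Exist check, assuming streams named NewRows and ExistingRows:

    NewRows, ExistingRows exists(
        NewRows@id == ExistingRows@id,
        negate: true,
        broadcast: 'auto'
    ) ~> NotInTarget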
Union transformation
 Combine rows from multiple
streams
 Add as many streams as
needed
 Combine data based upon
column name or ordinal
column position
 Use cases:
 Similar data from different connections
that undergoes the same transformations
 Writing multiple data streams into the
same sink
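For example, a DFS sketch combining two streams by column name (stream names assumed):

    SalesEast, SalesWest union(byName: true) ~> AllSales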
Conditional split
 Split data into separate streams
based upon conditions
 Use data flow expression language to
evaluate boolean conditions
 Use cases:
 Sinking subset of data to different
locations
 Perform different calculations on data
depending on a set of values
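A DFS sketch of a conditional split, loosely following the documented pattern (conditions and names are illustrative):

    Movies split(
        year < 1960,
        year > 1980,
        disjoint: false
    ) ~> MovieSplit@(before1960, after1980, allOthers)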
Derived column
 Transform data at row and column level using expression language
 Generate new or modify existing columns
 Build expressions using the expression builder
 Handle structured or unstructured data
 Use column patterns to match on rules and regular expressions
 Can be used to transform multiple columns in bulk
 Most heavily used transformation
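For instance, a sketch of a derived column that cleans two columns (stream and column names assumed):

    RawInput derive(
        title = upper(trim(title)),
        rating = iif(isNull(rating), 0, toInteger(rating))
    ) ~> CleanColumns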
Select transformation
 Metadata and column maintenance
 SQL Select statement
 Alias or renames data stream and columns
 Prune unwanted or duplicate columns
 Common after joins and lookups
 Rule-based mapping for flexible schemas, bulk mapping
 Map hierarchical columns to flat structure
Surrogate key transformation
 Generate an incrementing key to use as a non-business key in your data
 To seed the starting value of your surrogate key, use derived column
and a lookup from an existing table
 Examples are in documentation
 Useful for generating keys for star schema dimension tables
 More performant than using the Window transformation with
RowNumber() function
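A DFS sketch of the surrogate key transformation (output column name and seed are illustrative):

    DimInput keyGenerate(
        output(surrogateKey as long),
        startAt: 1L
    ) ~> AddSurrogateKey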
Aggregate transformation
 Aggregate data into groups using aggregate functions
 Like SQL GROUP BY clause in a Select statement
 Aggregate functions include sum(), max(), avg(), first(), collect()
 Choose columns to group by
 One row for each unique group by column value
 Only columns used in transformation are in output data stream
 Use self-join to append to existing data
 Supports pattern matching
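A sketch of a group-by aggregate in DFS (stream and column names assumed):

    Sales aggregate(
        groupBy(region),
        totalAmount = sum(amount),
        orderCount = count()
    ) ~> SalesByRegion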
Pivot and unpivot transformations
 Pivot row values into new columns and vice-versa
 Both are aggregate transformations that require aggregate functions
 If pivot key values are not specified, all pivoted columns become drifted
 Use map drifted quick action to add to schema quickly
Window transformation
 Aggregates data across
“windows” of data partitions
 Used to compare a row of data against
others in its ‘group’
 Group determined by group by
columns, sorting conditions
and range bounds
 Used for ranking rows in a
group and getting lead/lag
 Sorting causes reshuffling of
data
 “Expensive” operation
Filter transformation
 Filter rows based upon an
expression
 Like SQL WHERE clause
 Expressions return true or false
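For example (stream and column names assumed):

    Movies filter(year >= 1990 && toInteger(Rating) >= 3) ~> RecentWellRated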
Alter row transformation
 Mark rows as Insert, Update, Delete, or Upsert
 Like SQL MERGE statement
 Insert by default
 Define policies to update your database
 Works with SQL DB, Synapse, Cosmos DB, and Delta Lake
 Specify allowed update methods in each sink
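A hedged DFS sketch marking rows for upsert and delete based on a hypothetical status column:

    Changes alterRow(
        upsertIf(status != 'deleted'),
        deleteIf(status == 'deleted')
    ) ~> MarkRows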
Flatten transformation
 Unroll array values into individual
rows
 One row per value
 Used to convert hierarchies to flat
structures
 Opposite of collect() aggregate
function
Parse transformation
 Turn string columns that have complex formats into JSON or XML
Sort transformation
 Sort your data by column values
 SQL Order By equivalent
 Use sparingly: Reshuffles and coalesces data
 Reduces effectiveness of data partitioning
 Does not optimize speed like legacy ETL tools
 Useful for data exploration and validation
Rank transformation
 Rank and dense rank options
 Recommended instead of using Window transformation with Rank
function
 More performant than Window rank() and denseRank()
Stringify transformation
 Turn complex data types into strings
Assert transformation
 Build data quality and data validation rules
 Set data expectations
 If column/row values do not pass assertion, assign an ID and a
description
 Can include row/column values in descriptions
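A rough DFS sketch of an assert; the exact expectTrue() argument order here is an assumption, so verify against the script your factory generates:

    Customers assert(
        expectTrue(length(customerId) > 0, false, 'assertCustomerId',
            'customerId must not be empty')
    ) ~> CheckCustomers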
Sink transformation
 Define the properties for landing your data in your destination
target data store
 Define using dataset or in-line
 Can map columns similar to select transformation
 Import schema definition from destination
 Set actions on destinations
 Truncate table or clear folder, SQL pre/post actions, database update methods
 Choose how the written data is partitioned
 ‘Use current partitioning’ is almost always fastest
 Note: Writing to single file can be very slow with large amounts
of data
Mapping data flow expression language
Visual expression builder
Screenshot callouts:
• List of columns being modified
• All available functions, fields, parameters, local variables, cached sinks …
• Build expressions here with full auto-complete and syntax checking
• View results of your expression in the data preview pane with live, interactive results
• Name of the current column you are modifying
Expression language
 Expressions are built using the data flow expression language
 Expressions can reference:
 Built-in expression functions
 Defined input schema columns
 Data flow parameters
 Literals
 Certain transformations have unique functions
 count(), sum() in Aggregate, denseRank() in Window, etc
 Evaluates to Spark data types
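A few representative expressions (column names assumed):

    upper(left(trim(name), 10))
    iif(isNull(amount), 0.0, amount * 1.1)
    toString(byName('lastUpdated'))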
Debug mode
 Quickly verify logic during development on small interactive cluster
 4 core, 60-minute time to live
 Enables the following:
 Get data preview snapshot at each transformation
 Preview output of expression in expression builder
 Run debug pipeline with no spin up
 Import Spark projection of source schema
 Rule of thumb: If developing Data Flows, turn on right away
 Initial 3-5-minute start up time
Debug mode: data preview
Debug mode: data profiling
Debug mode: expression output
Parameterizing data flows
 Both dataset properties and data-flow expressions can be
parameterized
 Passed in via data flow activity
 Can use data flow or pipeline expression language
 Expressions can reference $parameterName
 Can be literal values or column references
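For example, a filter referencing a data flow parameter (the parameter name $startDate is illustrative):

    Orders filter(orderDate >= $startDate) ~> FilteredOrders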
Referencing data flow parameters
Working with flexible schemas
Schema drift
 In real-world data integration solutions, source/target data stores
change shape
 Source data fields can change names
 Number of columns can change over time
 Traditional ETL processes break when schemas drift
 Mapping data flow has built-in handling for flexible schemas
 Patterns, rule-based mappings, byName(s) function, etc
 Source: Read additional columns on top of what is defined in the source schema
 Sink: Write additional columns on top of what is defined in the sink schema
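For instance, a drifted column can be referenced by name inside a derived column (stream and column names assumed):

    DriftedInput derive(movieTitle = toString(byName('movieTitle'))) ~> MapDrifted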
Column pattern matching
 Match by name, type, stream, position
Rule-based mapping
Operationalizing and monitoring data flows
Data flow activity
 Run as activity in pipeline
 Integrated with existing ADF control flow, scheduling, orchestration, monitoring, CI/CD
 Choose which integration runtime (IR) to run on
 # of cores, compute type, cluster time to live
 Assign parameters
Data flow integration runtime
 Integrated with existing Azure IR
 Choose compute type, # of cores, time to live
 Time to live: time a cluster is alive after last execution concludes
 Minimal start up time for sequential data flows
 Parameterize compute type, # of cores if using Auto Resolve
Monitoring data flows
Data flow security considerations
 All data stays inside the VMs that run the Databricks cluster, which are
spun up JIT for each job
• Azure Databricks attaches storage to the VMs for logging and spill-over from in-memory data frames
during job operation. These storage accounts are fully encrypted and within the Microsoft tenant.
• Each cluster is single-tenant and specific to your data and job. This cluster is not shared with any
other tenant
• Data flow processes are completely ephemeral. Once a job is completed,
all associated resources are destroyed
• Both cluster and storage account are deleted
• Data transfers in data flows are protected using certificates
• Active telemetry is logged and maintained for 45 days for troubleshooting
by the Azure Data Factory team
Data flow best practices and optimizations
Best practices – Lifecycle
1. Test your transformation logic using debug mode and data
preview
 Limit source size or use sample files
2. Test end-to-end pipeline logic using pipeline debug
 Verify data is read/written correctly
 Used as smoke test before merging your changes
3. Publish and trigger your pipelines within a Dev Factory
 Test performance and cluster size
4. Promote pipelines to higher environments such as UAT and PROD
using CI/CD
 Increase size and scope of data as you get to higher environments
Best practices – Debug (Data Preview)
 Data Preview
 Data preview is inside the data flow designer transformation properties
 Uses row limits and sampling techniques to preview a small sample of your data
 Allows you to build and validate units of logic with samples of data in real time
 You have control over the size of the data limits under Debug Settings
 If you wish to test with larger datasets, set a larger compute size in the Azure IR when
switching on “Debug Mode”
 Data Preview is only a snapshot of data in memory from Spark data frames. This feature does
not write any data, so the sink drivers are not utilized and not tested in this mode.
Best practices – Debug (Pipeline Debug)
 Pipeline Debug
 Click debug button to test your data flow inside of a pipeline
 Default debug limits the execution runtime so you will want to limit data sizes
 Sampling can be applied here as well by using the “Enable Sampling” option in each Source
 Use the debug button option of “use activity IR” when you wish to use a job execution
compute environment
 This option is good for debugging with larger datasets. It will not have the same execution timeout limit as the
default debug setting
Optimizing data flows
 Transformation order generally does not matter
 Data flows have a Spark optimizer that reorders logic to perform as well as it can
 Repartitioning and reshuffling data negates the optimizer
 Each transformation has ‘Optimize’ tab to control partitioning
strategies
 Generally do not need to alter
 Altering cluster size and type has performance impact
 Four components
1. Cluster startup time
2. Reading from sources
3. Transformation time
4. Writing to sinks
Identifying bottlenecks
1. Cluster startup time
2. Sink processing time
3. Source read time
4. Transformation stage time
1. Sequential executions can
lower the cluster startup time
by setting a TTL in Azure IR
2. Total time to process the
stream from source to sink.
There is also a post-processing
time when you click on the Sink
that will show you how much
time Spark had to spend with
partition and job clean-up.
Writing to a single file and slow
database connections will
increase this time
3. Shows you how long it took to
read data from source.
Optimize with different source
partition strategies
4. This will show you bottlenecks
in your transformation logic.
With larger general purpose
and memory optimized IRs, most
of these operations occur in
memory in data frames and are
usually the fastest operations
in your data flow
Best practices - Sources
 When reading from file-based sources, data flow automatically
partitions the data based on size
 ~128 MB per partition, evenly distributed
 ‘Use current partitioning’ will be fastest for file-based and Synapse using PolyBase
 Enable staging for Synapse
 For Azure SQL DB, use Source partitioning on column with high
cardinality
 Improves performance, but can saturate your source database
 Reading can be limited by the I/O of your source
Optimizing transformations
 Each transformation has its own optimize tab
 Generally better to not alter -> reshuffling is a relatively slow process
 Reshuffling can be useful if data is very skewed
 One node has a disproportionate amount of data
 For Joins, Exists and Lookups:
 If you have many of these, memory optimized greatly increases performance
 Can ‘Broadcast’ if the data on one side is small
 Rule of thumb: Less than 50k rows
 Increasing integration runtime can speed up transformations
 Transformations that require reshuffling like Sort negatively impact
performance
Best practices - Sinks
 SQL:
 Disable indexes on target with pre/post SQL scripts
 Increase SQL capacity during pipeline execution
 Enable staging when using Synapse
 File-based sinks:
 ‘Use current partitioning’ allows Spark to create the output
 Output to single file is a very slow operation
 Combines data into single partition
 Often unnecessary for whoever is consuming the data
 Can set naming patterns or use data in column
 Any reshuffling of data is slow
 Cosmos DB
 Set throughput and batch size to meet performance requirements
Azure Integration Runtime
 Data Flows use JIT compute to minimize running expensive clusters
when they are mostly idle
 Generally more economical, but each cluster takes ~4 minutes to spin up
 IR specifies what cluster type and core-count to use
 Memory optimized is best; general purpose is the default
 When running sequential jobs, utilize Time to Live to reuse the cluster
between executions
 Keeps the cluster alive for TTL minutes after execution for a new job to use
 Maximum one job per cluster
 Rule of thumb: start small and scale up
Data flow script
Data flow script (DFS)
 DFS defines the logical intent of your data transformations
 Script is bundled and marshalled to Spark cluster as a job for
execution
 DFS can be auto-generated and used for programmatic creation of
data flows
 Access script behind UI via “Script” button
 Click “Copy as Single Line” to save a version of the script that is ready for
JSON
 https://docs.microsoft.com/en-us/azure/data-factory/data-flow-script
Data flow script (DFS)
Annotated script regions: source projection and properties (1), aggregate transformation (2), unpivot transformation (3), sort (4), sink (5)
• Syntax: input_name transform_type(properties) ~> stream_name
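Putting the syntax together, a minimal end-to-end DFS sketch (schema and names are illustrative):

    source(output(
            movieId as string,
            title as string,
            year as integer
        ),
        allowSchemaDrift: true,
        validateSchema: false) ~> MoviesSource
    MoviesSource filter(year > 2000) ~> RecentMovies
    RecentMovies sink(allowSchemaDrift: true,
        validateSchema: false) ~> MoviesSink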
Data flow script (DFS)
Annotated script regions: source projection and properties (1), select transformation mappings and properties (2), distinct aggregate (3), row count aggregates (4, 5), sink transformation (6)
• ~> name_of_transform
• New branch does not require any script element
ETL Migrations
ETL Tool Migration Overview
 Migrating from an existing large enterprise ETL installation to ADF and data flows requires
adherence to a formal methodology that incorporates classic SDLC, change management,
project management, and a deep understanding of your current data estate and ETL
requirements.
 Successful migration projects require project plans, executive sponsorship, budget, and a
dedicated team to focus on rebuilding the ETL in ADF.
 For existing on-prem ETL estates, it is very important to learn the basics of cloud, Azure, and ADF
generally before taking this Data Flows training.
Sponsorship
Discovery
Training
• On-prem to Cloud, Azure general training, ADF general training, Data Flows training
• A general understanding of the difference between legacy client/server on-prem ETL
architectures and cloud-based Big Data processing is required
• ADF and Data Flows execute on Spark, so learn the fundamentals of the difference between
row-by-row processing on a local server and batch/distributed computing on Spark in the
Cloud
Execution
• Start with the top 10 mission-critical ETL mappings and list out the primary logical goals and
steps achieved in each
• Use sample data and debug each scenario as new pipelines and data flows in ADF
• UAT each of those 10 mappings in ADF using sample data
• Lay out end-to-end project plan for remaining mapping migrations
• Plan the remainder of the project into quarterly calendar milestones
• Expect each phase to take around 3 months
• Majority of large existing ETL infrastructure modernization migrations take 12-18 months to
complete
ETL System Integrator Partners
 Bitwise Global
 https://www.bitwiseglobal.com/webinars/automated-etl-conversion-to-adf-for-accelerated-data-warehouse-migration-on-azure/
 Next Pathway
 https://blog.nextpathway.com/next-pathway-adds-ground-breaking-capability-to-translate-informatica-and-datastage-etl-pipelines-to-the-cloud-in-latest-version-of-shift

Weitere ähnliche Inhalte

Was ist angesagt?

Azure Data Factory Data Flow
Azure Data Factory Data FlowAzure Data Factory Data Flow
Azure Data Factory Data FlowMark Kromer
 
Databricks + Snowflake: Catalyzing Data and AI Initiatives
Databricks + Snowflake: Catalyzing Data and AI InitiativesDatabricks + Snowflake: Catalyzing Data and AI Initiatives
Databricks + Snowflake: Catalyzing Data and AI InitiativesDatabricks
 
Achieving Lakehouse Models with Spark 3.0
Achieving Lakehouse Models with Spark 3.0Achieving Lakehouse Models with Spark 3.0
Achieving Lakehouse Models with Spark 3.0Databricks
 
Azure Synapse Analytics Overview (r2)
Azure Synapse Analytics Overview (r2)Azure Synapse Analytics Overview (r2)
Azure Synapse Analytics Overview (r2)James Serra
 
Modern Data Warehouse with Azure Synapse.pdf
Modern Data Warehouse with Azure Synapse.pdfModern Data Warehouse with Azure Synapse.pdf
Modern Data Warehouse with Azure Synapse.pdfKeyla Dolores Méndez
 
Building Dynamic Pipelines in Azure Data Factory (SQLSaturday Oslo)
Building Dynamic Pipelines in Azure Data Factory (SQLSaturday Oslo)Building Dynamic Pipelines in Azure Data Factory (SQLSaturday Oslo)
Building Dynamic Pipelines in Azure Data Factory (SQLSaturday Oslo)Cathrine Wilhelmsen
 
Large Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured StreamingLarge Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured StreamingDatabricks
 
Introduction to Azure Databricks
Introduction to Azure DatabricksIntroduction to Azure Databricks
Introduction to Azure DatabricksJames Serra
 
Azure Data Factory ETL Patterns in the Cloud
Azure Data Factory ETL Patterns in the CloudAzure Data Factory ETL Patterns in the Cloud
Azure Data Factory ETL Patterns in the CloudMark Kromer
 
Azure Data Factory Data Flows Training v005
Azure Data Factory Data Flows Training v005Azure Data Factory Data Flows Training v005
Azure Data Factory Data Flows Training v005Mark Kromer
 
1- Introduction of Azure data factory.pptx
1- Introduction of Azure data factory.pptx1- Introduction of Azure data factory.pptx
1- Introduction of Azure data factory.pptxBRIJESH KUMAR
 
Azure Databricks is Easier Than You Think
Azure Databricks is Easier Than You ThinkAzure Databricks is Easier Than You Think
Azure Databricks is Easier Than You ThinkIke Ellis
 
Pipelines and Packages: Introduction to Azure Data Factory (DATA:Scotland 2019)
Pipelines and Packages: Introduction to Azure Data Factory (DATA:Scotland 2019)Pipelines and Packages: Introduction to Azure Data Factory (DATA:Scotland 2019)
Pipelines and Packages: Introduction to Azure Data Factory (DATA:Scotland 2019)Cathrine Wilhelmsen
 
(STG401) Amazon S3 Deep Dive & Best Practices
(STG401) Amazon S3 Deep Dive & Best Practices(STG401) Amazon S3 Deep Dive & Best Practices
(STG401) Amazon S3 Deep Dive & Best PracticesAmazon Web Services
 

Was ist angesagt? (20)

Azure Data Factory Data Flow
Azure Data Factory Data FlowAzure Data Factory Data Flow
Azure Data Factory Data Flow
 
Azure Data Engineering.pptx
Azure Data Engineering.pptxAzure Data Engineering.pptx
Azure Data Engineering.pptx
 
Databricks + Snowflake: Catalyzing Data and AI Initiatives
Databricks + Snowflake: Catalyzing Data and AI InitiativesDatabricks + Snowflake: Catalyzing Data and AI Initiatives
Databricks + Snowflake: Catalyzing Data and AI Initiatives
 
Achieving Lakehouse Models with Spark 3.0
Achieving Lakehouse Models with Spark 3.0Achieving Lakehouse Models with Spark 3.0
Achieving Lakehouse Models with Spark 3.0
 
Introduction to AWS Glue
Introduction to AWS GlueIntroduction to AWS Glue
Introduction to AWS Glue
 
Azure Synapse Analytics Overview (r2)
Azure Synapse Analytics Overview (r2)Azure Synapse Analytics Overview (r2)
Azure Synapse Analytics Overview (r2)
 
Introduction to Amazon Athena
Introduction to Amazon AthenaIntroduction to Amazon Athena
Introduction to Amazon Athena
 
Modern Data Warehouse with Azure Synapse.pdf
Modern Data Warehouse with Azure Synapse.pdfModern Data Warehouse with Azure Synapse.pdf
Modern Data Warehouse with Azure Synapse.pdf
 
Building Dynamic Pipelines in Azure Data Factory (SQLSaturday Oslo)
Building Dynamic Pipelines in Azure Data Factory (SQLSaturday Oslo)Building Dynamic Pipelines in Azure Data Factory (SQLSaturday Oslo)
Building Dynamic Pipelines in Azure Data Factory (SQLSaturday Oslo)
 
AWS glue technical enablement training
AWS glue technical enablement trainingAWS glue technical enablement training
AWS glue technical enablement training
 
Large Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured StreamingLarge Scale Lakehouse Implementation Using Structured Streaming
Large Scale Lakehouse Implementation Using Structured Streaming
 
Introduction to Azure Databricks
Introduction to Azure DatabricksIntroduction to Azure Databricks
Introduction to Azure Databricks
 
Azure Data Factory ETL Patterns in the Cloud
Azure Data Factory ETL Patterns in the CloudAzure Data Factory ETL Patterns in the Cloud
Azure Data Factory ETL Patterns in the Cloud
 
Azure Data Factory Data Flows Training v005
Azure Data Factory Data Flows Training v005Azure Data Factory Data Flows Training v005
Azure Data Factory Data Flows Training v005
 
1- Introduction of Azure data factory.pptx
1- Introduction of Azure data factory.pptx1- Introduction of Azure data factory.pptx
1- Introduction of Azure data factory.pptx
 
Azure Databricks is Easier Than You Think
Azure Databricks is Easier Than You ThinkAzure Databricks is Easier Than You Think
Azure Databricks is Easier Than You Think
 
Pipelines and Packages: Introduction to Azure Data Factory (DATA:Scotland 2019)
Pipelines and Packages: Introduction to Azure Data Factory (DATA:Scotland 2019)Pipelines and Packages: Introduction to Azure Data Factory (DATA:Scotland 2019)
Pipelines and Packages: Introduction to Azure Data Factory (DATA:Scotland 2019)
 
(STG401) Amazon S3 Deep Dive & Best Practices
(STG401) Amazon S3 Deep Dive & Best Practices(STG401) Amazon S3 Deep Dive & Best Practices
(STG401) Amazon S3 Deep Dive & Best Practices
 
Azure Synapse Analytics
Azure Synapse AnalyticsAzure Synapse Analytics
Azure Synapse Analytics
 
Modern data warehouse
Modern data warehouseModern data warehouse
Modern data warehouse
 

Ähnlich wie Azure Data Factory Mapping Data Flows

Azure Data Factory Data Flows Training (Sept 2020 Update)
Azure Data Factory Data Flows Training (Sept 2020 Update)Azure Data Factory Data Flows Training (Sept 2020 Update)
Azure Data Factory Data Flows Training (Sept 2020 Update)Mark Kromer
 
SSIS 2008 R2 data flow
SSIS 2008 R2 data flowSSIS 2008 R2 data flow
SSIS 2008 R2 data flowSlava Kokaev
 
White jason presentation
White jason presentationWhite jason presentation
White jason presentationWhiteJason
 
6.1\9 SSIS 2008R2_Training - DataFlow Transformations
6.1\9 SSIS 2008R2_Training - DataFlow Transformations6.1\9 SSIS 2008R2_Training - DataFlow Transformations
6.1\9 SSIS 2008R2_Training - DataFlow TransformationsPramod Singla
 
MIS5101 WK10 Outcome Measures
MIS5101 WK10 Outcome MeasuresMIS5101 WK10 Outcome Measures
MIS5101 WK10 Outcome MeasuresSteven Johnson
 
ASP.NET 3.5 SP1
ASP.NET 3.5 SP1ASP.NET 3.5 SP1
ASP.NET 3.5 SP1Dave Allen
 
Syntactic Mediation in Grid and Web Service Architectures
Syntactic Mediation in Grid and Web Service ArchitecturesSyntactic Mediation in Grid and Web Service Architectures
Syntactic Mediation in Grid and Web Service ArchitecturesMartin Szomszor
 
Tech Days09 Sqldev
Tech Days09 SqldevTech Days09 Sqldev
Tech Days09 Sqldevllangit
 
SQL Server 2008 for Developers
SQL Server 2008 for DevelopersSQL Server 2008 for Developers
SQL Server 2008 for Developersllangit
 
SQL Server 2008 for .NET Developers
SQL Server 2008 for .NET DevelopersSQL Server 2008 for .NET Developers
SQL Server 2008 for .NET Developersllangit
 
Large-Scale Distributed Storage System for Business Provenance - Cloud 2011
Large-Scale Distributed Storage System for Business Provenance - Cloud 2011Large-Scale Distributed Storage System for Business Provenance - Cloud 2011
Large-Scale Distributed Storage System for Business Provenance - Cloud 2011Szabolcs Rozsnyai
 
Informatica overview
Informatica overviewInformatica overview
Informatica overviewkarthik kumar
 
Informatica overview
Informatica overviewInformatica overview
Informatica overviewkarthik kumar
 
Informatica Designer Module
Informatica Designer ModuleInformatica Designer Module
Informatica Designer Moduleganblues
 
Sas clinical training
Sas clinical trainingSas clinical training
Sas clinical trainingVasudha India
 
Big data technology unit 3
Big data technology unit 3Big data technology unit 3
Big data technology unit 3RojaT4
 

Ähnlich wie Azure Data Factory Mapping Data Flows (20)

Azure Data Factory Data Flows Training (Sept 2020 Update)
Azure Data Factory Data Flows Training (Sept 2020 Update)Azure Data Factory Data Flows Training (Sept 2020 Update)
Azure Data Factory Data Flows Training (Sept 2020 Update)
 
SSIS 2008 R2 data flow
SSIS 2008 R2 data flowSSIS 2008 R2 data flow
SSIS 2008 R2 data flow
 
Cis266 final review
Cis266 final reviewCis266 final review
Cis266 final review
 
White jason presentation
White jason presentationWhite jason presentation
White jason presentation
 
6.1\9 SSIS 2008R2_Training - DataFlow Transformations
6.1\9 SSIS 2008R2_Training - DataFlow Transformations6.1\9 SSIS 2008R2_Training - DataFlow Transformations
6.1\9 SSIS 2008R2_Training - DataFlow Transformations
 
MIS5101 WK10 Outcome Measures
MIS5101 WK10 Outcome MeasuresMIS5101 WK10 Outcome Measures
MIS5101 WK10 Outcome Measures
 
ASP.NET 3.5 SP1
ASP.NET 3.5 SP1ASP.NET 3.5 SP1
ASP.NET 3.5 SP1
 
AWS RDS Migration Tool
AWS RDS Migration Tool AWS RDS Migration Tool
AWS RDS Migration Tool
 
Syntactic Mediation in Grid and Web Service Architectures
Syntactic Mediation in Grid and Web Service ArchitecturesSyntactic Mediation in Grid and Web Service Architectures
Syntactic Mediation in Grid and Web Service Architectures
 
Tech Days09 Sqldev
Tech Days09 SqldevTech Days09 Sqldev
Tech Days09 Sqldev
 
SQL Server 2008 for Developers
SQL Server 2008 for DevelopersSQL Server 2008 for Developers
SQL Server 2008 for Developers
 
SQL Server 2008 for .NET Developers
SQL Server 2008 for .NET DevelopersSQL Server 2008 for .NET Developers
SQL Server 2008 for .NET Developers
 
Large-Scale Distributed Storage System for Business Provenance - Cloud 2011
Large-Scale Distributed Storage System for Business Provenance - Cloud 2011Large-Scale Distributed Storage System for Business Provenance - Cloud 2011
Large-Scale Distributed Storage System for Business Provenance - Cloud 2011
 
Informatica overview
Informatica overviewInformatica overview
Informatica overview
 
Informatica overview
Informatica overviewInformatica overview
Informatica overview
 
Os Lonergan
Os LonerganOs Lonergan
Os Lonergan
 
Informatica Designer Module
Informatica Designer ModuleInformatica Designer Module
Informatica Designer Module
 
Sas clinical training
Sas clinical trainingSas clinical training
Sas clinical training
 
Chapter.07
Chapter.07Chapter.07
Chapter.07
 
Big data technology unit 3
Big data technology unit 3Big data technology unit 3
Big data technology unit 3
 

Mehr von Mark Kromer

Fabric Data Factory Pipeline Copy Perf Tips.pptx
Fabric Data Factory Pipeline Copy Perf Tips.pptxFabric Data Factory Pipeline Copy Perf Tips.pptx
Fabric Data Factory Pipeline Copy Perf Tips.pptxMark Kromer
 
Build data quality rules and data cleansing into your data pipelines
Build data quality rules and data cleansing into your data pipelinesBuild data quality rules and data cleansing into your data pipelines
Build data quality rules and data cleansing into your data pipelinesMark Kromer
 
Data cleansing and data prep with synapse data flows
Data cleansing and data prep with synapse data flowsData cleansing and data prep with synapse data flows
Data cleansing and data prep with synapse data flowsMark Kromer
 
Mapping Data Flows Perf Tuning April 2021
Mapping Data Flows Perf Tuning April 2021Mapping Data Flows Perf Tuning April 2021
Mapping Data Flows Perf Tuning April 2021Mark Kromer
 
Data Lake ETL in the Cloud with ADF
Data Lake ETL in the Cloud with ADFData Lake ETL in the Cloud with ADF
Data Lake ETL in the Cloud with ADFMark Kromer
 
Azure Data Factory Data Wrangling with Power Query
Azure Data Factory Data Wrangling with Power QueryAzure Data Factory Data Wrangling with Power Query
Azure Data Factory Data Wrangling with Power QueryMark Kromer
 
Azure Data Factory Data Flow Performance Tuning 101
Azure Data Factory Data Flow Performance Tuning 101Azure Data Factory Data Flow Performance Tuning 101
Azure Data Factory Data Flow Performance Tuning 101Mark Kromer
 
Data Quality Patterns in the Cloud with ADF
Data Quality Patterns in the Cloud with ADFData Quality Patterns in the Cloud with ADF
Data Quality Patterns in the Cloud with ADFMark Kromer
 
Data quality patterns in the cloud with ADF
Data quality patterns in the cloud with ADFData quality patterns in the cloud with ADF
Data quality patterns in the cloud with ADFMark Kromer
 
ADF Mapping Data Flows Level 300
ADF Mapping Data Flows Level 300ADF Mapping Data Flows Level 300
ADF Mapping Data Flows Level 300Mark Kromer
 
ADF Mapping Data Flows Training V2
ADF Mapping Data Flows Training V2ADF Mapping Data Flows Training V2
ADF Mapping Data Flows Training V2Mark Kromer
 
ADF Mapping Data Flows Training Slides V1
ADF Mapping Data Flows Training Slides V1ADF Mapping Data Flows Training Slides V1
ADF Mapping Data Flows Training Slides V1Mark Kromer
 
ADF Mapping Data Flow Private Preview Migration
ADF Mapping Data Flow Private Preview MigrationADF Mapping Data Flow Private Preview Migration
ADF Mapping Data Flow Private Preview MigrationMark Kromer
 
SQL Saturday Redmond 2019 ETL Patterns in the Cloud
SQL Saturday Redmond 2019 ETL Patterns in the CloudSQL Saturday Redmond 2019 ETL Patterns in the Cloud
SQL Saturday Redmond 2019 ETL Patterns in the CloudMark Kromer
 
Azure Data Factory Data Flow Limited Preview for January 2019
Azure Data Factory Data Flow Limited Preview for January 2019Azure Data Factory Data Flow Limited Preview for January 2019
Azure Data Factory Data Flow Limited Preview for January 2019Mark Kromer
 
Microsoft Azure Data Factory Data Flow Scenarios
Microsoft Azure Data Factory Data Flow ScenariosMicrosoft Azure Data Factory Data Flow Scenarios
Microsoft Azure Data Factory Data Flow ScenariosMark Kromer
 
Azure Data Factory Data Flow Preview December 2019
Azure Data Factory Data Flow Preview December 2019Azure Data Factory Data Flow Preview December 2019
Azure Data Factory Data Flow Preview December 2019Mark Kromer
 
Azure Data Factory for Azure Data Week
Azure Data Factory for Azure Data WeekAzure Data Factory for Azure Data Week
Azure Data Factory for Azure Data WeekMark Kromer
 
Azure Data Factory for Redmond SQL PASS UG Sept 2018
Azure Data Factory for Redmond SQL PASS UG Sept 2018Azure Data Factory for Redmond SQL PASS UG Sept 2018
Azure Data Factory for Redmond SQL PASS UG Sept 2018Mark Kromer
 
Microsoft Build 2018 Analytic Solutions with Azure Data Factory and Azure SQL...
Microsoft Build 2018 Analytic Solutions with Azure Data Factory and Azure SQL...Microsoft Build 2018 Analytic Solutions with Azure Data Factory and Azure SQL...
Microsoft Build 2018 Analytic Solutions with Azure Data Factory and Azure SQL...Mark Kromer
 

Mehr von Mark Kromer (20)

Fabric Data Factory Pipeline Copy Perf Tips.pptx
Fabric Data Factory Pipeline Copy Perf Tips.pptxFabric Data Factory Pipeline Copy Perf Tips.pptx
Fabric Data Factory Pipeline Copy Perf Tips.pptx
 
Build data quality rules and data cleansing into your data pipelines
Build data quality rules and data cleansing into your data pipelinesBuild data quality rules and data cleansing into your data pipelines
Build data quality rules and data cleansing into your data pipelines
 
Data cleansing and data prep with synapse data flows
Data cleansing and data prep with synapse data flowsData cleansing and data prep with synapse data flows
Data cleansing and data prep with synapse data flows
 
Mapping Data Flows Perf Tuning April 2021
Mapping Data Flows Perf Tuning April 2021Mapping Data Flows Perf Tuning April 2021
Mapping Data Flows Perf Tuning April 2021
 
Data Lake ETL in the Cloud with ADF
Data Lake ETL in the Cloud with ADFData Lake ETL in the Cloud with ADF
Data Lake ETL in the Cloud with ADF
 
Azure Data Factory Data Wrangling with Power Query
Azure Data Factory Data Wrangling with Power QueryAzure Data Factory Data Wrangling with Power Query
Azure Data Factory Data Wrangling with Power Query
 
Azure Data Factory Data Flow Performance Tuning 101
Azure Data Factory Data Flow Performance Tuning 101Azure Data Factory Data Flow Performance Tuning 101
Azure Data Factory Data Flow Performance Tuning 101
 
Data Quality Patterns in the Cloud with ADF
Data Quality Patterns in the Cloud with ADFData Quality Patterns in the Cloud with ADF
Data Quality Patterns in the Cloud with ADF
 
Data quality patterns in the cloud with ADF
Data quality patterns in the cloud with ADFData quality patterns in the cloud with ADF
Data quality patterns in the cloud with ADF
 
ADF Mapping Data Flows Level 300
ADF Mapping Data Flows Level 300ADF Mapping Data Flows Level 300
ADF Mapping Data Flows Level 300
 
ADF Mapping Data Flows Training V2
ADF Mapping Data Flows Training V2ADF Mapping Data Flows Training V2
ADF Mapping Data Flows Training V2
 
ADF Mapping Data Flows Training Slides V1
ADF Mapping Data Flows Training Slides V1ADF Mapping Data Flows Training Slides V1
ADF Mapping Data Flows Training Slides V1
 
ADF Mapping Data Flow Private Preview Migration
ADF Mapping Data Flow Private Preview MigrationADF Mapping Data Flow Private Preview Migration
ADF Mapping Data Flow Private Preview Migration
 
SQL Saturday Redmond 2019 ETL Patterns in the Cloud
SQL Saturday Redmond 2019 ETL Patterns in the CloudSQL Saturday Redmond 2019 ETL Patterns in the Cloud
SQL Saturday Redmond 2019 ETL Patterns in the Cloud
 
Azure Data Factory Data Flow Limited Preview for January 2019
Azure Data Factory Data Flow Limited Preview for January 2019Azure Data Factory Data Flow Limited Preview for January 2019
Azure Data Factory Data Flow Limited Preview for January 2019
 
Microsoft Azure Data Factory Data Flow Scenarios
Microsoft Azure Data Factory Data Flow ScenariosMicrosoft Azure Data Factory Data Flow Scenarios
Microsoft Azure Data Factory Data Flow Scenarios
 
Azure Data Factory Data Flow Preview December 2019
Azure Data Factory Data Flow Preview December 2019Azure Data Factory Data Flow Preview December 2019
Azure Data Factory Data Flow Preview December 2019
 
Azure Data Factory for Azure Data Week
Azure Data Factory for Azure Data WeekAzure Data Factory for Azure Data Week
Azure Data Factory for Azure Data Week
 
Azure Data Factory for Redmond SQL PASS UG Sept 2018
Azure Data Factory for Redmond SQL PASS UG Sept 2018Azure Data Factory for Redmond SQL PASS UG Sept 2018
Azure Data Factory for Redmond SQL PASS UG Sept 2018
 
Microsoft Build 2018 Analytic Solutions with Azure Data Factory and Azure SQL...
Microsoft Build 2018 Analytic Solutions with Azure Data Factory and Azure SQL...Microsoft Build 2018 Analytic Solutions with Azure Data Factory and Azure SQL...
Microsoft Build 2018 Analytic Solutions with Azure Data Factory and Azure SQL...
 

Kürzlich hochgeladen

The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...Neo4j
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxKatpro Technologies
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessPixlogix Infotech
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 

Kürzlich hochgeladen (20)

The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptxFactors to Consider When Choosing Accounts Payable Services Providers.pptx
Factors to Consider When Choosing Accounts Payable Services Providers.pptx
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your Business
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 

Azure Data Factory Mapping Data Flows

  • 1. Azure Data Factory: Mapping Data Flows
  • 2. What are mapping data flows?  Code-free data transformation at scale  Serverless, scaled-out, ADF-managed Apache Spark™ engine  Resilient flows handle structured and unstructured data  Operationalized as an ADF pipeline activity
  • 3. Code-free data transformation at scale  Intuitive UX lets you focus on building transformation logic  Data cleansing  Data validation  Data aggregation  No requirement of knowing Spark, cluster management, Scala, Python, etc vs
  • 4. Mapping Data Flows Service Architecture Data Flow Designer UI Data flow script Data Flow execution plan monitor Design, debug, manage data transformation logic visual in browser UI The UI builds data transformation script that contains metadata and logical intent from user. Script payload is combined with ADF JSON pipeline definition. ADF spins-up a JIT on-demand Spark cluster and sends script to Spark job for execution on scaled-out cluster. ADF Azure IR configurations define Spark cluster size for executor job. After executing your job, ADF presents execution plan indicating partitioning, timings, data distribution, and data lineage details
  • 5. INGEST Modern Data Warehouse (MDW) PREPARE TRANSFORM, PREDICT & ENRICH SERVE STORE VISUALIZE On-premises data Cloud data SaaS data Data Pipeline Orchestration & Monitoring
  • 6. Common Data Flow Customer Scenarios
  • 7. Common Data Warehouse and Data Analytics Scenarios w/Mapping Data Flows • DW Scenarios • Slowly changing dimensions • Late arriving dimensions • Fact table loading • Data Analytics Scenarios • Verify data types and lengths • NULL handling • Domain value constraints • Lookups • PII masking • Data analytics / aggregations
  • 8. Data Engineer and Data Scientist Scenarios 1. Data deduping 2. Descriptive data statistics (data profiling) • Length, type, mean, median, average, stddev … 3. Frequency distribution 4. Missing values 5. Enumerations / Lookups 6. Value replacement 7. Metadata validation
  • 10. Replacing Values • iif (length(title) == 0,toString(null()),title)
  • 11. Splitting Data Based on Values
  • 14. Data De-Duplication and Distinct Rows https://docs.microsoft.com/en-us/azure/data-factory/how-to-data-flow-dedupe-nulls-snippets
  • 17. Fact loading into data warehouse
  • 19. Execute data wrangling at scale with embedded PQ experience in pipelines
  • 22. Building transformation logic  Transformations: A ‘step’ in the data flow  Engine intelligently groups them at runtime  19 currently available  Core logic of data flow  Add/Remove/Alter Columns  Join or lookup data from datasets  Change number or order of rows  Aggregate data  Hierarchal to relational
  • 23. Source transformation  Define the data read by your data flow  Import projection vs generic  Schema drift  Connector specific properties and optimizations  Min: 1, Max: ∞  Define in-line or use dataset
  • 24. Source: In-line vs dataset  Define all source properties within a data flow or use a separate entity to store them  Dataset:  Reusable in other ADF activities such as Copy  Not based in Spark -> some settings overridden  In-line  Useful when using flexible schemas, one-off source instances or parameterized sources  Do not need “dummy” dataset object  Based in Spark, properties native to data flow
  • 25. Duplicating data streams  Duplicate data stream from any stage of your data flow  Select ‘New branch’  Operate on same data with different transformation requirements  Self-joins  Writing to different sinks  Aggregating in one branch
  • 26. Joining two data streams together  Use Join transformation to append columns from incoming stream to any stream in your data flow  Join types: full outer, inner, left outer, right outer, cross  SQL Join equivalent  Match on computed columns or use non-equality conditions  Broadcast small data streams to cache data and improve performance
  • 27. Lookup transformation  Similar to left outer join, but with more functionality  All incoming rows are passed through regardless of match  Matching conditions same as a join  Multi or single row lookup  Match on all, first, last, or any row that meets join conditions  isMatch() function can be used in downstream transformations to verify output
  • 28. Exists transformation  Check for existence of a value in another stream  SQL Exists equivalent  See if any row matches in a subquery, just like SQL  Filter based on join matching conditions  Choose Exist or Not Exist for your filter conditions  Can specify a custom expressoin
  • 29. Union transformation  Combine rows from multiple streams  Add as many streams as needed  Combine data based upon column name or ordinal column position  Use cases:  Similar data from different connection that undergo same transformations  Writing multiple data streams into the same sink
  • 30. Conditional split  Split data into separate streams based upon conditions  Use data flow expression language to evaluate boolean  Use cases:  Sinking subset of data to different locations  Perform different calculations on data depending on a set of values
  • 31. Derived column  Transform data at row and column level using expression language  Generate new or modify existing columns  Build expressions using the expression builder  Handle structured or unstructured data  Use column patterns to match on rules and regular expressions  Can be used to transform multiple columns in bulk  Most heavily used transformation
  • 32. Select transformation  Metadata and column maintenance  SQL Select statement  Alias or renames data stream and columns  Prune unwanted or duplicate columns  Common after joins and lookups  Rule-based mapping for flexible schemas, bulk mapping  Map hierarchal columns to flat structure
  • 33. Surrogate key transformation  Generate incrementing key to use as a non-business key in your data  To seed the starting value of your surrogate key, use derived column and a lookup from an existing table  Examples are in documentation  Useful for generating keys for star schema dimension tables  More performant than using the Window transformation with RowNumber() function
  • 34. Aggregate transformation  Aggregate data into groups using aggregate function  Like SQL GROUP BY clause in a Select statement  Aggregate functions include sum(), max(), avg(), first(), collect()  Choose columns to group by  One row for each unique group by column value  Only columns used in transformation are in output data stream  Use self-join to append to existing data  Supports pattern matching
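  An aggregate sketch grouping on one column and computing per-group values (names are hypothetical):

    Sales aggregate(groupBy(region),
        totalSales = sum(amount),
        orderCount = count()) ~> AggregateByRegion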
  • 35. Pivot and unpivot transformations  Pivot row values into new columns and vice-versa  Both are aggregate transformations that require aggregate functions  If pivot key values are not specified, all columns become drifted  Use the map drifted quick action to add them to the schema quickly
  • 36. Window transformation  Aggregates data across “windows” of data partitions  Used to compare a row of data against others in its ‘group’  Group determined by group by columns, sorting conditions and range bounds  Used for ranking rows in a group and getting lead/lag  Sorting causes reshuffling of data  “Expensive” operation
  • 37. Filter transformation  Filter rows based upon an expression  Like SQL WHERE clause  Expressions return true or false
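  For instance, a filter whose expression must evaluate to a boolean (column names are hypothetical):

    Orders filter(toInteger(quantity) > 0 && !isNull(orderDate)) ~> FilterValidOrders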
  • 38. Alter row transformation  Mark rows as Insert, Update, Delete, or Upsert  Like SQL MERGE statement  Insert by default  Define policies to update your database  Works with SQL DB, Synapse, Cosmos DB, and Delta Lake  Specify allowed update methods in each sink
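  An alter row sketch marking rows for upsert or delete based on a flag column (names are hypothetical):

    ChangedRows alterRow(upsertIf(changeType == 'I' || changeType == 'U'),
        deleteIf(changeType == 'D')) ~> MarkRows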
  • 39. Flatten transformation  Unroll array values into individual rows  One row per value  Used to convert hierarchies to flat structures  Opposite of collect() aggregate function
  • 40. Parse transformation  Parse string columns that contain embedded JSON, XML, or delimited text into structured columns
  • 41. Sort transformation  Sort your data by column values  SQL Order By equivalent  Use sparingly: Reshuffles and coalesces data  Reduces effectiveness of data partitioning  Does not optimize speed like legacy ETL tools  Useful for data exploration and validation
  • 42. Rank transformation  Rank and dense rank options  Recommended instead of using Window transformation with Rank function  More performant than Window rank() and denseRank()
  • 43. Stringify transformation  Turn complex data types into strings
  • 44. Assert transformation  Build data quality and data validation rules  Set data expectations  If column/row values do not pass assertion, assign an ID and a description  Can include row/column values in descriptions
  • 45. Sink transformation  Define the properties for landing your data in your destination target data store  Define using dataset or in-line  Can map columns similar to select transformation  Import schema definition from destination  Set actions on destinations  Truncate table or clear folder, SQL pre/post actions, database update methods  Choose how the written data is partitioned  ‘Use current partitioning’ is almost always fastest  Note: Writing to a single file can be very slow with large amounts of data
  • 46. Mapping data flow expression language
  • 47. Visual expression builder  UI callouts: list of columns being modified  All available functions, fields, parameters, local variables, cached sinks  Expression editor with full auto-complete and syntax checking  Data preview pane showing live, interactive results of your expression  Name of the current column you are modifying
  • 48. Expression language  Expressions are built using the data flow expression language  Expressions can reference:  Built-in expression functions  Defined input schema columns  Data flow parameters  Literals  Certain transformations have unique functions  count(), sum() in Aggregate; denseRank() in Window; etc.  Expressions evaluate to Spark data types
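  For instance, a single expression can combine built-in functions, input columns, and a data flow parameter (here price, discount, and $minPrice are hypothetical):

    iif(isNull(discount), price, round(price * (1 - discount), 2)) >= $minPrice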
  • 49. Debug mode  Quickly verify logic during development on a small interactive cluster  4 cores, 60-minute time to live  Enables the following:  Get data preview snapshot at each transformation  Preview output of expression in expression builder  Run debug pipelines with no extra cluster spin-up  Import Spark projection of source schema  Rule of thumb: if developing data flows, turn on right away  Initial start-up takes 3-5 minutes
  • 50. Debug mode: data preview
  • 51. Debug mode: data profiling
  • 53. Parameterizing data flows  Both dataset properties and data-flow expressions can be parameterized  Passed in via data flow activity  Can use data flow or pipeline expression language  Expressions can reference $parameterName  Can be literal values or column references
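  As a sketch, a data flow parameter can be referenced anywhere expressions are allowed, e.g. in a filter ($windowStart is a hypothetical parameter passed in from the data flow activity):

    Orders filter(orderDate >= $windowStart) ~> FilterToWindow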
  • 54. Referencing data flow parameters
  • 56. Schema drift  In real-world data integration solutions, source/target data stores change shape  Source data fields can change names  Number of columns can change over time  Traditional ETL processes break when schemas drift  Mapping data flow has built-in handling for flexible schemas  Patterns, rule-based mappings, byName(s) function, etc  Source: Read additional columns on top of what is defined in the source schema  Sink: Write additional columns on top of what is defined in the sink schema
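  A common drift-handling sketch uses byName() to reference a column that is not part of the defined projection, with an explicit cast since its type is unknown at design time (column names are hypothetical):

    Incoming derive(sourceSystem = toString(byName('source_system'))) ~> CaptureDrifted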
  • 57. Column pattern matching  Match by name, type, stream, position
  • 60. Data flow activity  Run as activity in pipeline  Integrated with existing ADF control flow, scheduling, orchestration, monitoring, CI/CD  Choose which integration runtime (IR) to run on  # of cores, compute type, cluster time to live  Assign parameters
  • 61. Data flow integration runtime  Integrated with existing Azure IR  Choose compute type, # of cores, time to live  Time to live: time a cluster is alive after last execution concludes  Minimal start up time for sequential data flows  Parameterize compute type, # of cores if using Auto Resolve
  • 63. Data flow security considerations  All data stays inside the VMs of the Databricks cluster, which are spun up JIT for each job  Azure Databricks attaches storage to the VMs for logging and spill-over from in-memory data frames during job operation; these storage accounts are fully encrypted and within the Microsoft tenant  Each cluster is single-tenant and specific to your data and job; it is not shared with any other tenant  Data flow processes are completely ephemeral: once a job is completed, all associated resources are destroyed  Both cluster and storage account are deleted  Data transfers in data flows are protected using certificates  Active telemetry is logged and maintained for 45 days for troubleshooting by the Azure Data Factory team
  • 64. Data flow best practices and optimizations
  • 65. Best practices – Lifecycle 1. Test your transformation logic using debug mode and data preview  Limited source size or use sample files 2. Test end-to-end pipeline logic using pipeline debug  Verify data is read/written correctly  Used as smoke test before merging your changes 3. Publish and trigger your pipelines within a Dev Factory  Test performance and cluster size 4. Promote pipelines to higher environments such as UAT and PROD using CI/CD  Increase size and scope of data as you get to higher environments
  • 66. Best practices – Debug (Data Preview)  Data Preview  Data preview is inside the data flow designer transformation properties  Uses row limits and sampling techniques to preview a small sample of the data  Allows you to build and validate units of logic with samples of data in real time  You have control over the size of the data limits under Debug Settings  If you wish to test with larger datasets, set a larger compute size in the Azure IR when switching on “Debug Mode”  Data Preview is only a snapshot of data in memory from Spark data frames; this feature does not write any data, so the sink drivers are not utilized and not tested in this mode
  • 67. Best practices – Debug (Pipeline Debug)  Pipeline Debug  Click debug button to test your data flow inside of a pipeline  Default debug limits the execution runtime so you will want to limit data sizes  Sampling can be applied here as well by using the “Enable Sampling” option in each Source  Use the debug button option of “use activity IR” when you wish to use a job execution compute environment  This option is good for debugging with larger datasets. It will not have the same execution timeout limit as the default debug setting
  • 68. Optimizing data flows  Transformation order generally does not matter  Data flows have a Spark optimizer that reorders logic to perform as best as it can  Repartitioning and reshuffling data negates optimizer  Each transformation has ‘Optimize’ tab to control partitioning strategies  Generally do not need to alter  Altering cluster size and type has performance impact  Four components 1. Cluster startup time 2. Reading from sources 3. Transformation time 4. Writing to sinks
  • 69. Identifying bottlenecks 1. Cluster startup time 2. Sink processing time 3. Source read time 4. Transformation stage time 1. Sequential executions can lower the cluster startup time by setting a TTL in the Azure IR 2. Total time to process the stream from source to sink; there is also a post-processing time when you click on the sink that shows how much time Spark spent on partition and job clean-up. Writing to a single file and slow database connections will increase this time 3. Shows how long it took to read data from the source; optimize with different source partition strategies 4. Shows bottlenecks in your transformation logic; with larger general purpose and memory optimized IRs, most of these operations occur in memory in data frames and are usually the fastest operations in your data flow
  • 70. Best practices - Sources  When reading from file-based sources, data flow automatically partitions the data based on size  ~128 MB per partition, evenly distributed  Use current partitioning will be fastest for file-based and Synapse using PolyBase  Enable staging for Synapse  For Azure SQL DB, use Source partitioning on column with high cardinality  Improves performance, but can saturate your source database  Reading can be limited by the I/O of your source
  • 71. Optimizing transformations  Each transformation has its own optimize tab  Generally better not to alter -> reshuffling is a relatively slow process  Reshuffling can be useful if data is very skewed  One node has a disproportionate amount of data  For Joins, Exists and Lookups:  If you have many of them, memory optimized compute greatly increases performance  Can ‘Broadcast’ if the data on one side is small  Rule of thumb: less than 50k rows  Increasing the integration runtime size can speed up transformations  Transformations that require reshuffling, like Sort, negatively impact performance
  • 72. Best practices - Sinks  SQL:  Disable indexes on target with pre/post SQL scripts  Increase SQL capacity during pipeline execution  Enable staging when using Synapse  File-based sinks:  ‘Use current partitioning’ lets Spark create the output natively  Output to a single file is a very slow operation  Combines data into a single partition  Often unnecessary for downstream consumers of the data  Can set naming patterns or use data in column  Any reshuffling of data is slow  Cosmos DB  Set throughput and batch size to meet performance requirements
  • 73. Azure Integration Runtime  Data flows use JIT compute to avoid paying for expensive clusters that sit mostly idle  Generally more economical, but each cluster takes ~4 minutes to spin up  IR specifies what cluster type and core-count to use  Memory optimized is best for data flows with many joins and lookups; general purpose is a good default  When running sequential jobs, utilize Time to Live to reuse the cluster between executions  Keeps cluster alive for TTL minutes after execution for a new job to use  Maximum one job per cluster  Rule of thumb: start small and scale up
  • 75. Data flow script (DFS)  DFS defines the logical intent of your data transformations  Script is bundled and marshalled to the Spark cluster as a job for execution  DFS can be auto-generated and used for programmatic creation of data flows  Access script behind UI via “Script” button  Click “Copy as Single Line” to save a version of the script that is ready for JSON  https://docs.microsoft.com/en-us/azure/data-factory/data-flow-script
  • 76. Data flow script (DFS)  Annotated example (see slide) with numbered callouts: (1) source projection and source properties, (2) aggregate transformation, (3) unpivot transformation, (4) sort, (5) sink  Syntax: input_name transform_type(properties) ~> stream_name
  • 77. Data flow script (DFS)  Annotated example (see slide) with numbered callouts: (1) source projection and source properties, (2) select transformation mappings and properties, (3) distinct aggregate, (4)-(5) row count aggregates, (6) sink transformation  ~> name_of_transform  A new branch does not require any script element
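  Putting the syntax together, a minimal end-to-end script sketch (all names hypothetical) reads a source, derives a column, and writes to a sink:

    source(output(
            movieId as string,
            title as string),
        allowSchemaDrift: true,
        validateSchema: false) ~> Source1
    Source1 derive(upperTitle = upper(title)) ~> DeriveUpper
    DeriveUpper sink(allowSchemaDrift: true,
        validateSchema: false) ~> Sink1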
  • 79. ETL Tool Migration Overview  Migrating from an existing large enterprise ETL installation to ADF and data flows requires adherence to a formal methodology that incorporates classic SDLC, change management, project management, and a deep understanding of your current data estate and ETL requirements.  Successful migration projects require project plans, executive sponsorship, budget, and a dedicated team focused on rebuilding the ETL in ADF.  For existing on-prem ETL estates, it is very important to learn the basics of cloud computing, Azure, and ADF before taking this Data Flows training.
  • 82. Training • On-prem to Cloud, Azure general training, ADF general training, Data Flows training • A general understanding of the difference between legacy client/server on-prem ETL architectures and cloud-based Big Data processing is required • ADF and Data Flows execute on Spark, so learn the fundamentals of the difference between row-by-row processing on a local server and batch/distributed computing on Spark in the Cloud
  • 83. Execution • Start with the top 10 mission-critical ETL mappings and list out the primary logical goals and steps achieved in each • Use sample data and debug each scenario as new pipelines and data flows in ADF • UAT each of those 10 mappings in ADF using sample data • Lay out an end-to-end project plan for the remaining mapping migrations • Plan the remainder of the project into quarterly calendar milestones • Expect each phase to take around 3 months • The majority of large existing ETL infrastructure modernization migrations take 12-18 months to complete
  • 84. ETL System Integrator Partners  Bitwise Global  https://www.bitwiseglobal.com/webinars/automated-etl-conversion-to-adf-for-accelerated-data-warehouse-migration-on-azure/  Next Pathway  https://blog.nextpathway.com/next-pathway-adds-ground-breaking-capability-to-translate-informatica-and-datastage-etl-pipelines-to-the-cloud-in-latest-version-of-shift