| © Copyright 2015 Hitachi Consulting1
Building the Data Lake
with Microsoft Azure Data Factory and Data Lake Analytics
Khalid M. Salama, Ph.D.
Business Insights & Analytics
Hitachi Consulting UK
We Make it Happen. Better.
| © Copyright 2015 Hitachi Consulting2
Outline
 What is a Data Lake?
 Data Lake Architecture & Technologies
 Data Ingestion – Populating the Data Lake
 Azure Data Lake Store Overview
 Introducing Azure Data Factory
 Data Lake Ingestion Scenarios with Azure Data Factory
 Introduction to Azure Data Lake Analytics
 Processing Data Using U-SQL + .net Extensibility
 Running U-SQL Jobs on Azure
| © Copyright 2015 Hitachi Consulting3
Data Lake – The Concept
What is it? And why is it needed?
What is a Data Lake?
A repository that holds raw data file extracts of all the enterprise source systems.
Why a Data Lake?
Pentaho (Hitachi Data Systems) co-founder and CTO, James Dixon, coined the term ‘Data Lake’:
| © Copyright 2015 Hitachi Consulting4
“If you think of a datamart/DW as a store of bottled water – cleansed, packaged
and structured for easy consumption – the data lake is a large body of water
in a more natural state. The contents of the data lake stream in from many
sources to fill the lake, and various users of the lake can come to examine,
dive in, or take samples.”
Data Lake – The Concept
What is it? And why is it needed?
| © Copyright 2015 Hitachi Consulting5
Data Lake
Overall Architecture
[Diagram: the Data Store holds raw data files and processed datasets, with a Data Catalog alongside]
| © Copyright 2015 Hitachi Consulting6
Data Lake
Overall Architecture
[Diagram: the previous view plus Data Ingestion: Stream Data Collectors and Batch Data Collectors feeding a Transformation stage into the Data Store]
| © Copyright 2015 Hitachi Consulting7
Data Lake
Overall Architecture
[Diagram: the previous view plus Big Data Processing: jobs that transpose, reform, filter, extract and aggregate, alongside Machine Learning and Analytics & Discovery, reading raw data files and producing processed datasets]
| © Copyright 2015 Hitachi Consulting8
Data Lake
Overall Architecture
[Diagram: the previous view plus an Admin layer (Management, Monitoring, Security) and a Job Scheduler]
| © Copyright 2015 Hitachi Consulting9
Data Lake
Overall Architecture
[Diagram: the previous view plus Public Interfaces, including a SQL Query Engine (ODBC, JDBC)]
| © Copyright 2015 Hitachi Consulting10
Data Lake
Overall Architecture
Data Ingestion
 Batch Data Collectors: pull data from data sources using batch connectors, e.g. SQL, NoSQL, file systems, (s)FTP, etc.
 Stream Data Collectors: consume incoming data streams using real-time data connectors, e.g. queues (Kafka, RabbitMQ, MQTT, Event Hubs, etc.), WebSockets, HTTP, etc.
 Transformation: performs basic data transformation & enrichment, rather than processing, including format conversion (encoding), compression, merge, split, etc.
Data Store (Distributed File System)
 Raw Data Files: data stored in files in its raw form (either in its original format, or with a standardized format/compression applied in the transformation stage): XML, JSON, BSON, Text, Avro, Parquet, ORC, etc.
 Processed Datasets: the results of (big) data processing jobs; a set of tabular datasets (files) in a query-optimized format.
 Data Catalog: usually a relational database that holds a catalog of the datasets in the data lake.
Big Data Processing
 Data Processing Jobs: big data processing jobs (MapReduce, Hive, Pig, Spark, U-SQL) that process the data and apply advanced analytics, including ML and pattern discovery.
 Jobs Scheduler: a scheduler that orchestrates the execution of the data processing jobs.
Public Interfaces
 SQL Query Engine: ODBC/JDBC APIs to query the datasets in the data lake using SQL-like scripts.
 Public APIs: APIs for retrieving the datasets in the data lake and, in some cases, for ingesting data.
| © Copyright 2015 Hitachi Consulting11
Data Lake
Technologies
[Diagram: technology options for each layer (Data Ingestion, Data Store, Data Processing, Jobs Scheduler); for the Data Store (Distributed File System): HDFS, GFS, Azure Blob Storage, Azure Data Lake, S3, etc.]
| © Copyright 2015 Hitachi Consulting12
Data Lake & Enterprise Data Platform
Microsoft Patterns & Practices
1 – ETL Level Integration
2 – DW Level Integration
3 – BI Level Integration
 Corporate Data Model
 Reports/Dashboard (Mashup)
| © Copyright 2015 Hitachi Consulting13
Data Ingestion – Populating the
Data Lake
| © Copyright 2015 Hitachi Consulting14
Data Ingestion
 Refers to ingesting large amounts of data into the data lake
at low frequency (hourly, daily, monthly, etc.)
 Data sources
 RDBMS
 NoSQL data stores
 File systems
 Blob storages
 Data is extracted from the source using a time window filter
(from-to), or based on the files existing in the source location
 If data files are pushed to the data lake using ingestion APIs, they
are received in an internal queue before they are consumed
and stored in the data lake
 Runs on a schedule
Batch Data Ingestion
[Diagrams: pull-based flow: Extract > Transform > Store into the data lake; push-based flow: Upload API > Queue > Transform & Store]
| © Copyright 2015 Hitachi Consulting15
Data Ingestion
 Refers to ingesting small data messages into the data lake at high
frequency (near-real time)
 In some scenarios, data messages are pushed into a message queue;
the data ingestion tool consumes these messages from the queue and
stores them in the data lake.
 In other scenarios, the data generator calls an ingestion API to push the
data to the data lake, where each data message is received initially in an
internal queue before it is transformed and stored in the lake
 If the data messages are small, it is preferable to consume a “window” of
messages at a time and store them as one file in the data lake, to
avoid having many small files (one per message); a minimal sketch of this
follows the diagrams below
 One idea is to store each message in an intermediate NoSQL store, in real
time, and have a batch data ingestion job retrieve all the inserted data at once
and store it as one file in the data lake
Real-time Data Ingestion
[Diagrams: queue-based flow: Enqueue > consume a data window > Transform & Store; NoSQL-buffered flow: consume each data message into a temp NoSQL store in real time, then batch-extract, transform & store]
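A minimal C# sketch of the windowed-consumption idea described above; the message source and lake writer are hypothetical abstractions standing in for real queue and data lake client libraries, and the window size and path scheme are illustrative assumptions:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

interface IDataLakeWriter { Task WriteFileAsync(string path, string content); }

static class WindowedIngestion
{
    // Buffer small messages and flush each 5-minute window as a single file,
    // so the lake receives one file per window instead of one tiny file per message.
    public static async Task ConsumeWindowsAsync(IEnumerable<string> source, IDataLakeWriter lake)
    {
        var buffer = new List<string>();
        var windowStart = DateTime.UtcNow;
        foreach (var message in source)            // 'source' is an assumed queue abstraction
        {
            buffer.Add(message);
            if (DateTime.UtcNow - windowStart >= TimeSpan.FromMinutes(5))
            {
                var path = $"events/{windowStart:yyyy/MM/dd}/{windowStart:HHmmss}.json";
                await lake.WriteFileAsync(path, string.Join("\n", buffer));
                buffer.Clear();
                windowStart = DateTime.UtcNow;
            }
        }
    }
}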
| © Copyright 2015 Hitachi Consulting16
Data Ingestion
Transformation
Data ingestion tools are not meant to perform data processing; rather, some
data transformation tasks are applied while ingesting data into the data lake:
 File format change (decode/encode): optimizes the data file for read/write
 File compression before storing: optimizes storage and file shuffling
 Merging multiple files (or data messages) from the source into one file to be
stored (to avoid having many small files)
 Adding metadata columns to the dataset in-flight (enrichment)
 Mutate: removing or renaming columns, etc.
In some cases, if the source is a file system, binary (blind) file ingestion
can be performed (without parsing the file content) when no transformations or
encoding are needed: perhaps just compression
[Diagram: Parse (decode) > Merge > Mutate + Enrich > Encode > Compress > Data Lake]
| © Copyright 2015 Hitachi Consulting17
Data Ingestion
Data Partitioning
 The incoming data is partitioned, and the files are stored in different
(hierarchical) directories in the file system
 Partitioning is typically based on date/time (year/month/day/…) and/or any
data element used in “where” clauses (country, site, category, etc.)
 The goal is to achieve partition elimination while processing the data (when the
partition key is part of the where/filter criteria of the processing)
 This improves the performance of the query/processing jobs,
as fewer data files are considered (see the path helper sketch below)
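As a tiny illustration, a helper like the following (hypothetical, not from the deck) derives the date-partitioned folder layout that also appears in the Data Factory dataset examples later in this deck:

using System;

static class PartitionPaths
{
    // Builds a year/month/day folder path, matching the {Year}/{Month}/{Day}
    // pattern used in the Data Factory dataset definitions shown later.
    public static string For(string dataset, DateTime sliceStart) =>
        $"{dataset}/{sliceStart:yyyy}/{sliceStart:MM}/{sliceStart:dd}";
}

// PartitionPaths.For("contoso/customers", new DateTime(2017, 1, 6))
//   => "contoso/customers/2017/01/06"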
| © Copyright 2015 Hitachi Consulting18
Data Ingestion
File Formats & Compression
Raw Files
 Raw data files may be stored in their original format (Text/XML/JSON)
 The original format allows users to download the original files and work with them
 Compression is applied to improve query/processing performance
 For very large files, splittable compression is applied (e.g. bzip2, LZO); these codecs are usually slow to compress
 For normal files, high-speed (less efficient) compression can be applied (e.g. gzip, deflate)
 Splittability is important, as multiple processing tasks can then work on different parts of the same file simultaneously
Processed Data Files
 All the processed data files (datasets) should follow the same format
 Since the processed datasets are query-intensive (used to perform aggregations, etc.), column-oriented
formats are recommended for better read performance, e.g. Parquet, ORC
 For supporting schema evolution (e.g. expected changes in schema), formats such as Parquet and Avro are
recommended
 Since column-oriented formats are binary and compressed, either no compression, or Snappy (fast, less efficient)
compression, can be applied
| © Copyright 2015 Hitachi Consulting19
Data Ingestion
File Formats
Format: description; content; splittable; schema evolution; size rank; read-performance rank (1 = best):
 TEXTFILE: delimited text file; any platform. Content: plain text. Splittable: yes. Schema evolution: no. Size rank: 6. Read-performance rank: 6.
 SEQUENCEFILE: binary key-value pairs; optimized for MapReduce. Content: plain text. Splittable: yes. Schema evolution: no. Size rank: 5. Read-performance rank: 4.
 PARQUET: columnar storage format; optimized for querying “wide” tables. Content: binary (compressed). Splittable: no. Schema evolution: yes. Size rank: 2. Read-performance rank: 2.
 RCFile: columnar storage format. Content: binary (compressed). Splittable: no. Schema evolution: no. Size rank: 3. Read-performance rank: 3.
 ORC: columnar storage format; optimized for distinct selection and aggregations; vectorization: batch processing, where each batch is a column vector. Content: binary (highly compressed). Splittable: no. Schema evolution: no. Size rank: 1. Read-performance rank: 1.
 AVRO: serialization system with evolvable schema-driven binary data; cross-platform interoperability. Content: binary. Splittable: yes. Schema evolution: yes. Size rank: 4. Read-performance rank: 5.
Note that, if Text and Avro files are compressed, they perform pretty well in terms of query performance
| © Copyright 2015 Hitachi Consulting20
Data Ingestion
Compression
Compression  Extension  Codec  Splittable  Efficacy (compression ratio)  Speed (to compress)
DEFLATE .deflate org.apache.hadoop.io.compress.DefaultCodec No Medium Medium
gzip .gz org.apache.hadoop.io.compress.GzipCodec No Medium Medium
bzip2 .bz2 org.apache.hadoop.io.compress.BZip2Codec Yes High Low
LZO .lzo com.hadoop.compression.lzo.LzopCodec Yes* Low High
LZ4 .lz4 org.apache.hadoop.io.compress.Lz4Codec No Low High
Snappy .snappy org.apache.hadoop.io.compress.SnappyCodec No Low High
Reduces space needed to store files
Speeds up data transfer across the network or to or from disk
| © Copyright 2015 Hitachi Consulting21
Azure Data Lake Store
| © Copyright 2015 Hitachi Consulting22
[Diagram: the Azure big data ecosystem around the data lake, with flows for Extract/Load, Filter/Reform/Transpose, and Aggregate/Calculate]
Big Data Processing: Hive/Pig, U-SQL, Spark, Storm
Real-time Stream Processing: Stream Analytics, Event Hubs, Storm, Spark Streaming
Data Science & Machine Learning: Azure Machine Learning, Microsoft R Server, Spark ML, Azure Cognitive Services
NoSQL Data Stores: Table Storage, HBase, DocumentDB
Data Discovery & Search: Azure Data Catalog, Azure Search
Data Orchestration & Movement: Azure Data Factory
High Performance Computing: Azure Batch
| © Copyright 2015 Hitachi Consulting23
Azure Data Lake Store
Azure Data Lake Services
Data Lake Store
 Built for data files, rather than blobs
 WebHDFS REST APIs for Hadoop integration
 Optimized for distributed analytical workloads
 Unlimited storage, petabyte-scale files
 Security: AAD and ACLs, encryption
 Built-in catalog for “virtual” databases and .NET assemblies
Data Lake Analytics
 Big data jobs as a service
 Built on Apache YARN
 U-SQL
 Unit of cost: running job
HDInsight
 Hadoop cluster as a service
 MapReduce | Pig | Hive | Spark | HBase | Oozie | Storm
 Unit of cost: running nodes
| © Copyright 2015 Hitachi Consulting24
Azure Data Lake Store
Azure Portal
adl://<data_lake_store_name>.azuredatalakestore.net
| © Copyright 2015 Hitachi Consulting25
Azure Data Lake Store
Ingest Data
Data Lake Store
Azure Stream Analytics
 Real-time
 Event-based
 From Events Hub/IoT Hub
Azure Data Factory
 Batch
 Scheduled
 From SQL/NoSQL/file systems
Azure HDInsight
 Sqoop: from SQL
 DistCp: from Blob/HDFS
 Oozie is used for scheduling
Interactively
 Azure Portal
 Azure PowerShell
 Azure Cross-platform CLI
 Data Lake Tools for Visual Studio
 AdlCopy Tool: From Blob
Programmatically
 .Net SDK
 Java SDK
 Node.js SDK
 REST APIs
| © Copyright 2015 Hitachi Consulting26
Azure Data Lake Store
Secure Data
Data Lake Store
Authentication
 Azure Active Directory (AAD)
 Users and service identities defined in your AAD can access your Data Lake Store account
 Key advantages of using Azure Active Directory as a centralized access control mechanism:
 Simplified identity lifecycle management
 Authentication from any client through a standard open protocol, such as OAuth or OpenID
 Federation with enterprise directory services and cloud identity providers
Network Isolation
 Firewall with IP address
range for your trusted clients
Authorization
 Role-based access control (Owner/Reader/Contributor/Administrator)
 Access Control list (Read/Write/Execute)
 Files and folders both have Access ACLs
Data Protection
 In transit: the Transport Layer Security (TLS) protocol secures data over the network.
 At rest: transparent encryption is provided for data stored in the account.
 Data Lake Store automatically encrypts data prior to persisting, and decrypts data prior to retrieval.
 Completely transparent to the client accessing the data
| © Copyright 2015 Hitachi Consulting27
Azure Data Lake Store
.Net SDK
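The original slide shows the SDK usage as a code screenshot. As a stand-in, here is a minimal C# sketch of creating a folder and uploading a file with the Data Lake Store .NET SDK (Microsoft.Azure.Management.DataLake.Store); the tenant, client, account and paths are placeholders, and the exact method surface may differ slightly by SDK version:

using System;
using System.IO;
using Microsoft.Azure.Management.DataLake.Store;
using Microsoft.Rest.Azure.Authentication;

class AdlsUploadSample
{
    static void Main()
    {
        // Authenticate against Azure Active Directory (service principal assumed).
        var creds = ApplicationTokenProvider
            .LoginSilentAsync("<tenant>", "<clientId>", "<clientSecret>").Result;

        var client = new DataLakeStoreFileSystemManagementClient(creds);
        const string account = "<data_lake_store_name>";

        // Create the destination folder, then stream a local file into the store.
        client.FileSystem.Mkdirs(account, "/contoso/customers");
        using (var stream = File.OpenRead(@"C:\data\customers.txt"))
        {
            client.FileSystem.Create(account, "/contoso/customers/customers.txt",
                stream, overwrite: true);
        }
        Console.WriteLine("Upload complete.");
    }
}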
| © Copyright 2015 Hitachi Consulting28
Azure Data Lake Store
Data Lake vs Blob Storage
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-comparison-with-blob-storage
Azure Data Lake Store (ADLS) vs. Azure Blob Storage:
 Purpose: ADLS: optimized performance for parallel analytics workloads; high throughput and IOPS. Blob: general-purpose object store for a wide variety of storage scenarios.
 Use cases: ADLS: batch, interactive and streaming analytics, and machine learning data such as log files, IoT data, click streams and large datasets. Blob: any type of text or binary data, such as application back ends, backup data, media storage for streaming, and general-purpose data.
 Key concepts: ADLS: a Data Lake Store account contains folders, which in turn contain data stored as files. Blob: a storage account has containers, which in turn hold data in the form of blobs.
 Structure: ADLS: hierarchical file system. Blob: object store with a flat namespace.
 HDFS client: yes for both.
 Server-side API: ADLS: WebHDFS-compatible REST API. Blob: Azure Blob Storage REST API.
 Security (authentication): ADLS: based on Azure Active Directory identities. Blob: based on shared secrets: account access keys and shared access signature keys.
 Security (authorization): ADLS: POSIX access control lists (ACLs); ACLs based on Azure Active Directory identities can be set at file and folder level. Blob: account access keys for account-level authorization; shared access signature keys for account, container or blob authorization.
 Encryption of data at rest: transparent, server-side for both.
 Developer SDKs: ADLS: .NET, Java, Python, Node.js. Blob: .NET, Java, Python, Node.js, C++, Ruby.
 Size limits: ADLS: no limits on account sizes, file sizes or number of files. Blob: specific limits documented here.
 Redundancy: ADLS: locally redundant (multiple copies of data in one Azure region). Blob: locally redundant (LRS), globally redundant (GRS), read-access globally redundant (RA-GRS). See here for more information.
| © Copyright 2015 Hitachi Consulting29
Azure Data Factory
| © Copyright 2015 Hitachi Consulting30
Azure Data Factory
Cloud data processing & movement services
A managed Azure cloud service for building & operating data pipelines: scalable, reliable, pay-as-you-use.
 Batch data ingestion & movement: format conversion, compression, file merging, column mapping; no real-time data collection (use Stream Analytics instead)
 Data processing: not a data processing engine itself, but drives different compute platforms
 Orchestration and scheduling
| © Copyright 2015 Hitachi Consulting31
Azure Data Factory
Concepts & Components
Data Factory
Linked Service
Connection information to a data store or a compute resource
Dataset
A pointer to the data object to be used as an input or an output of an Activity
Pipeline
A logical grouping of Activities with a certain sequence & schedule
| © Copyright 2015 Hitachi Consulting32
Azure Data Factory
Concepts & Components
Data Factory > Pipeline
[Definitions as on the previous slide, plus:]
Activity
An action to perform on data; input 0-N dataset(s), output 1-N dataset(s)
| © Copyright 2015 Hitachi Consulting33
Azure Data Factory
Concepts & Components
Data Factory > Pipeline
[Diagram: an Activity reads input Dataset(s) and writes output Dataset(s); each Dataset is bound to a Linked Service]
| © Copyright 2015 Hitachi Consulting34
Azure Data Factory
Concepts & Components
Data Factory
[Diagram: Datasets chain Pipelines together; the output Dataset of one Pipeline serves as the input Dataset of the next]
| © Copyright 2015 Hitachi Consulting35
Azure Data Factory
Concepts & Components
| © Copyright 2015 Hitachi Consulting36
Azure Data Factory
Features
Linked Service - Compute
 Azure HDInsight (on-demand)
 Azure Batch
 Azure Machine Learning
 Azure Data Lake Analytics
 Azure SQL DW (stored proc)
 Azure SQL DB (stored proc)
 SQL Server (stored proc)
Activities
 Data Movement (copy data from source to sink)
 Data Transformation:
 (on-demand) HDInsight (Hive | Pig | MapReduce | Spark)
 Azure ML Batch
 Stored Procedure
 Data Lake U-SQL jobs
 Custom .NET (Azure Batch)
Linked Service – Data
 Azure Blob
 Azure Table
 Azure SQL Database
 Azure SQL Data Warehouse
 Azure DocumentDB
 Azure Data Lake Store
 SQL Server
 ODBC data sources
 Hadoop Distributed File System (HDFS)
Data Gateway: connects to an on-premises data source
| © Copyright 2015 Hitachi Consulting37
Azure Data Factory - Taxonomy
Dataset
| © Copyright 2015 Hitachi Consulting38
Azure Data Factory - Taxonomy
Dataset properties:
 name
 type (AzureBlob, AzureTable, AzureDLS, AzureSQLDB, AzureSQLDW, etc.)
 typeProperties
 linkedService
 structure
 availability: frequency, interval, offset
 policy
 external
| © Copyright 2015 Hitachi Consulting39
Azure Data Factory - Taxonomy
Dataset typeProperties, by dataset type:
 SQL: table name
 File/Blob: folder path, file name, file format, compression, partition by
 DocumentDB: collection name
| © Copyright 2015 Hitachi Consulting40
Azure Data Factory - Taxonomy
Dataset validation policy, by dataset type:
 SQL: minimumRows
 File/Blob: minimumSizeMB
| © Copyright 2015 Hitachi Consulting41
Azure Data Factory - Taxonomy
Pipeline
| © Copyright 2015 Hitachi Consulting42
Azure Data Factory - Taxonomy
Pipeline
Pipeline properties:
 name
 activities [ ]
 start
 end
 isPaused
 pipelineMode: one-time or scheduled
 expiration time
| © Copyright 2015 Hitachi Consulting43
Azure Data Factory - Taxonomy
Pipeline - Activity
Activity properties (within a pipeline):
 name
 type: Copy | HDInsight… | SqlServerStoredProcedure | DataLakeAnalyticsU-SQL | MyDotNetActivity | AzureMLBatchExecution
 linked service
 inputs [ <datasets> ]
 outputs [ <datasets> ]
 policy: timeout, retry, execution priority order
 typeProperties
| © Copyright 2015 Hitachi Consulting44
Azure Data Factory - Taxonomy
Data Movement Activity
Type: Copy Activity. Type properties:
 source: type & type-specific properties
 sink: type & type-specific properties
 translator (column mapping)
| © Copyright 2015 Hitachi Consulting45
Azure Data Factory - Taxonomy
Data Movement Activity – SQL & Blob
Source (type: SqlSource), type properties:
 sqlReaderQuery, e.g. "$$Text.Format('select * from [Table] where [DataDate] >= {0:yyyyMMdd} AND [DataDate] < {1:yyyyMMdd}', WindowStart, WindowEnd)"
 or: sqlReaderStoredProcedureName + sqlReaderStoredProcedureParameters
Sink (type: SqlSink), type properties:
 sqlWriterCleanupScript (optional)
 sqlWriterStoredProcedureName (takes the input data as a table-valued parameter)
 sqlWriterStoredProcedureParameters: not supported in AzureSQLDW
 sqlWriterTableType (input data table type, pre-defined in SQL): not supported in AzureSQLDW
 sliceIdentifierColumnName (binary): not supported in AzureSQLDW
 writeBatchSize
Source (type: BlobSource), type properties:
 recursive: true | false
Sink (type: BlobSink), type properties:
 copyBehavior (PreserveHierarchy | FlattenHierarchy | MergeFiles)
| © Copyright 2015 Hitachi Consulting46
Azure Data Factory - Taxonomy
Copy Performance Tuning Elements
 concurrency (activity-level): runs multiple data slices concurrently; useful for data migration
 parallelCopies: multiple threads, depending on the source/sink
 cloudMovementUnits: (1, 2, 4, 8); CPU/memory allocation
 enableStaging: useful when moving data from on-prem to the cloud
| © Copyright 2015 Hitachi Consulting47
Data Lake Ingestion Scenarios
with Azure Data Factory
| © Copyright 2015 Hitachi Consulting48
Azure Data Lake Ingestion Scenarios
[Diagram: ingestion paths into Azure Data Lake]
 From Blob Storage: preserve the source folder/file hierarchy, flatten the hierarchy (into the file name), or merge all source files into one file at the destination; or pick up the existing files in a specific source location matching some name format.
 From SqlDB: extract records where <ModifiedDate> is between WindowStart and WindowEnd.
 From Events Hub via Stream Analytics: consume a set of events using a tumbling window.
 From Events Hub via a custom event consumer (Azure Function): consume messages and insert each event as a new record into Table Storage (a minimal sketch follows below).
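A minimal sketch of such a custom event consumer, assuming the classic C# Azure Functions programming model with an Event Hub trigger and a Table Storage output binding; the hub, connection, table and entity names are hypothetical:

using Microsoft.Azure.WebJobs;
using Microsoft.WindowsAzure.Storage.Table;

public class EventRow : TableEntity
{
    public string Body { get; set; }
}

public static class EventConsumer
{
    // Each Event Hub message is written as one new row in a staging table;
    // a batch job can later sweep the table into a single data lake file.
    [FunctionName("EventConsumer")]
    public static void Run(
        [EventHubTrigger("telemetry", Connection = "EventHubConnection")] string message,
        [Table("stagedevents", Connection = "StorageConnection")] ICollector<EventRow> table)
    {
        table.Add(new EventRow
        {
            PartitionKey = System.DateTime.UtcNow.ToString("yyyyMMdd"),
            RowKey = System.Guid.NewGuid().ToString(),
            Body = message
        });
    }
}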
| © Copyright 2015 Hitachi Consulting49
Azure Data Lake Ingestion Scenarios
Data Factory - ADLS Dataset Definition
{
"name": "output-azdlsfolder-contoso_customer",
"properties": {
"published": false,
"type": "AzureDataLakeStore",
"linkedServiceName": "azdls_kspocs",
"typeProperties": {
"folderPath": "contoso/customers/{Year}/{Month}/{Day}",
"format": {
"type": "TextFormat", "rowDelimiter": "n",
"columnDelimiter": "|", "nullValue": "NA",
"quoteChar": """, "firstRowAsHeader": true
},
"partitionedBy": [
{"name": "Year", "value": {"type": "DateTime", "date": "SliceStart","format": "yyyy"}},
{"name": "Month", value": {"type": "DateTime", "date": "SliceStart","format": "MM"}},
{"name": "Day", "value": {"type": "DateTime", "date": "SliceStart","format": "dd"}}
],
"compression": {"type": "GZip","level": "Optimal"}
},
"availability": {
"frequency": "Day",
"interval": 1
},
"external": false
}
}
Define Destination File
Format, Compression, and
Partitioning - in the DL Store
| © Copyright 2015 Hitachi Consulting50
Azure Data Lake Ingestion Scenarios
Data Factory – Copy Activity (SqlSource)
"activities": [
{
"type": "Copy",
"typeProperties": {
"source": {
"type": "SqlSource",
"sqlReaderQuery": "$$Text.Format('select * from cso.FactOnlineSales where CAST(CONVERT(VARCHAR(10), DateKey, 112)
AS INT) >= {0:yyyyMMdd} AND CAST(CONVERT(VARCHAR(10), DateKey, 112) AS INT) < {1:yyyyMMdd}', WindowStart, WindowEnd)"
},
"sink": {
"type": "AzureDataLakeStoreSink", "writeBatchSize": 0,"writeBatchTimeout": "00:00:00"
}
},
"inputs": [{"name": "input-azsqldbtable-contoso_sales"}],
"outputs": [{"name": "output-azdlsfolder-contoso_sales"}],
"policy": {
"timeout": "01:00:00",
"concurrency": 1,
"retry": 1
},
"scheduler": {
"frequency": "Day",
"interval": 1
},
"name": "act-copy-sales_SQLDB-DLS"
}
]
Parameterizing the fetch query
to get only the current data slice
(based on the WindowStart &
WindowEnd)
| © Copyright 2015 Hitachi Consulting51
Azure Data Lake Ingestion Scenarios
Data Factory – Copy Activity (BlobSource)
{
"name": "contoso_product-pipeline",
"properties": {
"activities": [
{
"type": "Copy",
"typeProperties": {
"source": {
"type": "BlobSource",
"recursive": true
},
"sink": {
"type": "AzureDataLakeStoreSink",
"copyBehavior": "MergeFiles",
}
},
"inputs": [{"name": "input-azstorage-contoso_product"}],
"outputs": [{"name": "output-azdlsfolder-contoso_product_v3"}],
"policy": {"timeout": "1.00:00:00","concurrency": 1,"retry": 1},
"scheduler": {"frequency": "Day","interval": 1},
"name": "act-copy-customer_AZS->DLS"
}
],
"start": "2017-01-06T08:00:00.0106386Z",
"end": "2017-01-07T08:00:00.0106386Z",
"isPaused": false,
"pipelineMode": "Scheduled"
}
}
Merges all the fetched files' data into one file to be stored in the
destination. Other options: PreserveHierarchy & FlattenHierarchy.
Fetches all the files within the input dataset's specified path (folder)
and its subfolders.
| © Copyright 2015 Hitachi Consulting52
Azure Data Lake Ingestion Scenarios
Stream Analytics – From Events Hub
| © Copyright 2015 Hitachi Consulting53
Azure Data Lake Analytics
| © Copyright 2015 Hitachi Consulting54
Azure Data Lake Analytics
Big Data Processing Jobs
Big data processing jobs as a service
 U-SQL: simple and familiar, powerful, and extensible
 Affordable and cost-effective: pay as you use
 Dynamic scaling
 Integrates seamlessly with your IT investments
 Deploy on Azure & schedule using Data Factory
 Develop faster, debug, and optimize smarter using familiar tools
| © Copyright 2015 Hitachi Consulting55
Azure Data Lake Analytics
U-SQL
Natively Parallel
Extensible (User-defined Functions, Operators and
Aggregate Functions)
C# meets SQL
(.NET data type, C# expressions + SQL set statements)
Set-based and procedural processing
Works with structured and unstructured data
Reuse of custom-built .NET assemblies
| © Copyright 2015 Hitachi Consulting56
Azure Data Lake Analytics
U-SQL – Processing Model
 Create a Managed Data Model (database, schemas, tables, functions)
 Only a schema on top of existing files
 Allows querying data files using SQL through tables/views
 Reuse of procs and functions
 Retrieve data:
 From data lake files using EXTRACT – apply schema on read
 From Managed table using SELECT
 From external data sources (Azure SQL DB/DW, SQLServer)
 Data is retrieved as a @rowset
 A rowset is processed (aggregated, joined with other rowsets, etc.) using SQL and C# to
produce another rowset
 A rowset is stored:
 As a data lake file, using OUTPUT
 In a managed table, using INSERT (apply schema on write) or CREATE TABLE AS SELECT
| © Copyright 2015 Hitachi Consulting57
Azure Data Lake Analytics
U-SQL – Processing Model
[Diagram: U-SQL processing model]
 Create a managed data model: database, schemas, tables, indexes, views, functions, procs, data sources, external tables (Azure SQLDB, Azure SQLDW, SQL Server)
 EXTRACT from a data lake file/directory, or SELECT from a managed table, into a @rowset
 Transform, join, explode, aggregate and process @rowsets to produce other @rowsets
 OUTPUT a @rowset to a data lake file/directory, or INSERT/CTAS* it into a managed table
*CTAS: CREATE TABLE AS SELECT
| © Copyright 2015 Hitachi Consulting58
Azure Data Lake Analytics
U-SQL – Script Structure
 Use statement: USE DATABASE <Database>
 Declare variables: DECLARE @input string = "<file location>"
 Register assembly: CREATE ASSEMBLY <myDotNetLib> FROM "<dll location>"
 Reference assembly: REFERENCE ASSEMBLY <myDotNetLib>
 DDL statement: CREATE TABLE | VIEW | …
(create table: index, partition, distribution)
 Query statement: @rowset = SELECT … FROM <@set>|<table>
 Extract statement: @rowset = EXTRACT <columns> FROM <file location> USING <Extractor>
 DML statement: INSERT INTO <Table> SELECT … FROM @rowset
 CTAS: CREATE TABLE <Table> AS SELECT … FROM @rowset
 Output statement: OUTPUT @rowset TO <file location> USING <Outputter>
| © Copyright 2015 Hitachi Consulting59
Azure Data Lake Analytics
Dev and Test U-SQL locally (using Visual Studio Data Lake Analytics Tools)
| © Copyright 2015 Hitachi Consulting60
Azure Data Lake Analytics
U-SQL Example – data aggregation
USE DATABASE master;
CREATE DATABASE IF NOT EXISTS contoso;
USE DATABASE contoso;
CREATE SCHEMA IF NOT EXISTS mrt;
DROP TABLE IF EXISTS mrt.sales;
CREATE TABLE mrt.sales
(
Year int,
Month int,
Country string,
ProductSubCategory string,
TotalSales double,
INDEX index_sales CLUSTERED(ProductSubCategory ASC)
PARTITIONED BY (Year,Month)
DISTRIBUTED BY HASH (Country)
);
DECLARE @datafolder string = "sample-data/data-files/{*}.txt";
@data =
EXTRACT
identifier string,
value double
FROM @datafolder
USING Extractors.Csv();
@result =
SELECT
identifier,
DateTime.Now AS timestamp,
SUM(value) AS total
FROM @data
GROUP BY
identifier,
DateTime.Now;
USE DATABASE contoso;
DROP TABLE IF EXISTS dbo.test;
CREATE TABLE dbo.test
(
INDEX [idx_test] CLUSTERED (identifier ASC)
PARTITIONED BY HASH (identifier)
)
AS
SELECT
*
FROM @result;
Create Managed Model
Process Data
| © Copyright 2015 Hitachi Consulting61
Azure Data Lake Analytics
U-SQL Example – data aggregation
| © Copyright 2015 Hitachi Consulting62
Azure Data Lake Analytics
U-SQL – Example Word Count
DECLARE @input_file string = "sample-data/text-input.txt";
DECLARE @out_file string = "sample-data/text-output.txt";
@raw =
EXTRACT line string
FROM @input_file
USING Extractors.Csv();
@arrays =
SELECT new SQL.ARRAY<string>(line.Split(' ').Where(word=>word.Length>1)) AS
words
FROM @raw;
@map =
SELECT 1 AS count,
tmp.word.ToString().ToUpper() AS word
FROM @arrays AS arrays
CROSS APPLY
EXPLODE(arrays.words) AS tmp(word);
@reduce =
SELECT map.word,
SUM(map.count) AS [Count]
FROM @map AS map
GROUP BY map.word;
OUTPUT @reduce
TO @out_file
ORDER BY [Count] DESC
USING Outputters.Tsv();
| © Copyright 2015 Hitachi Consulting63
U-SQL .net Extensibility
| © Copyright 2015 Hitachi Consulting64
Azure Data Lake Analytics
U-SQL – .net extensibility
 Code-behind (script.usql.cs): write static methods and use them in the U-SQL script
 Assemblies:
 Create DLLs with the processing logic
 Upload the DLLs to the Data Lake Store
 Register the assemblies in a data lake managed database
 Reference the assembly in the U-SQL script and use its code
Implement custom:
 Extractors
 Outputters
 Aggregators
 Appliers
 Combiners
 Processors
| © Copyright 2015 Hitachi Consulting65
Azure Data Lake Analytics
U-SQL – .net extensibility | terms frequency example
Managed Model
USE DATABASE master;
CREATE DATABASE IF NOT EXISTS demo;
USE DATABASE demo;
CREATE SCHEMA IF NOT EXISTS txt;
DROP TABLE IF EXISTS txt.termstats;
CREATE TABLE txt.termstats
(
Term string,
Frequency long?,
AverageFrequency double,
VarianceFromAverage double,
Timestamp DateTime,
Year int,
Month int,
Day int,
INDEX index_termstats CLUSTERED(Term ASC)
PARTITIONED BY (Year,Month,Day)
DISTRIBUTED BY HASH (Term)
);
| © Copyright 2015 Hitachi Consulting66
Azure Data Lake Analytics
U-SQL – .net extensibility | terms frequency example
Implement Full-line Extractor
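The original slide shows the extractor as a code screenshot. A minimal C# sketch of what such a full-line extractor looks like against the U-SQL extensibility interfaces (Microsoft.Analytics.Interfaces) is below; the namespace and class name follow the script on the later slide, while the body is an assumption:

using System.Collections.Generic;
using System.IO;
using Microsoft.Analytics.Interfaces;

namespace DLAnalytics.Services
{
    // Emits each line of the input file as a single-column row named "line".
    [SqlUserDefinedExtractor(AtomicFileProcessing = false)]
    public class FullLineExtractor : IExtractor
    {
        public override IEnumerable<IRow> Extract(IUnstructuredReader input, IUpdatableRow output)
        {
            using (var reader = new StreamReader(input.BaseStream))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    output.Set<string>("line", line);
                    yield return output.AsReadOnly();
                }
            }
        }
    }
}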
| © Copyright 2015 Hitachi Consulting67
Azure Data Lake Analytics
U-SQL – .net extensibility | terms frequency example
Registering the assembly: register-assemblies.usql
Referencing and using the assembly: process-termstats.usql
| © Copyright 2015 Hitachi Consulting68
Azure Data Lake Analytics
U-SQL – .net extensibility | terms frequency example
Code-behind
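The code-behind itself is a screenshot in the original deck; judging from the script on the next slide, it exposes a static Tokenize(int, string) helper. A plausible C# sketch follows (only the signature is taken from the script; the n-gram logic is an assumption):

using System.Collections.Generic;
using System.Linq;

namespace DLAnalytics.Demo
{
    public static class Tokenizer
    {
        // Splits a line into word n-grams of the given size, e.g. n = 2 => bigrams.
        public static string[] Tokenize(int n, string line)
        {
            var words = line.Split(' ').Where(w => w.Length > 1).ToArray();
            var terms = new List<string>();
            for (int i = 0; i + n <= words.Length; i++)
            {
                terms.Add(string.Join(" ", words.Skip(i).Take(n)));
            }
            return terms.ToArray();
        }
    }
}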
| © Copyright 2015 Hitachi Consulting69
Azure Data Lake Analytics
U-SQL – .net extensibility | term frequency example
DECLARE @input_file string = "demo-data/doc1.txt";
USE DATABASE demo;
REFERENCE ASSEMBLY DLAServices;
USING MyServices = DLAnalytics.Services;
@raw =
EXTRACT line string
FROM @input_file
USING new MyServices.FullLineExtractor();
@arrays =
SELECT new
SQL.ARRAY<string>(DLAnalytics.Demo.Tokenizer.Tokenize(2,line))
AS terms FROM @raw;
@map =
SELECT 1 AS frequency, tmp.term.ToUpper() AS term
FROM @arrays AS arrays
CROSS APPLY
EXPLODE(arrays.terms) AS tmp(term);
@reduce =
SELECT map.term,
SUM(map.frequency) AS frequency
FROM @map AS map
GROUP BY map.term
HAVING SUM(map.frequency) >= 3;
@aggregates =
SELECT Convert.ToDouble(AVG(frequency)) AS averageFrequency
FROM @reduce;
@output =
SELECT term,
frequency,
averageFrequency,
Convert.ToDouble(frequency - averageFrequency) AS
varianceFromAverage,
DateTime.Now AS timestamp
FROM @reduce CROSS JOIN @aggregates;
DECLARE @Year int = DateTime.Now.Year;
DECLARE @Month int = DateTime.Now.Month;
DECLARE @Day int = DateTime.Now.Day;
ALTER TABLE txt.termstats
ADD IF NOT EXISTS PARTITION(@Year, @Month, @Day);
INSERT INTO txt.termstats
(Term, Frequency, AverageFrequency, VarianceFromAverage, Timestamp)
PARTITION (@Year, @Month, @Day)
SELECT *
FROM @output;
process-termstats.usql
| © Copyright 2015 Hitachi Consulting70
Running U-SQL Jobs
on Azure
| © Copyright 2015 Hitachi Consulting71
Azure U-SQL Jobs
Simplified Workflow
[Diagram: job submission flows through the Job Front End, Job Scheduler, Compiler Service and Job Queue; job execution runs through the Job Manager and the U-SQL Runtime (vertex execution) on YARN, with the U-SQL Catalog providing metadata and the Resource Manager allocating resources]
• Max Concurrent Jobs = 50
• Max AUs per job = 120
• Max Queue Length = 200
 Parallelism = N Analytics Units (AUs)
 1 AU ~= 2 cores and 6 GB of memory
https://blogs.msdn.microsoft.com/azuredatalake/2016/10/12/azure-data-lake-analytics-more-compute-and-more-control
| © Copyright 2015 Hitachi Consulting72
Azure U-SQL Jobs
Monitoring Jobs running on Azure
| © Copyright 2015 Hitachi Consulting73
Azure U-SQL Jobs
Job Status
Top-level status: Preparing > Queued > Running > Finalizing > Ended (Succeeded, Failed, Cancelled)
Detailed states:
 New, Compiling: the script is being compiled by the Compiler Service
 Queued: all jobs enter the queue; when there are enough ADLAUs to start the job, they are allocated to it
 Scheduling, Starting, Running: the U-SQL runtime executes the code on one or more ADLAUs and finalizes the outputs
 Ended: the job has concluded
| © Copyright 2015 Hitachi Consulting74
Azure U-SQL Jobs
Monitoring Jobs running on Azure
Job files: system/jobservice/jobs/yyyy/MM/dd/jobid/..
| © Copyright 2015 Hitachi Consulting75
Azure U-SQL Jobs
Monitoring Jobs running on Azure
| © Copyright 2015 Hitachi Consulting76
Azure U-SQL Jobs
Running U-SQL Jobs Using Azure Data Factory
{
"name": "pipeline-dla-test",
"properties": {
"activities":
[
{
"type": "DataLakeAnalyticsU-SQL",
"typeProperties": {
"scriptPath": “pipelinestestprocess-data.usql",
"scriptLinkedService": "StorageLinkedService",
"degreeOfParallelism": 3,
"priority": 100,
"parameters": {
"in": "/datalake/input/data-input.tsv",
"out": "/datalake/output/data-output.tsv"
}
},
Parameters can be
passed to the U-SQL
Script to be executed
| © Copyright 2015 Hitachi Consulting77
My Background
Applying Computational Intelligence in Data Mining
• Honorary Research Fellow, School of Computing, University of Kent.
• Ph.D. Computer Science, University of Kent, Canterbury, UK.
• M.Sc. Computer Science, The American University in Cairo, Egypt.
• 25+ published journal and conference papers, focusing on:
– classification rules induction,
– decision trees construction,
– Bayesian classification modelling,
– data reduction,
– instance-based learning,
– evolving neural networks, and
– data clustering
• Journals: Swarm Intelligence, Swarm & Evolutionary Computation, Applied Soft Computing, and Memetic Computing.
• Conferences: ANTS, IEEE CEC, IEEE SIS, EvoBio,
ECTA, IEEE WCCI and INNS-BigData.
ResearchGate.org
| © Copyright 2015 Hitachi Consulting78
Thank you!

Weitere ähnliche Inhalte

Was ist angesagt?

Data Mesh Part 4 Monolith to Mesh
Data Mesh Part 4 Monolith to MeshData Mesh Part 4 Monolith to Mesh
Data Mesh Part 4 Monolith to Mesh
Jeffrey T. Pollock
 
Enabling a Data Mesh Architecture with Data Virtualization
Enabling a Data Mesh Architecture with Data VirtualizationEnabling a Data Mesh Architecture with Data Virtualization
Enabling a Data Mesh Architecture with Data Virtualization
Denodo
 

Was ist angesagt? (20)

Data Mesh Part 4 Monolith to Mesh
Data Mesh Part 4 Monolith to MeshData Mesh Part 4 Monolith to Mesh
Data Mesh Part 4 Monolith to Mesh
 
Enabling a Data Mesh Architecture with Data Virtualization
Enabling a Data Mesh Architecture with Data VirtualizationEnabling a Data Mesh Architecture with Data Virtualization
Enabling a Data Mesh Architecture with Data Virtualization
 
Data Lakehouse, Data Mesh, and Data Fabric (r1)
Data Lakehouse, Data Mesh, and Data Fabric (r1)Data Lakehouse, Data Mesh, and Data Fabric (r1)
Data Lakehouse, Data Mesh, and Data Fabric (r1)
 
Building Lakehouses on Delta Lake with SQL Analytics Primer
Building Lakehouses on Delta Lake with SQL Analytics PrimerBuilding Lakehouses on Delta Lake with SQL Analytics Primer
Building Lakehouses on Delta Lake with SQL Analytics Primer
 
Webinar Data Mesh - Part 3
Webinar Data Mesh - Part 3Webinar Data Mesh - Part 3
Webinar Data Mesh - Part 3
 
Time to Talk about Data Mesh
Time to Talk about Data MeshTime to Talk about Data Mesh
Time to Talk about Data Mesh
 
Modern Data Warehousing with the Microsoft Analytics Platform System
Modern Data Warehousing with the Microsoft Analytics Platform SystemModern Data Warehousing with the Microsoft Analytics Platform System
Modern Data Warehousing with the Microsoft Analytics Platform System
 
Modern Data architecture Design
Modern Data architecture DesignModern Data architecture Design
Modern Data architecture Design
 
Data Warehouse or Data Lake, Which Do I Choose?
Data Warehouse or Data Lake, Which Do I Choose?Data Warehouse or Data Lake, Which Do I Choose?
Data Warehouse or Data Lake, Which Do I Choose?
 
Databricks: A Tool That Empowers You To Do More With Data
Databricks: A Tool That Empowers You To Do More With DataDatabricks: A Tool That Empowers You To Do More With Data
Databricks: A Tool That Empowers You To Do More With Data
 
Data platform modernization with Databricks.pptx
Data platform modernization with Databricks.pptxData platform modernization with Databricks.pptx
Data platform modernization with Databricks.pptx
 
Architect’s Open-Source Guide for a Data Mesh Architecture
Architect’s Open-Source Guide for a Data Mesh ArchitectureArchitect’s Open-Source Guide for a Data Mesh Architecture
Architect’s Open-Source Guide for a Data Mesh Architecture
 
Speeding Time to Insight with a Modern ELT Approach
Speeding Time to Insight with a Modern ELT ApproachSpeeding Time to Insight with a Modern ELT Approach
Speeding Time to Insight with a Modern ELT Approach
 
Databricks Platform.pptx
Databricks Platform.pptxDatabricks Platform.pptx
Databricks Platform.pptx
 
Building a Logical Data Fabric using Data Virtualization (ASEAN)
Building a Logical Data Fabric using Data Virtualization (ASEAN)Building a Logical Data Fabric using Data Virtualization (ASEAN)
Building a Logical Data Fabric using Data Virtualization (ASEAN)
 
Data Mesh for Dinner
Data Mesh for DinnerData Mesh for Dinner
Data Mesh for Dinner
 
Building Modern Data Platform with Microsoft Azure
Building Modern Data Platform with Microsoft AzureBuilding Modern Data Platform with Microsoft Azure
Building Modern Data Platform with Microsoft Azure
 
Learn to Use Databricks for Data Science
Learn to Use Databricks for Data ScienceLearn to Use Databricks for Data Science
Learn to Use Databricks for Data Science
 
Databricks Fundamentals
Databricks FundamentalsDatabricks Fundamentals
Databricks Fundamentals
 
Databricks Delta Lake and Its Benefits
Databricks Delta Lake and Its BenefitsDatabricks Delta Lake and Its Benefits
Databricks Delta Lake and Its Benefits
 

Andere mochten auch

(BDT317) Building A Data Lake On AWS
(BDT317) Building A Data Lake On AWS(BDT317) Building A Data Lake On AWS
(BDT317) Building A Data Lake On AWS
Amazon Web Services
 

Andere mochten auch (20)

Microservices, DevOps, and Continuous Delivery
Microservices, DevOps, and Continuous DeliveryMicroservices, DevOps, and Continuous Delivery
Microservices, DevOps, and Continuous Delivery
 
Modern Data Architecture for a Data Lake with Informatica and Hortonworks Dat...
Modern Data Architecture for a Data Lake with Informatica and Hortonworks Dat...Modern Data Architecture for a Data Lake with Informatica and Hortonworks Dat...
Modern Data Architecture for a Data Lake with Informatica and Hortonworks Dat...
 
Building the Enterprise Data Lake - Important Considerations Before You Jump In
Building the Enterprise Data Lake - Important Considerations Before You Jump InBuilding the Enterprise Data Lake - Important Considerations Before You Jump In
Building the Enterprise Data Lake - Important Considerations Before You Jump In
 
Implementing a Data Lake with Enterprise Grade Data Governance
Implementing a Data Lake with Enterprise Grade Data GovernanceImplementing a Data Lake with Enterprise Grade Data Governance
Implementing a Data Lake with Enterprise Grade Data Governance
 
Azure data factory
Azure data factoryAzure data factory
Azure data factory
 
The Emerging Data Lake IT Strategy
The Emerging Data Lake IT StrategyThe Emerging Data Lake IT Strategy
The Emerging Data Lake IT Strategy
 
Enterprise Cloud Data Platforms - with Microsoft Azure
Enterprise Cloud Data Platforms - with Microsoft AzureEnterprise Cloud Data Platforms - with Microsoft Azure
Enterprise Cloud Data Platforms - with Microsoft Azure
 
Intorducing Big Data and Microsoft Azure
Intorducing Big Data and Microsoft AzureIntorducing Big Data and Microsoft Azure
Intorducing Big Data and Microsoft Azure
 
Choosing technologies for a big data solution in the cloud
Choosing technologies for a big data solution in the cloudChoosing technologies for a big data solution in the cloud
Choosing technologies for a big data solution in the cloud
 
Big Data Analytics in the Cloud with Microsoft Azure
Big Data Analytics in the Cloud with Microsoft AzureBig Data Analytics in the Cloud with Microsoft Azure
Big Data Analytics in the Cloud with Microsoft Azure
 
Introduction to Microsoft’s Hadoop solution (HDInsight)
Introduction to Microsoft’s Hadoop solution (HDInsight)Introduction to Microsoft’s Hadoop solution (HDInsight)
Introduction to Microsoft’s Hadoop solution (HDInsight)
 
Data lake – On Premise VS Cloud
Data lake – On Premise VS CloudData lake – On Premise VS Cloud
Data lake – On Premise VS Cloud
 
A lap around Azure Data Factory
A lap around Azure Data FactoryA lap around Azure Data Factory
A lap around Azure Data Factory
 
Sören Eickhoff, Informatica GmbH, "Informatica Intelligent Data Lake – Self S...
Sören Eickhoff, Informatica GmbH, "Informatica Intelligent Data Lake – Self S...Sören Eickhoff, Informatica GmbH, "Informatica Intelligent Data Lake – Self S...
Sören Eickhoff, Informatica GmbH, "Informatica Intelligent Data Lake – Self S...
 
Building a Data Lake - An App Dev's Perspective
Building a Data Lake - An App Dev's PerspectiveBuilding a Data Lake - An App Dev's Perspective
Building a Data Lake - An App Dev's Perspective
 
AWS re:Invent 2016: How to Build a Big Data Analytics Data Lake (LFS303)
AWS re:Invent 2016: How to Build a Big Data Analytics Data Lake (LFS303)AWS re:Invent 2016: How to Build a Big Data Analytics Data Lake (LFS303)
AWS re:Invent 2016: How to Build a Big Data Analytics Data Lake (LFS303)
 
Real-Time Event & Stream Processing on MS Azure
Real-Time Event & Stream Processing on MS AzureReal-Time Event & Stream Processing on MS Azure
Real-Time Event & Stream Processing on MS Azure
 
Best Practices for Building a Data Lake with Amazon S3 - August 2016 Monthly ...
Best Practices for Building a Data Lake with Amazon S3 - August 2016 Monthly ...Best Practices for Building a Data Lake with Amazon S3 - August 2016 Monthly ...
Best Practices for Building a Data Lake with Amazon S3 - August 2016 Monthly ...
 
(BDT317) Building A Data Lake On AWS
(BDT317) Building A Data Lake On AWS(BDT317) Building A Data Lake On AWS
(BDT317) Building A Data Lake On AWS
 
Big Data in Azure
Big Data in AzureBig Data in Azure
Big Data in Azure
 

Ähnlich wie Building the Data Lake with Azure Data Factory and Data Lake Analytics

Introduction to data vault ilja dmitrijev
Introduction to data vault   ilja dmitrijevIntroduction to data vault   ilja dmitrijev
Introduction to data vault ilja dmitrijev
Ilja Dmitrijevs
 

Ähnlich wie Building the Data Lake with Azure Data Factory and Data Lake Analytics (20)

SharePoint Intelligence Real World Business Workflow With Share Point Designe...
SharePoint Intelligence Real World Business Workflow With Share Point Designe...SharePoint Intelligence Real World Business Workflow With Share Point Designe...
SharePoint Intelligence Real World Business Workflow With Share Point Designe...
 
The Marriage of the Data Lake and the Data Warehouse and Why You Need Both
The Marriage of the Data Lake and the Data Warehouse and Why You Need BothThe Marriage of the Data Lake and the Data Warehouse and Why You Need Both
The Marriage of the Data Lake and the Data Warehouse and Why You Need Both
 
Real world business workflow with SharePoint designer 2013
Real world business workflow with SharePoint designer 2013Real world business workflow with SharePoint designer 2013
Real world business workflow with SharePoint designer 2013
 
Is the traditional data warehouse dead?
Is the traditional data warehouse dead?Is the traditional data warehouse dead?
Is the traditional data warehouse dead?
 
Vikram Andem Big Data Strategy @ IATA Technology Roadmap
Vikram Andem Big Data Strategy @ IATA Technology Roadmap Vikram Andem Big Data Strategy @ IATA Technology Roadmap
Vikram Andem Big Data Strategy @ IATA Technology Roadmap
 
Datalake Architecture
Datalake ArchitectureDatalake Architecture
Datalake Architecture
 
Data Lake Overview
Data Lake OverviewData Lake Overview
Data Lake Overview
 
AWS Data Lakes & Best Practices - GoDgtl
AWS Data Lakes & Best Practices - GoDgtlAWS Data Lakes & Best Practices - GoDgtl
AWS Data Lakes & Best Practices - GoDgtl
 
AWS Data Lakes and Best Practices
AWS Data Lakes and Best PracticesAWS Data Lakes and Best Practices
AWS Data Lakes and Best Practices
 
Sql Bits 2020 - Designing Performant and Scalable Data Lakes using Azure Data...
Sql Bits 2020 - Designing Performant and Scalable Data Lakes using Azure Data...Sql Bits 2020 - Designing Performant and Scalable Data Lakes using Azure Data...
Sql Bits 2020 - Designing Performant and Scalable Data Lakes using Azure Data...
 
Data Warehouse
Data Warehouse Data Warehouse
Data Warehouse
 
Introduction to Data Warehouse
Introduction to Data WarehouseIntroduction to Data Warehouse
Introduction to Data Warehouse
 
Azure Data Factory Introduction.pdf
Azure Data Factory Introduction.pdfAzure Data Factory Introduction.pdf
Azure Data Factory Introduction.pdf
 
So You Want to Build a Data Lake?
So You Want to Build a Data Lake?So You Want to Build a Data Lake?
So You Want to Build a Data Lake?
 
Planning and Optimizing Data Lake Architecture - Milos Milovanovic
 Planning and Optimizing Data Lake Architecture - Milos Milovanovic Planning and Optimizing Data Lake Architecture - Milos Milovanovic
Planning and Optimizing Data Lake Architecture - Milos Milovanovic
 
Planing and optimizing data lake architecture
Planing and optimizing data lake architecturePlaning and optimizing data lake architecture
Planing and optimizing data lake architecture
 
5 Steps for Architecting a Data Lake
5 Steps for Architecting a Data Lake5 Steps for Architecting a Data Lake
5 Steps for Architecting a Data Lake
 
Introduction to data vault ilja dmitrijev
Introduction to data vault   ilja dmitrijevIntroduction to data vault   ilja dmitrijev
Introduction to data vault ilja dmitrijev
 
Tera stream ETL
Tera stream ETLTera stream ETL
Tera stream ETL
 
Big data architecture
Big data architectureBig data architecture
Big data architecture
 

Mehr von Khalid Salama

Mehr von Khalid Salama (9)

Microsoft R - ScaleR Overview
Microsoft R - ScaleR OverviewMicrosoft R - ScaleR Overview
Microsoft R - ScaleR Overview
 
Operational Machine Learning: Using Microsoft Technologies for Applied Data S...
Operational Machine Learning: Using Microsoft Technologies for Applied Data S...Operational Machine Learning: Using Microsoft Technologies for Applied Data S...
Operational Machine Learning: Using Microsoft Technologies for Applied Data S...
 
Graph Analytics
Graph AnalyticsGraph Analytics
Graph Analytics
 
Machine learning with Spark
Machine learning with SparkMachine learning with Spark
Machine learning with Spark
 
Spark with HDInsight
Spark with HDInsightSpark with HDInsight
Spark with HDInsight
 
Microsoft Azure Batch
Microsoft Azure BatchMicrosoft Azure Batch
Microsoft Azure Batch
 
NoSQL with Microsoft Azure
NoSQL with Microsoft AzureNoSQL with Microsoft Azure
NoSQL with Microsoft Azure
 
Hive with HDInsight
Hive with HDInsightHive with HDInsight
Hive with HDInsight
 
Data Mining - The Big Picture!
Data Mining - The Big Picture!Data Mining - The Big Picture!
Data Mining - The Big Picture!
 

Kürzlich hochgeladen

怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制
怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制
怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制
vexqp
 
PLE-statistics document for primary schs
PLE-statistics document for primary schsPLE-statistics document for primary schs
PLE-statistics document for primary schs
cnajjemba
 
Top profile Call Girls In bhavnagar [ 7014168258 ] Call Me For Genuine Models...
Top profile Call Girls In bhavnagar [ 7014168258 ] Call Me For Genuine Models...Top profile Call Girls In bhavnagar [ 7014168258 ] Call Me For Genuine Models...
Top profile Call Girls In bhavnagar [ 7014168258 ] Call Me For Genuine Models...
gajnagarg
 
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
nirzagarg
 
Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...
Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...
Top profile Call Girls In Vadodara [ 7014168258 ] Call Me For Genuine Models ...
gajnagarg
 
怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制
怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制
怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制
vexqp
 
+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...
+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...
+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...
Health
 
Reconciling Conflicting Data Curation Actions: Transparency Through Argument...
Reconciling Conflicting Data Curation Actions:  Transparency Through Argument...Reconciling Conflicting Data Curation Actions:  Transparency Through Argument...
Reconciling Conflicting Data Curation Actions: Transparency Through Argument...
Bertram Ludäscher
 
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Klinik kandungan
 
怎样办理旧金山城市学院毕业证(CCSF毕业证书)成绩单学校原版复制
怎样办理旧金山城市学院毕业证(CCSF毕业证书)成绩单学校原版复制怎样办理旧金山城市学院毕业证(CCSF毕业证书)成绩单学校原版复制
怎样办理旧金山城市学院毕业证(CCSF毕业证书)成绩单学校原版复制
vexqp
 

Kürzlich hochgeladen (20)

Data Analyst Tasks to do the internship.pdf
Data Analyst Tasks to do the internship.pdfData Analyst Tasks to do the internship.pdf
Data Analyst Tasks to do the internship.pdf
 
Capstone in Interprofessional Informatic // IMPACT OF COVID 19 ON EDUCATION
Capstone in Interprofessional Informatic  // IMPACT OF COVID 19 ON EDUCATIONCapstone in Interprofessional Informatic  // IMPACT OF COVID 19 ON EDUCATION
Capstone in Interprofessional Informatic // IMPACT OF COVID 19 ON EDUCATION
 
怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制
怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制
怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制
 
The-boAt-Story-Navigating-the-Waves-of-Innovation.pptx
The-boAt-Story-Navigating-the-Waves-of-Innovation.pptxThe-boAt-Story-Navigating-the-Waves-of-Innovation.pptx
The-boAt-Story-Navigating-the-Waves-of-Innovation.pptx
 
PLE-statistics document for primary schs
PLE-statistics document for primary schsPLE-statistics document for primary schs
PLE-statistics document for primary schs
 
Top profile Call Girls In bhavnagar [ 7014168258 ] Call Me For Genuine Models...
Top profile Call Girls In bhavnagar [ 7014168258 ] Call Me For Genuine Models...Top profile Call Girls In bhavnagar [ 7014168258 ] Call Me For Genuine Models...
Top profile Call Girls In bhavnagar [ 7014168258 ] Call Me For Genuine Models...
 
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
Processed Datasets – The results of (big) data processing jobs: a set of tabular datasets (files) in a query-optimized format
Data Catalog – Usually a relational database that contains a catalog of the datasets in the data lake
Data Processing Jobs – Big data processing jobs (MapReduce, Hive, Pig, Spark, U-SQL) that process the data and apply advanced analytics, including ML and pattern discovery
Jobs Scheduler – A scheduler that orchestrates the execution of the data processing jobs
SQL Query Engine – ODBC, JDBC APIs to query the datasets in the data lake using SQL-like scripts
Public APIs – APIs for retrieving the datasets in the data lake; in some cases, for ingesting data
  • 11. | © Copyright 2015 Hitachi Consulting11 Data Lake Technologies
[Diagram: technology options per layer – Data Ingestion, Data Store, Data Processing, Jobs Scheduler]
 Data Store (Distributed File System): HDFS, GFS, Azure Blob Storage, Azure Data Lake, S3, etc.
  • 12. | © Copyright 2015 Hitachi Consulting12 Data Lake & Enterprise Data Platform – Microsoft Patterns & Practices
 1 – ETL Level Integration
 2 – DW Level Integration
 3 – BI Level Integration: Corporate Data Model, Reports/Dashboards (Mashup)
  • 13. | © Copyright 2015 Hitachi Consulting13 Data Ingestion – Populating the Data Lake
  • 14. | © Copyright 2015 Hitachi Consulting14 Data Ingestion – Batch Data Ingestion
 Refers to ingesting large amounts of data into the data lake at low frequency (hourly, daily, monthly, etc.)
 Data sources: RDBMS, NoSQL data stores, file systems, blob storages
 Data is extracted from the source using a time-window filter (from–to), or based on the files existing in the source location
 If data files are pushed to the data lake using ingestion APIs, they are received in an internal queue before they are consumed and stored in the data lake
 Runs on a schedule
[Diagram: pull flows (Extract > Transform > Store) and a push flow (Push > Queue via Upload API > Transform & Store) into the data lake]
  • 15. | © Copyright 2015 Hitachi Consulting15 Data Ingestion – Real-time Data Ingestion
 Refers to ingesting small data messages into the data lake at high frequency (near-real time)
 In some scenarios, data messages are pushed into a message queue; the data ingestion tool consumes these messages from the queue and stores them in the data lake
 In other scenarios, the data generator calls an ingestion API to push the data to the data lake, where the data message is received in an internal queue before it is transformed and stored in the lake
 If the data messages are small, it is preferable to consume a “window” of messages at a time and store them as one file in the data lake, to avoid having many small files (one per message) – see the sketch below
 One idea is to store each message in an intermediate NoSQL store in real time, and have a batch ingestion job retrieve all the inserted data at once and store it as one file in the data lake
[Diagram: Enqueue > consume a data window > Transform & Store; or consume each message into a temp NoSQL store > batch Extract, Transform & Store]
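As a rough illustration of the message-windowing idea above (not from the original deck), the following C# sketch buffers incoming messages and flushes them as a single file once a count or age threshold is reached. The class and member names (MessageWindow, Flush, the landing path) are hypothetical, and the queue client and data lake writer are abstracted away:

using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical sketch: buffer small messages and write them as one file
// per "window" to avoid landing many tiny files in the data lake.
public class MessageWindow
{
    private readonly List<string> _buffer = new List<string>();
    private readonly int _maxMessages;
    private readonly TimeSpan _maxAge;
    private DateTime _windowStart = DateTime.UtcNow;

    public MessageWindow(int maxMessages, TimeSpan maxAge)
    {
        _maxMessages = maxMessages;
        _maxAge = maxAge;
    }

    // Called for each message consumed from the queue.
    public void Add(string message)
    {
        _buffer.Add(message);
        if (_buffer.Count >= _maxMessages || DateTime.UtcNow - _windowStart >= _maxAge)
            Flush();
    }

    // Writes the whole window as a single file; in practice this would
    // target the data lake store rather than the local file system.
    private void Flush()
    {
        if (_buffer.Count == 0) return;
        var path = $"landing/{_windowStart:yyyy}/{_windowStart:MM}/{_windowStart:dd}/messages-{_windowStart:HHmmss}.txt";
        Directory.CreateDirectory(Path.GetDirectoryName(path));
        File.WriteAllLines(path, _buffer);
        _buffer.Clear();
        _windowStart = DateTime.UtcNow;
    }
}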
  • 16. | © Copyright 2015 Hitachi Consulting16 Data Ingestion – Transformation
Data ingestion tools are not meant to perform data processing; rather, some data transformation tasks are applied while ingesting data into the data lake:
 File format change (decode/encode) – optimize the data file for read/write
 File compression before store – optimize storage and file shuffling
 Merging multiple files (or data messages) from the source into one file to be stored, to avoid having many small files – see the sketch below
 Adding metadata columns to the dataset in-flight (enrichment)
 Mutate: removing or renaming columns, etc.
In some cases, if the source is a file system, binary (blind) file ingestion can be performed (without parsing the file content) if no transformations or encoding are needed – maybe just compression
[Diagram: Parse (decode) > Merge > Mutate + Enrich > Encode > Compress > Data lake]
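To make the merge-and-compress step concrete, here is a minimal C# sketch using only the standard GZipStream; it assumes plain-text source files that each end with a newline, and the file paths are placeholders:

using System.IO;
using System.IO.Compression;

// Sketch: merge several small source files into one gzip-compressed
// file before landing it in the data lake.
public static class IngestTransform
{
    public static void MergeAndCompress(string[] sourceFiles, string targetFile)
    {
        using (var target = File.Create(targetFile))
        using (var gzip = new GZipStream(target, CompressionLevel.Optimal))
        {
            foreach (var file in sourceFiles)
            {
                using (var source = File.OpenRead(file))
                {
                    // Append each source file's bytes to the single compressed output.
                    source.CopyTo(gzip);
                }
            }
        }
    }
}

For example, IngestTransform.MergeAndCompress(Directory.GetFiles("staging"), "landing/contoso_sales.txt.gz") lands one compressed file instead of many small ones.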
  • 17. | © Copyright 2015 Hitachi Consulting17 Data Ingestion – Data Partitioning
 The incoming data is partitioned, and the files are stored in different (hierarchical) directories in the file system
 Partitioning is typically based on date/time (year/month/day/…) and/or any data element used in the “where” clause (country, site, category, etc.) – see the path sketch below
 The goal is to achieve partition elimination while processing the data (when the partition key is part of the where/filter criteria of the processing)
 Improves the performance of query/processing jobs, as fewer data files are considered
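As a small, hypothetical example of such a partition scheme (mirroring the {Year}/{Month}/{Day} folder pattern used in the Data Factory dataset definition later in the deck), the destination path can be derived from the slice date:

using System;

// Sketch: build a year/month/day partition path for a data slice,
// so processing jobs can eliminate partitions by folder.
public static class PartitionPath
{
    public static string For(string dataset, DateTime sliceStart) =>
        $"contoso/{dataset}/{sliceStart:yyyy}/{sliceStart:MM}/{sliceStart:dd}/{dataset}-{sliceStart:yyyyMMdd}.txt";
}

// Example: PartitionPath.For("customers", new DateTime(2017, 1, 6))
// returns "contoso/customers/2017/01/06/customers-20170106.txt"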
  • 18. | © Copyright 2015 Hitachi Consulting18 Data Ingestion – File Formats & Compression
Raw Files
 Raw data files may be stored in their original format (TEXT/XML/JSON)
 The original format allows users to download the original files and work with them
 Compression is applied to improve query/processing performance
 For very large files, splittable compression is applied (e.g. bzip2, LZO) – usually slow to compress
 For normal files, high-speed (less efficient) compression can be applied (e.g. gzip, deflate)
 Splittability is important, as multiple processing tasks can work on different parts of the file simultaneously
Processed Data Files
 All the processed data files (datasets) should follow the same format
 Since the processed datasets are query-intensive (used to perform aggregations, etc.), column-oriented formats are recommended for better read performance – e.g. Parquet, ORC
 For supporting schema evolution (e.g. expected changes in schema), Parquet and Avro are recommended
 Since column-oriented formats are binary and compressed, no compression, or Snappy (fast, less efficient) compression, can be applied
  • 19. | © Copyright 2015 Hitachi Consulting19 Data Ingestion – File Formats
(Size and read-performance ranks: 1 = best, 6 = worst)
 TEXTFILE – Delimited text file; any platform. Content: plain text. Splittable: yes. Schema evolution: no. Size: 6. Read perf.: 6
 SEQUENCEFILE – Binary key-value pairs; optimized for MapReduce. Content: plain text. Splittable: yes. Schema evolution: no. Size: 5. Read perf.: 4
 PARQUET – Columnar storage format; optimized for querying “wide” tables. Content: binary (compressed). Splittable: no. Schema evolution: yes. Size: 2. Read perf.: 2
 RCFile – Columnar storage format. Content: binary (compressed). Splittable: no. Schema evolution: no. Size: 3. Read perf.: 3
 ORC – Columnar storage format; optimized for distinct selection and aggregations; vectorization (batch processing, where each batch is a column vector). Content: binary (highly compressed). Splittable: no. Schema evolution: no. Size: 1. Read perf.: 1
 AVRO – Serialization system with evolvable schema-driven binary data; cross-platform interoperability. Content: binary. Splittable: yes. Schema evolution: yes. Size: 4. Read perf.: 5
Note that if Text and Avro files are compressed, they perform pretty well in terms of query performance
  • 20. | © Copyright 2015 Hitachi Consulting20 Data Ingestion – Compression
Compression reduces the space needed to store files and speeds up data transfer across the network, or to and from disk.
 DEFLATE – extension .deflate; codec org.apache.hadoop.io.compress.DefaultCodec; splittable: no; efficacy (compression ratio): medium; speed (to compress): medium
 gzip – .gz; org.apache.hadoop.io.compress.GzipCodec; splittable: no; efficacy: medium; speed: medium
 bzip2 – .bz2; org.apache.hadoop.io.compress.BZip2Codec; splittable: yes; efficacy: high; speed: low
 LZO – .lzo; com.hadoop.compression.lzo.LzopCodec; splittable: yes (when indexed); efficacy: low; speed: high
 LZ4 – .lz4; org.apache.hadoop.io.compress.Lz4Codec; splittable: no; efficacy: low; speed: high
 Snappy – .snappy; org.apache.hadoop.io.compress.SnappyCodec; splittable: no; efficacy: low; speed: high
  • 21. | © Copyright 2015 Hitachi Consulting21 Azure Data Lake Store
  • 22. | © Copyright 2015 Hitachi Consulting22 [Diagram: the Azure big data ecosystem around the Data Lake – Extract/Load; Filter, Reform, Transpose; Aggregate/Calculate]
 Big Data Processing: Hive/Pig, U-SQL, Spark, Storm
 Real-time Stream Processing: Stream Analytics, Event Hubs, Storm, Spark Streaming
 Data Science & Machine Learning: Azure Machine Learning, Microsoft R Server, Spark ML, Azure Cognitive Services
 NoSQL Data Stores: Table Storage, HBase, DocumentDB
 Data Discovery & Search: Azure Data Catalog, Azure Search
 Data Orchestration & Movement: Azure Data Factory
 High Performance Computing: Azure Batch
  • 23. | © Copyright 2015 Hitachi Consulting23 Azure Data Lake Store – Azure Data Lake Services
Data Lake Store
 Built for data files, rather than blobs
 WebHDFS REST APIs for Hadoop integration
 Optimized for distributed analytical workloads
 Unlimited storage, petabyte files
 Security: AAD and ACLs, encryption
 Built-in catalog for “virtual” databases and .net assemblies
Data Lake Analytics
 Big data jobs as a service
 Built on Apache YARN
 U-SQL
 Unit of cost: running job
HDInsight
 Hadoop cluster as a service
 MapReduce | Pig | Hive | Spark | HBase | Oozie | Storm
 Unit of cost: running nodes
  • 24. | © Copyright 2015 Hitachi Consulting24 Azure Data Lake Store Azure Portal adl://<data_lake_store_name>.azuredatalakestore.net
  • 25. | © Copyright 2015 Hitachi Consulting25 Azure Data Lake Store – Ingest Data
 Azure Stream Analytics: real-time, event-based, from Events Hub/IoT Hub
 Azure Data Factory: batch, scheduled, from SQL/NoSQL/file systems
 Azure HDInsight: Sqoop (from SQL), Distcp (from Blob/HDFS), Oozie for scheduling
 Interactively: Azure Portal, Azure PowerShell, Azure cross-platform CLI, Data Lake Tools for Visual Studio, AdlCopy tool (from Blob)
 Programmatically: .Net SDK, Java SDK, Node.js SDK, REST APIs
  • 26. | © Copyright 2015 Hitachi Consulting26 Azure Data Lake Store – Secure Data
Authentication
 Azure Active Directory (AAD)
 Users and service identities defined in your AAD can access your Data Lake Store account
 Key advantages of using Azure Active Directory as a centralized access control mechanism: simplified identity lifecycle management; authentication from any client through a standard open protocol, such as OAuth or OpenID; federation with enterprise directory services and cloud identity providers
Network Isolation
 Firewall with IP address ranges for your trusted clients
Authorization
 Role-based access control (Owner/Reader/Contributor/Administrator)
 Access control lists (Read/Write/Execute); files and folders both have access ACLs
Data Protection
 In transit: the Transport Layer Security (TLS) protocol secures data over the network
 At rest: transparent encryption for data stored in the account; Data Lake Store automatically encrypts data prior to persisting and decrypts it prior to retrieval, completely transparently to the client accessing the data
  • 27. | © Copyright 2015 Hitachi Consulting27 Azure Data Lake Store .Net SDK
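The original slide shows a code screenshot that did not survive extraction. As a stand-in, here is a minimal hedged sketch against the Microsoft.Azure.Management.DataLake.Store .NET SDK of that era; the account name, AAD application details, and paths are placeholders, and the exact method surface may vary by SDK version:

using System.IO;
using System.Text;
using Microsoft.Azure.Management.DataLake.Store;
using Microsoft.Rest.Azure.Authentication;

// Hedged sketch: authenticate with AAD, create a partition folder,
// and upload a small file to the Data Lake Store.
class AdlsExample
{
    static void Main()
    {
        // Hypothetical AAD application (service principal) details.
        var creds = ApplicationTokenProvider.LoginSilentAsync(
            "<tenant-id>", "<client-id>", "<client-secret>").Result;

        var client = new DataLakeStoreFileSystemManagementClient(creds);
        const string account = "<data_lake_store_name>";

        // Create a destination folder and write a file into it.
        client.FileSystem.Mkdirs(account, "/contoso/customers/2017/01/06");
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes("id|name\n1|Contoso\n")))
        {
            client.FileSystem.Create(account,
                "/contoso/customers/2017/01/06/customers.txt",
                stream, overwrite: true);
        }
    }
}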
  • 28. | © Copyright 2015 Hitachi Consulting28 Azure Data Lake Store – Data Lake vs Blob Storage
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-comparison-with-blob-storage
 Purpose: ADLS – optimized performance for parallel analytics workloads, high throughput and IOPS. Blob – general-purpose object store for a wide variety of storage scenarios
 Use cases: ADLS – batch, interactive, streaming analytics and machine learning data, such as log files, IoT data, click streams, large datasets. Blob – any type of text or binary data, such as application back end, backup data, media storage for streaming, and general-purpose data
 Key concepts: ADLS – a Data Lake Store account contains folders, which in turn contain data stored as files. Blob – a storage account has containers, which in turn hold data in the form of blobs
 Structure: ADLS – hierarchical file system. Blob – object store with flat namespace
 HDFS client: yes for both
 Server-side API: ADLS – WebHDFS-compatible REST API. Blob – Azure Blob Storage REST API
 Security (authentication): ADLS – based on Azure Active Directory identities. Blob – based on shared secrets: account access keys and shared access signature keys
 Security (authorization): ADLS – POSIX access control lists (ACLs) based on Azure Active Directory identities, set at file and folder level. Blob – account access keys for account-level authorization; shared access signature keys for account, container, or blob authorization
 Encryption of data at rest: transparent, server-side for both
 Developer SDKs: ADLS – .NET, Java, Python, Node.js. Blob – .NET, Java, Python, Node.js, C++, Ruby
 Size limits: ADLS – no limits on account sizes, file sizes, or number of files. Blob – specific documented limits
 Redundancy: ADLS – locally redundant (multiple copies of data in one Azure region). Blob – locally redundant (LRS), globally redundant (GRS), read-access globally redundant (RA-GRS)
  • 29. | © Copyright 2015 Hitachi Consulting29 Azure Data Factory
  • 30. | © Copyright 2015 Hitachi Consulting30 Azure Data Factory – Cloud data processing & movement services
A managed Azure cloud service for building & operating data pipelines: scalable, reliable, pay-as-you-use
 Batch data ingestion & movement
 Data processing orchestration and scheduling
 Format conversion, compression, file merges, column mapping
 No real-time data collection (use Stream Analytics instead)
 Not a data processing engine itself, but orchestrates different compute platforms
  • 31. | © Copyright 2015 Hitachi Consulting31 Azure Data Factory – Concepts & Components
 Linked Service: connection information to a data store or a compute resource
 Dataset: a pointer to the data object used as an input or an output of an Activity
 Activity: an action to perform on data; takes 0–N input dataset(s) and produces 1–N output dataset(s)
 Pipeline: a logical grouping of Activities with a certain sequence & schedule
[Diagram: Linked Services feed Datasets; Datasets flow through Activities, which are grouped into Pipelines]
  • 36. | © Copyright 2015 Hitachi Consulting36 Azure Data Factory – Features
Linked Services – Compute
 Azure HDInsight (on-demand), Azure Batch
 Azure Machine Learning
 Azure Data Lake Analytics
 Azure SQL DW (stored proc), Azure SQL DB (stored proc), SQL Server (stored proc)
Linked Services – Data
 Azure Blob, Azure Table, Azure SQL Database, Azure SQL Data Warehouse, Azure DocumentDB, Azure Data Lake Store
 SQL Server, ODBC data sources, Hadoop Distributed File System (HDFS)
 Data Gateway – connect to an on-premises data source
Activities
 Data movement (copy data from source to sink)
 Data transformation: (on-demand) HDInsight (Hive | Pig | MapReduce | Spark), Azure ML batch, stored procedure, Data Lake U-SQL jobs, custom .NET (Azure Batch)
  • 37. | © Copyright 2015 Hitachi Consulting37 Azure Data Factory – Taxonomy: Dataset
A dataset definition contains:
 name
 type: AzureBlob, AzureTable, AzureDLS, AzureSQLDB, AzureSQLDW, etc.
 properties:
 structure (the columns)
 linked service
 availability: frequency, interval, offset
 external (whether the data is produced outside the factory)
 typeProperties, by dataset type:
 SQL: table name
 File/Blob: folder path, file name, file format, compression, partitionedBy
 DocumentDB: collection name
 policy (validation), e.g. minimumRows (SQL), minimumSizeMB (file/blob)
  • 41. | © Copyright 2015 Hitachi Consulting41 Azure Data Factory – Taxonomy: Pipeline
A pipeline definition contains:
 name
 properties:
 start / end (the pipeline's active period)
 isPaused
 pipelineMode: one-time or scheduled
 expirationTime
 activities [ ], where each activity has:
 name
 type: Copy | HDInsight… | SqlServerStoredProcedure | DataLakeAnalyticsU-SQL | AzureMLBatchExecution | a custom .NET activity (e.g. MyDotNetActivity)
 linked service
 inputs [ <datasets> ] and outputs [ <datasets> ]
 typeProperties
 policy: timeout, retry, executionPriorityOrder
  • 44. | © Copyright 2015 Hitachi Consulting44 Azure Data Factory – Taxonomy: Data Movement (Copy) Activity
Type: Copy. Its typeProperties contain a source, a sink, and optionally a translator (column mapping), each with their own type and typeProperties.
SqlSource – typeProperties:
 sqlReaderQuery, e.g. "$$Text.Format('select * from [Table] where [DataDate] >= {0:yyyyMMdd} AND [DataDate] < {1:yyyyMMdd}', WindowStart, WindowEnd)"
 or sqlReaderStoredProcedureName + sqlReaderStoredProcedureParameters
SqlSink – typeProperties:
 sqlWriteCleanupScript (optional)
 sqlWriteStoredProcedure (takes the input data as a table-valued parameter)
 sqlWriteStoredProcedureParameters – not supported in AzureSQLDW
 sqlWriteTableType (input data table type, pre-defined in SQL) – not supported in AzureSQLDW
 sliceDateTimeColumnIdentifier (binary) – not supported in AzureSQLDW
 writeBatchSize
BlobSource – typeProperties:
 recursive: true|false
BlobSink – typeProperties:
 copyBehavior (PreserveHierarchy | FlattenHierarchy | MergeFiles)
  • 46. | © Copyright 2015 Hitachi Consulting46 Azure Data Factory – Taxonomy: Copy Performance Tuning Elements
 concurrency – the activity runs multiple data slices concurrently
 parallelCopies – multiple copy threads per slice, depending on source/sink
 cloudDataMovementUnits – (1, 2, 4, 8) CPU/memory allocation for the copy
 enableStaging – useful when moving data from on-premises to the cloud
  • 47. | © Copyright 2015 Hitachi Consulting47 Data Lake Ingestion Scenarios with Azure Data Factory
  • 48. | © Copyright 2015 Hitachi Consulting48 Azure Data Lake Ingestion Scenarios
[Diagram: sources flowing into Azure Data Lake]
 Events Hub > Stream Analytics: consume a set of events using a tumbling window
 Events Hub > custom event consumer (Azure Function) > Table Storage: consume messages, insert each event as a new record
 SqlDB: extract records where <ModifiedDate> is between WindowStart and WindowEnd
 Blob Storage: get existing files in a specific source location matching some name format; preserve the source folder/file hierarchy, flatten the hierarchy (into the file name), or merge all source files into one file at the destination
  • 49. | © Copyright 2015 Hitachi Consulting49 Azure Data Lake Ingestion Scenarios – Data Factory: ADLS Dataset Definition

{
  "name": "output-azdlsfolder-contoso_customer",
  "properties": {
    "published": false,
    "type": "AzureDataLakeStore",
    "linkedServiceName": "azdls_kspocs",
    "typeProperties": {
      "folderPath": "contoso/customers/{Year}/{Month}/{Day}",
      "format": {
        "type": "TextFormat",
        "rowDelimiter": "\n",
        "columnDelimiter": "|",
        "nullValue": "NA",
        "quoteChar": "\"",
        "firstRowAsHeader": true
      },
      "partitionedBy": [
        { "name": "Year",  "value": { "type": "DateTime", "date": "SliceStart", "format": "yyyy" } },
        { "name": "Month", "value": { "type": "DateTime", "date": "SliceStart", "format": "MM" } },
        { "name": "Day",   "value": { "type": "DateTime", "date": "SliceStart", "format": "dd" } }
      ],
      "compression": { "type": "GZip", "level": "Optimal" }
    },
    "availability": { "frequency": "Day", "interval": 1 },
    "external": false
  }
}

Defines the destination file format, compression, and partitioning in the Data Lake Store.
  • 50. | © Copyright 2015 Hitachi Consulting50 Azure Data Lake Ingestion Scenarios – Data Factory: Copy Activity (SqlSource)

"activities": [
  {
    "type": "Copy",
    "typeProperties": {
      "source": {
        "type": "SqlSource",
        "sqlReaderQuery": "$$Text.Format('select * from cso.FactOnlineSales where CAST(CONVERT(VARCHAR(10), DateKey, 112) AS INT) >= {0:yyyyMMdd} AND CAST(CONVERT(VARCHAR(10), DateKey, 112) AS INT) < {1:yyyyMMdd}', WindowStart, WindowEnd)"
      },
      "sink": {
        "type": "AzureDataLakeStoreSink",
        "writeBatchSize": 0,
        "writeBatchTimeout": "00:00:00"
      }
    },
    "inputs": [ { "name": "input-azsqldbtable-contoso_sales" } ],
    "outputs": [ { "name": "output-azdlsfolder-contoso_sales" } ],
    "policy": { "timeout": "01:00:00", "concurrency": 1, "retry": 1 },
    "scheduler": { "frequency": "Day", "interval": 1 },
    "name": "act-copy-sales_SQLDB-DLS"
  }
]

The fetch query is parameterized to get only the current data slice (based on WindowStart & WindowEnd).
  • 51. | © Copyright 2015 Hitachi Consulting51 Azure Data Lake Ingestion Scenarios – Data Factory: Copy Activity (BlobSource)

{
  "name": "contoso_product-pipeline",
  "properties": {
    "activities": [
      {
        "type": "Copy",
        "typeProperties": {
          "source": { "type": "BlobSource", "recursive": true },
          "sink": { "type": "AzureDataLakeStoreSink", "copyBehavior": "MergeFiles" }
        },
        "inputs": [ { "name": "input-azstorage-contoso_product" } ],
        "outputs": [ { "name": "output-azdlsfolder-contoso_product_v3" } ],
        "policy": { "timeout": "1.00:00:00", "concurrency": 1, "retry": 1 },
        "scheduler": { "frequency": "Day", "interval": 1 },
        "name": "act-copy-customer_AZS->DLS"
      }
    ],
    "start": "2017-01-06T08:00:00.0106386Z",
    "end": "2017-01-07T08:00:00.0106386Z",
    "isPaused": false,
    "pipelineMode": "Scheduled"
  }
}

"recursive": true fetches all the files within the input dataset's path (folder) and its subfolders; "MergeFiles" merges all the fetched files into one file at the destination (other options: PreserveHierarchy & FlattenHierarchy).
  • 52. | © Copyright 2015 Hitachi Consulting52 Azure Data Lake Ingestion Scenarios Stream Analytics – From Events Hub
  • 53. | © Copyright 2015 Hitachi Consulting53 Azure Data Lake Analytics
  • 54. | © Copyright 2015 Hitachi Consulting54 Azure Data Lake Analytics – Big Data Processing Jobs
 Big data processing jobs as a service
 U-SQL: simple and familiar, powerful, and extensible
 Affordable and cost-effective: pay as you use
 Dynamic scaling
 Integrates seamlessly with your IT investments
 Deploy on Azure & schedule using Data Factory
 Develop faster, debug, and optimize smarter using familiar tools
  • 55. | © Copyright 2015 Hitachi Consulting55 Azure Data Lake Analytics – U-SQL
 Natively parallel
 Extensible (user-defined functions, operators, and aggregate functions)
 C# meets SQL (.NET data types, C# expressions + SQL set statements)
 Set-based and procedural processing
 Works with structured and unstructured data
 Reuse of custom-built .NET assemblies
  • 56. | © Copyright 2015 Hitachi Consulting56 Azure Data Lake Analytics – U-SQL Processing Model
 Create a managed data model (database, schemas, tables, functions):
 Only a schema on top of existing files
 Allows querying data files using SQL on top of tables/views
 Reuse of procs and functions
 Retrieve data:
 From data lake files using EXTRACT – apply schema on read
 From managed tables using SELECT
 From external data sources (Azure SQL DB/DW, SQL Server)
 Data is retrieved as a @rowset
 A rowset is processed (aggregated, joined with other rowsets, etc.) using SQL and C# to produce another rowset
 A rowset is stored:
 As a data lake file using OUTPUT
 In a managed table using INSERT (apply schema on write) or CREATE TABLE AS SELECT
  • 57. | © Copyright 2015 Hitachi Consulting57 Azure Data Lake Analytics – U-SQL Processing Model
[Diagram: EXTRACT from a data lake file/directory, or SELECT from the managed data model (database, schemas, tables, index, views, functions, procs) and external data sources/tables (Azure SQLDB, Azure SQLDW, SQLSvr) > @rowset > transform, join, explode, aggregate, process > @rowset > OUTPUT to a data lake file/directory, or INSERT/CTAS* into a managed table]
*CTAS: CREATE TABLE AS SELECT
  • 58. | © Copyright 2015 Hitachi Consulting58 Azure Data Lake Analytics – U-SQL Script Structure
 Use statement: USE DATABASE <database>
 Declare variables: DECLARE @input string = "<file location>"
 Register assembly: CREATE ASSEMBLY <myDotNetLib> FROM "<dll location>"
 Reference assembly: REFERENCE ASSEMBLY <myDotNetLib>
 DDL statement: CREATE TABLE | VIEW | … (create table: index, partition, distribution)
 Query statement: @rowset = SELECT … FROM <@set>|<table>
 Extract statement: @rowset = EXTRACT <columns> FROM <file location> USING <extractor>
 DML statement: INSERT INTO <table> SELECT … FROM @rowset
 CTAS: CREATE TABLE <table> AS SELECT … FROM @rowset
 Output statement: OUTPUT @rowset TO <file location> USING <outputter>
  • 59. | © Copyright 2015 Hitachi Consulting59 Azure Data Lake Analytics Dev and Test U-SQL locally (using Visual Studio Data Lake Analytics Tools)
  • 60. | © Copyright 2015 Hitachi Consulting60 Azure Data Lake Analytics – U-SQL Example: data aggregation

Create the managed model:

USE DATABASE master;
CREATE DATABASE IF NOT EXISTS contoso;
USE DATABASE contoso;
CREATE SCHEMA IF NOT EXISTS mrt;
DROP TABLE IF EXISTS mrt.sales;
CREATE TABLE mrt.sales
(
    Year int,
    Month int,
    Country string,
    ProductSubCategory string,
    TotalSales double,
    INDEX index_sales CLUSTERED(ProductSubCategory ASC)
    PARTITIONED BY (Year, Month)
    DISTRIBUTED BY HASH (Country)
);

Process the data:

DECLARE @datafolder string = "sample-data/data-files/{*}.txt";

@data =
    EXTRACT identifier string,
            value double
    FROM @datafolder
    USING Extractors.Csv();

@result =
    SELECT identifier,
           DateTime.Now AS timestamp,
           SUM(value) AS total
    FROM @data
    GROUP BY identifier, DateTime.Now;

USE DATABASE contoso;
DROP TABLE IF EXISTS dbo.test;
CREATE TABLE dbo.test
(
    INDEX [idx_test] CLUSTERED (identifier ASC)
    PARTITIONED BY HASH (identifier)
)
AS SELECT * FROM @result;
  • 62. | © Copyright 2015 Hitachi Consulting62 Azure Data Lake Analytics – U-SQL Example: Word Count

DECLARE @input_file string = "sample-data/text-input.txt";
DECLARE @out_file string = "sample-data/text-output.txt";

@raw =
    EXTRACT line string
    FROM @input_file
    USING Extractors.Csv();

@arrays =
    SELECT new SQL.ARRAY<string>(line.Split(' ').Where(word => word.Length > 1)) AS words
    FROM @raw;

@map =
    SELECT 1 AS count,
           tmp.word.ToString().ToUpper() AS word
    FROM @arrays AS arrays
    CROSS APPLY EXPLODE(arrays.words) AS tmp(word);

@reduce =
    SELECT map.word,
           SUM(map.count) AS [Count]
    FROM @map AS map
    GROUP BY map.word;

OUTPUT @reduce
TO @out_file
ORDER BY [Count] DESC
USING Outputters.Tsv();
  • 63. | © Copyright 2015 Hitachi Consulting63 U-SQL .net Extensibility
  • 64. | © Copyright 2015 Hitachi Consulting64 Azure Data Lake Analytics – U-SQL .net extensibility
 Code-behind (script.usql.cs): write static methods and use them in the U-SQL script
 Assemblies:
 Create DLLs with the processing logic
 Upload the DLLs to the Data Lake Store
 Register the assemblies in a data lake managed database
 Reference the assembly in the U-SQL script and use its code
 Implement custom: extractors, outputters, aggregators, appliers, combiners, processors
  • 65. | © Copyright 2015 Hitachi Consulting65 Azure Data Lake Analytics – U-SQL .net extensibility | terms frequency example

Managed model:

USE DATABASE master;
CREATE DATABASE IF NOT EXISTS demo;
USE DATABASE demo;
CREATE SCHEMA IF NOT EXISTS txt;
DROP TABLE IF EXISTS txt.termstats;
CREATE TABLE txt.termstats
(
    Term string,
    Frequency long?,
    AverageFrequency double,
    VarianceFromAverage double,
    Timestamp DateTime,
    Year int,
    Month int,
    Day int,
    INDEX index_termstats CLUSTERED(Term ASC)
    PARTITIONED BY (Year, Month, Day)
    DISTRIBUTED BY HASH (Term)
);
  • 66. | © Copyright 2015 Hitachi Consulting66 Azure Data Lake Analytics U-SQL – .net extensibility | terms frequency example Implement Full-line Extractor
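The extractor code screenshot is lost; below is a hedged reconstruction using the public U-SQL extensibility interfaces from Microsoft.Analytics.Interfaces. The namespace and class names are chosen to match the process-termstats.usql script shown on a later slide, but the body is illustrative rather than the author's original:

using System.Collections.Generic;
using System.IO;
using Microsoft.Analytics.Interfaces;

namespace DLAnalytics.Services
{
    // Emits each input line as a single-column row, so downstream U-SQL
    // can tokenize it. AtomicFileProcessing = true means the file is not
    // split across vertices, so the extractor always sees whole lines.
    [SqlUserDefinedExtractor(AtomicFileProcessing = true)]
    public class FullLineExtractor : IExtractor
    {
        public override IEnumerable<IRow> Extract(IUnstructuredReader input, IUpdatableRow output)
        {
            using (var reader = new StreamReader(input.BaseStream))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    output.Set<string>(0, line);   // first (and only) column: the raw line
                    yield return output.AsReadOnly();
                }
            }
        }
    }
}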
  • 67. | © Copyright 2015 Hitachi Consulting67 Azure Data Lake Analytics – U-SQL .net extensibility | terms frequency example
 Registering the assembly: register-assemblies.usql
 Referencing and using the assembly: process-termstats.usql
  • 68. | © Copyright 2015 Hitachi Consulting68 Azure Data Lake Analytics U-SQL – .net extensibility | terms frequency example Code-behind
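The code-behind screenshot is likewise lost. Since process-termstats.usql calls DLAnalytics.Demo.Tokenizer.Tokenize(2, line), a plausible reconstruction is a static n-gram tokenizer; treat the splitting rules here as assumptions:

using System.Collections.Generic;
using System.Linq;

namespace DLAnalytics.Demo
{
    // Hedged reconstruction of the code-behind used by process-termstats.usql.
    // Tokenize(n, line) is assumed to split a line into n-word terms (n-grams).
    public static class Tokenizer
    {
        public static IEnumerable<string> Tokenize(int n, string line)
        {
            var words = line
                .Split(new[] { ' ', '\t', ',', '.', ';', ':' },
                       System.StringSplitOptions.RemoveEmptyEntries)
                .Where(w => w.Length > 1)
                .ToArray();

            // Slide a window of size n over the words to form terms.
            for (int i = 0; i + n <= words.Length; i++)
                yield return string.Join(" ", words.Skip(i).Take(n));
        }
    }
}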
  • 69. | © Copyright 2015 Hitachi Consulting69 Azure Data Lake Analytics – U-SQL .net extensibility | term frequency example

process-termstats.usql:

DECLARE @input_file string = "demo-data/doc1.txt";

USE DATABASE demo;
REFERENCE ASSEMBLY DLAServices;
USING MyServices = DLAnalytics.Services;

@raw =
    EXTRACT line string
    FROM @input_file
    USING new MyServices.FullLineExtractor();

@arrays =
    SELECT new SQL.ARRAY<string>(DLAnalytics.Demo.Tokenizer.Tokenize(2, line)) AS terms
    FROM @raw;

@map =
    SELECT 1 AS frequency,
           tmp.term.ToUpper() AS term
    FROM @arrays AS arrays
    CROSS APPLY EXPLODE(arrays.terms) AS tmp(term);

@reduce =
    SELECT map.term,
           SUM(map.frequency) AS frequency
    FROM @map AS map
    GROUP BY map.term
    HAVING SUM(map.frequency) >= 3;

@aggregates =
    SELECT Convert.ToDouble(AVG(frequency)) AS averageFrequency
    FROM @reduce;

@output =
    SELECT term,
           frequency,
           averageFrequency,
           Convert.ToDouble(frequency - averageFrequency) AS varianceFromAverage,
           DateTime.Now AS timestamp
    FROM @reduce CROSS JOIN @aggregates;

DECLARE @Year int = DateTime.Now.Year;
DECLARE @Month int = DateTime.Now.Month;
DECLARE @Day int = DateTime.Now.Day;

ALTER TABLE txt.termstats ADD IF NOT EXISTS PARTITION(@Year, @Month, @Day);

INSERT INTO txt.termstats
    (Term, Frequency, AverageFrequency, VarianceFromAverage, Timestamp)
PARTITION (@Year, @Month, @Day)
SELECT * FROM @output;
  • 70. | © Copyright 2015 Hitachi Consulting70 Running U-SQL Jobs on Azure
  • 71. | © Copyright 2015 Hitachi Consulting71 Azure U-SQL Jobs – Simplified Workflow
[Diagram: job submission through Job Front End > Job Scheduler > Compiler Service > Job Queue > Job Manager > YARN; the U-SQL Catalog holds metadata; the U-SQL Runtime performs vertex execution; the Resource Manager allocates resources]
 Max concurrent jobs = 50; max AUs per job = 120; max queue length = 200
 Parallelism = N Analytics Units (AUs); 1 AU ~= 2 cores and 6 GB of memory (so a 120-AU job can draw on roughly 240 cores and 720 GB of memory)
https://blogs.msdn.microsoft.com/azuredatalake/2016/10/12/azure-data-lake-analytics-more-compute-and-more-control
  • 72. | © Copyright 2015 Hitachi Consulting72 Azure U-SQL Jobs Monitoring Jobs running on Azure
  • 73. | © Copyright 2015 Hitachi Consulting73 Azure U-SQL Jobs – Job Status
Phases: Preparing > Queued > Running > Finalizing > Ended (Succeeded, Failed, Cancelled)
Detailed states: New > Compiling > Queued > Scheduling > Starting > Running > Ended
 Preparing/Compiling: the script is being compiled by the Compiler Service
 Queued: all jobs enter the queue; are there enough ADLAUs to start the job? If yes, those ADLAUs are allocated to the job
 Running: the U-SQL runtime is executing the code on one or more ADLAUs, or finalizing the outputs
 Ended: the job has concluded
  • 74. | © Copyright 2015 Hitachi Consulting74 Azure U-SQL Jobs Monitoring Jobs running on Azure Job files: system/jobservice/jobs/yyyy/MM/dd/jobid/..
  • 75. | © Copyright 2015 Hitachi Consulting75 Azure U-SQL Jobs Monitoring Jobs running on Azure
  • 76. | © Copyright 2015 Hitachi Consulting76 Azure U-SQL Jobs – Running U-SQL Jobs Using Azure Data Factory

{
  "name": "pipeline-dla-test",
  "properties": {
    "activities": [
      {
        "type": "DataLakeAnalyticsU-SQL",
        "typeProperties": {
          "scriptPath": "pipelines\\test\\process-data.usql",
          "scriptLinkedService": "StorageLinkedService",
          "degreeOfParallelism": 3,
          "priority": 100,
          "parameters": {
            "in": "/datalake/input/data-input.tsv",
            "out": "/datalake/output/data-output.tsv"
          }
        },

Parameters can be passed to the U-SQL script to be executed.
  • 77. | © Copyright 2015 Hitachi Consulting77 My Background – Applying Computational Intelligence in Data Mining
• Honorary Research Fellow, School of Computing, University of Kent
• Ph.D. Computer Science, University of Kent, Canterbury, UK
• M.Sc. Computer Science, The American University in Cairo, Egypt
• 25+ published journal and conference papers, focusing on: classification rules induction, decision tree construction, Bayesian classification modelling, data reduction, instance-based learning, evolving neural networks, and data clustering
• Journals: Swarm Intelligence; Swarm & Evolutionary Computation; Applied Soft Computing; Memetic Computing
• Conferences: ANTS, IEEE CEC, IEEE SIS, EvoBio, ECTA, IEEE WCCI and INNS-BigData
ResearchGate.org
  • 78. | © Copyright 2015 Hitachi Consulting78 Thank you!