MapReduce

Big Data and Hadoop Training
Agenda
• Meet MapReduce
• Word Count Algorithm – Traditional approach
• Traditional approach on a Distributed System
• Traditional approach – Drawbacks
• MapReduce Approach
• Input & Output Forms of a MR program
• Map, Shuffle & Sort, Reduce Phase
• WordCount Code walkthrough
• Workflow & Transformation of Data
• Input Split & HDFS Block
• Relation between Split & Block
• Data locality Optimization
• Speculative Execution
• MR Flow with Single Reduce Task
• MR flow with multiple Reducers
• Input Format & Hierarchy
• Output Format & Hierarchy
Meet MapReduce
• MapReduce is a programming model for distributed processing.
• Its main advantage is easy scaling of data processing over multiple computing nodes.
• The basic entities in this model are mappers and reducers.
• Decomposing a data processing application into mappers and reducers is the developer's task.
• Once an application is written in the MapReduce form, scaling it to run over hundreds, thousands, or even tens of thousands of machines in a cluster is merely a configuration change.
WordCount – Traditional Approach
• Input: do as I say not as I do
• Output:
Word Count
as 2
do 2
I 2
not 1
say 1
WordCount – Traditional Approach
define wordCount as Multiset;
for each document in documentSet {
    T = tokenize(document);
    for each token in T {
        wordCount[token]++;
    }
}
display(wordCount);
Traditional Approach – Distributed Processing
Phase 1 (runs on each machine, over its subset of documents):
define wordCount as Multiset;
for each document in documentSubset {
    <same per-document counting code as in the previous slide>
}
sendToSecondPhase(wordCount);

Phase 2 (aggregation):
define totalWordCount as Multiset;
for each wordCount received from firstPhase {
    multisetAdd(totalWordCount, wordCount);
}
Traditional Approach – Drawbacks
• Central storage: the storage server's bandwidth becomes a bottleneck.
• Multiple storage locations: splitting the documents across machines and tracking those splits becomes the application's problem.
• The program keeps its counts in memory: when processing large document sets, the number of unique words can exceed a single machine's RAM.
• Can phase 2 be handled by one machine?
• If multiple machines are used for phase 2, how should the data be partitioned among them?
MapReduce Approach
• Has two execution phases: mapping and reducing.
• These phases are defined by data processing functions called the mapper and the reducer.
• Mapping phase: MapReduce takes the input data and feeds each data element to the mapper.
• Reducing phase: the reducer processes all the outputs from the mapper and arrives at a final result.
Input & Output Forms
• In order for mapping, reducing, partitioning, and shuffling (and a few other steps not listed here) to work together seamlessly, we need to agree on a common structure for the data being processed.
• The InputFormat class is responsible for creating input splits and dividing them into records.

            Input               Output
map()       <k1, v1>            list(<k2, v2>)
reduce()    <k2, list(v2)>      list(<k3, v3>)
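For the word-count input shown earlier, these forms work out roughly as follows (a sketch assuming the default TextInputFormat, whose map() input key is the byte offset of the line within the file):

map()    input:  <0, "do as I say not as I do">
         output: <"do",1>, <"as",1>, <"I",1>, <"say",1>, <"not",1>, <"as",1>, <"I",1>, <"do",1>
reduce() input:  <"as", list(1,1)>, <"do", list(1,1)>, <"I", list(1,1)>, <"not", list(1)>, <"say", list(1)>
         output: <"as", 2>, <"do", 2>, <"I", 2>, <"not", 1>, <"say", 1>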
Map Phase

Reduce Phase

Shuffle & Sort Phase
MR - Workflow & Transformation of Data
• From the input files to the mapper
• From the mapper to the intermediate results
• From the intermediate results to the reducer
• From the reducer to the output files
Word Count: Source Code
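The original slide shows the source code as an image. Below is a minimal sketch of the classic Hadoop WordCount program (mapper, reducer, and driver), following the Apache MapReduce tutorial linked at the end of this deck:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: <byte offset, line of text> -> list(<word, 1>)
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: <word, list of 1s> -> <word, total count>
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  // Driver: configures and submits the job
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}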
Input Split & HDFS Block
• Both are chunks of the input data.
• HDFS block: the physical division of the data.
• Input split: the logical division of the data (the chunk processed by one map task).
Relation Between Input Split & HDFS Block
[Figure: a file of ten lines laid out across four HDFS block boundaries and grouped into three input splits; some lines cross block boundaries.]
• By default a split is sized to an HDFS block, but the two do not have to align exactly.
• Logical records (here, lines) do not fit neatly into HDFS blocks: a line can cross a block boundary.
• The first split therefore contains line 5 in its entirety, even though that line spans two blocks.
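Split sizing can also be tuned per job. A minimal sketch using the new-API FileInputFormat helpers (the byte values are illustrative, not from the slides):

import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Ask for splits no larger than 128 MB and no smaller than 64 MB (example values)
FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);
FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);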
Data Locality Optimization
• An MR job is split into multiple map and reduce tasks.
• Each map task runs on one input split.
• Ideally, the task JVM is launched on the node where that split's block of data already resides.
• In some scenarios, however, that node has no free slots to accept another task.
• In that case, the task is scheduled on a Task Tracker at a different location.
• Scenario a) Same-node execution
• Scenario b) Off-node (same-rack) execution
• Scenario c) Off-rack execution
Speculative Execution
• An MR job is split into multiple map and reduce tasks, and they execute in parallel.
• The overall job execution time is therefore dictated by the slowest task.
• Hadoop doesn't try to diagnose and fix slow-running tasks; instead, it tries to detect when a task is running slower than expected and launches another, equivalent task as a backup. This is termed speculative execution of tasks.
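Speculative execution is enabled by default and can be toggled per job. A minimal sketch, assuming the Hadoop 2.x (MRv2) property names:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = new Configuration();
// Disable speculative backup attempts for map tasks
conf.setBoolean("mapreduce.map.speculative", false);
// Keep speculative execution enabled for reduce tasks
conf.setBoolean("mapreduce.reduce.speculative", true);
Job job = Job.getInstance(conf, "my job");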
MapReduce Dataflow With A Single Reduce Task
MapReduce Dataflow With Multiple Reduce Tasks

MapReduce Dataflow With No Reduce Tasks
Combiner
• A combiner is a mini-reducer.
• It is executed on the mapper output, on the mapper side.
• The combiner's output is fed to the reducer.
• Because the mapper output is pre-aggregated by the combiner, the amount of data that has to be shuffled across the cluster is minimized.
• The combiner function is an optimization: Hadoop provides no guarantee of how many times it will call it for a particular map output record, if at all.
• So calling the combiner function zero, one, or many times must produce the same output from the reducer.
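In the Java API the combiner is simply a Reducer class registered on the job. For word count, the reducer itself can double as the combiner (a minimal sketch reusing the classes from the WordCount code above):

job.setMapperClass(TokenizerMapper.class);
// Summation is commutative and associative, so the reducer can also act as the combiner
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);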
Combiner’s Contract
• Only reduce functions that are commutative and associative can safely be used as combiners.
• For example, max can:
  max(0, 20, 10, 25, 15) = max(max(0, 20, 10), max(25, 15)) = max(20, 25) = 25
  whereas mean cannot:
  mean(0, 20, 10, 25, 15) = 14, but
  mean(mean(0, 20, 10), mean(25, 15)) = mean(10, 20) = 15
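A common workaround for averaging (not covered on the slide) is to make the intermediate values associative by emitting partial (sum, count) pairs and dividing only in the reducer:

partial from mapper 1: (sum = 30, count = 3)   from values 0, 20, 10
partial from mapper 2: (sum = 40, count = 2)   from values 25, 15
combined:              (sum = 70, count = 5)   →  mean = 70 / 5 = 14   (correct)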
Partitioner
• All key/value pairs sharing the same key always go to the same reducer.
• The partitioner is responsible for deciding which reducer a key/value pair is sent to, based on the key.
• The default partitioner is the HashPartitioner: it takes the mapper output, computes a hash value for each key, and takes that value modulo the number of reducers. The result of this calculation determines the reducer that the key (and all of its values) goes to.
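A custom partitioner only needs to implement getPartition(). A minimal sketch that mirrors the default hash-modulo behaviour (class name is illustrative):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class WordPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numReduceTasks) {
    // Hash the key, mask off the sign bit, take the result modulo the reducer count
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}

// Registered on the job with:
// job.setPartitionerClass(WordPartitioner.class);
// job.setNumReduceTasks(3);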
Partitioner
[Figure: the output of each mapper passes through a partitioner, which routes every key to one of the reducers.]
InputFormat Hierarchy
InputFormat
[Figure: the InputFormat divides the input into input splits, one per map task; each split is read by a RecordReader, which feeds its records to the corresponding mapper.]
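The input and output formats are configured on the job. A minimal sketch (TextInputFormat and TextOutputFormat are the defaults; the calls are shown only to make the configuration explicit):

import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// TextInputFormat: key = byte offset of the line, value = the line itself
job.setInputFormatClass(TextInputFormat.class);
// TextOutputFormat: writes "key <TAB> value" lines to the output files
job.setOutputFormatClass(TextOutputFormat.class);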
OutputFormat
[Figure: each reducer writes its results through a RecordWriter to its own output file.]
OutputFormat Hierarchy
Counters
• Counters are a useful channel for gathering statistics about a job, whether for quality control or for application-level statistics.
• They are often used for debugging purposes.
• e.g., counting the number of good records and bad records in the input.
• Two types: built-in and custom counters.
• Examples of built-in counters:
  • Map input records
  • Map output records
  • Filesystem bytes read
  • Launched map tasks
  • Failed map tasks
  • Killed reduce tasks
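Custom counters are typically defined with a Java enum and incremented from the mapper or reducer. A minimal sketch (the enum, its constants, and the isWellFormed() helper are illustrative, not from the slides):

// Counter group and names are derived from the enum class and its constants
enum RecordQuality { GOOD_RECORDS, BAD_RECORDS }

public void map(LongWritable key, Text value, Context context)
    throws IOException, InterruptedException {
  if (isWellFormed(value)) {   // isWellFormed() is a hypothetical validation helper
    context.getCounter(RecordQuality.GOOD_RECORDS).increment(1);
    // ... normal processing and context.write(...) ...
  } else {
    context.getCounter(RecordQuality.BAD_RECORDS).increment(1);
  }
}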
Joins
• Map-side join (replication join): a map-side join that works when one of the datasets is small enough to be cached on every mapper.
• Reduce-side join (repartition join): a reduce-side join for situations where you're joining two or more large datasets together.
• Semi-join: another map-side join, where one dataset is initially too large to fit into memory but, after some filtering, can be reduced to a size that fits in memory.
Distributed Cache
• Side data is extra read-only data needed by a job to process the main dataset.
• To make side data available to all map or reduce tasks, we distribute it using Hadoop's distributed cache mechanism.
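A minimal sketch of the MRv2 distributed-cache API (the file path is illustrative): the driver registers the side-data file, and each task loads its local copy once in setup():

import java.net.URI;

// Driver (inside a main() that declares "throws Exception"):
job.addCacheFile(new URI("/lookup/side-data.txt"));   // path is an example

// Mapper or reducer:
@Override
protected void setup(Context context) throws IOException, InterruptedException {
  URI[] cacheFiles = context.getCacheFiles();
  // ... open cacheFiles[0] with local file I/O and load it into an in-memory map ...
}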
Map Join (Using Distributed Cache)
Some Useful Links:
• http://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html
• http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
Thank You