Spark Performance
Past, Future, and Present
Kay Ousterhout
Joint work with Christopher Canel, Ryan Rasti, Sylvia Ratnasamy, Scott Shenker, Byung-Gon Chun
About Me
Apache Spark PMC Member
Recent PhD graduate from UC Berkeley
Thesis work on performance of large-scale data analytics
Co-founder at Kelda (kelda.io)
How can I make this faster?
Should I use a different cloud instance type?
Should I trade more CPU for less I/O by using better compression?
How can I make this faster? ???
Major performance improvements are possible via tuning and configuration
…if only you knew which knobs to turn
This talk
Past: Performance instrumentation in Spark
Future: New architecture that provides performance clarity
Present: Improving Spark's performance instrumentation
Example Spark Job: Word Count
Split the input file into words and emit a count of 1 for each; then, for each word, combine the counts and save the output.

spark.textFile("hdfs://…")
    .flatMap(lambda l: l.split(" "))
    .map(lambda w: (w, 1))
    .reduceByKey(lambda a, b: a + b)
    .saveAsTextFile("hdfs://…")
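For reference, a minimal runnable version of the same job; this is my own sketch, assuming a local Spark installation, with placeholder input/output paths:

    # Hypothetical standalone version of the word-count job above.
    # Paths are placeholders; swap in real HDFS or local paths.
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "wordcount")
    (sc.textFile("input.txt")
       .flatMap(lambda l: l.split(" "))
       .map(lambda w: (w, 1))
       .reduceByKey(lambda a, b: a + b)
       .saveAsTextFile("counts"))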
Spark Word Count Job:
Map Stage: split the input file into words and emit a count of 1 for each

spark.textFile("hdfs://…")
    .flatMap(lambda l: l.split(" "))
    .map(lambda w: (w, 1))

Reduce Stage: for each word, combine the counts and save the output

    .reduceByKey(lambda a, b: a + b)
    .saveAsTextFile("hdfs://…")

[Diagram: each stage runs as parallel tasks on Workers 1…n]
Spark Word Count Job:
Reduce Stage: for each word, combine the counts and save the output

    .reduceByKey(lambda a, b: a + b)
    .saveAsTextFile("hdfs://…")

[Diagram: reduce tasks running on Workers 1…n]
What happens in a reduce task?
[Timeline diagram across compute, network, and disk; each box = time to handle one shuffle block]
(1) Request a few shuffle blocks
(2) Process local data
(3) Write output to disk
(4) Process data fetched remotely
(5) Continue fetching remote data
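To make this fine-grained pipelining concrete, here is a toy sketch of a reduce task that overlaps fetching shuffle blocks with processing them. It is my own illustration of the pattern, not Spark's actual fetch code; the block reader and output writer are stubs:

    # Toy pipelined reduce task: a background thread fetches shuffle
    # blocks while the main thread processes them, so network and
    # compute overlap. Not Spark code; the helpers below are stubs.
    import queue, threading

    def read_remote_block(block_id):      # stand-in for a network fetch
        return [("word%d" % (block_id % 5), 1)] * 100

    def write_output(counts):             # stand-in for a disk write
        print(len(counts), "distinct words")

    def fetch_blocks(block_ids, q):
        for block_id in block_ids:
            q.put(read_remote_block(block_id))
        q.put(None)                       # sentinel: no more blocks

    def reduce_task(block_ids):
        q = queue.Queue(maxsize=4)        # only "a few" blocks in flight
        threading.Thread(target=fetch_blocks, args=(block_ids, q)).start()
        counts = {}
        while (block := q.get()) is not None:   # compute overlaps network
            for word, count in block:
                counts[word] = counts.get(word, 0) + count
        write_output(counts)              # write output to disk

    reduce_task(range(20))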
What happens in a reduce task?
[Same timeline, annotated: early on, bottlenecked on network and disk; then bottlenecked on network; finally bottlenecked on CPU]
What instrumentation exists today?
Instrumentation is centered on the single, main task thread.
[Timeline legend: shuffle read blocked time; executor computing time (!)]
What instrumentation exists today?
[Screenshots: the actual task timeline vs. the Spark UI's timeline version]
What instrumentation exists today?
Instrumentation is centered on the single, main task thread:
Shuffle read and shuffle write blocked time are instrumented.
Input read and output write blocked time are not instrumented (but possible to add!).
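The blocked times that do exist can be pulled per task from Spark's monitoring REST API. A sketch follows; the endpoint paths and metric field names are my reading of the documented API, so verify them against your Spark version:

    # Sketch: read per-task blocked-time metrics from the Spark REST API.
    # Endpoint paths and metric field names are assumptions to verify.
    import requests

    base = "http://localhost:4040/api/v1"   # driver UI; adjust host/port
    app = requests.get(f"{base}/applications").json()[0]["id"]

    for stage in requests.get(f"{base}/applications/{app}/stages").json():
        sid, attempt = stage["stageId"], stage["attemptId"]
        url = f"{base}/applications/{app}/stages/{sid}/{attempt}/taskList"
        for task in requests.get(url).json():
            m = task.get("taskMetrics", {})
            print(task["taskId"],
                  m.get("executorRunTime"),                              # computing time (!)
                  m.get("shuffleReadMetrics", {}).get("fetchWaitTime"),  # shuffle read blocked
                  m.get("shuffleWriteMetrics", {}).get("writeTime"))     # shuffle write blocked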
Instrumenting read and write time
[Diagram: compute, then "process shuffle block" as one disk write] This is a lie.
Reality: Spark processes and then writes one record at a time. Most writes get buffered; occasionally the buffer is flushed.
Challenges with reality:
Spark processes and then writes one record at a time; most writes get buffered, and occasionally the buffer is flushed.
Record-level instrumentation is too high overhead.
Spark doesn't know when buffers get flushed (HDFS does!).
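A toy illustration (plain Python, not Spark code) of why record-level timing misleads once writes are buffered:

    # With a buffered writer, timing each write() call attributes almost
    # no time to I/O: most calls just copy into the buffer, and the real
    # disk time hides in occasional flushes (and in the final close()).
    import io, time

    raw = io.FileIO("out.bin", "w")
    buffered = io.BufferedWriter(raw, buffer_size=1 << 20)

    timed_io = 0.0
    start = time.perf_counter()
    for i in range(100_000):
        record = f"record-{i}\n".encode()
        t0 = time.perf_counter()
        buffered.write(record)        # usually just a memcpy into the buffer
        timed_io += time.perf_counter() - t0
    buffered.close()                  # final flush happens here, untimed
    total = time.perf_counter() - start
    print(f"per-record 'I/O' time: {timed_io:.3f}s of {total:.3f}s total")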
Opportunities to improve instrumentation
Tasks use fine-grained pipelining to parallelize resources.
Instrumented times are blocked times only (the task is doing other things in the background).
This talk
Past: Performance instrumentation in Spark
Future: New architecture that provides performance clarity
Present: Improving Spark's performance instrumentation
[Timeline diagram: Tasks 1–8; 4 concurrent tasks on a worker]
[Same timeline] Concurrent tasks may contend for the same resource (e.g., network).
What's the bottleneck?
[Same timeline] At time t, different tasks may be bottlenecked on different resources; a single task may be bottlenecked on different resources at different times.
How much faster would my job be with 2x disk throughput?
[Same timeline] How would runtimes for these disk writes change? How would that change the timing of (and contention for) other resources?
Today: tasks use fine-grained pipelining to parallelize multiple resources.
Proposal: build systems using monotasks that each consume just one resource.
Monotasks: each task uses one resource.
Today's task becomes a network monotask, a compute monotask, and a disk monotask.
Monotasks don't start until all their dependencies complete: Task 1 = network read, then CPU, then disk write.
Dedicated schedulers control contention:
Network scheduler
CPU scheduler: 1 monotask / core
Disk drive scheduler: 1 monotask / disk
[Diagram: the monotasks for one of today's tasks flowing through the per-resource schedulers]
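A minimal sketch of this idea, in my own toy code rather than the monotasks implementation: one worker pool per resource, and a monotask enters its resource's queue only after its dependencies complete:

    # Toy per-resource schedulers: each resource has its own pool, and a
    # monotask is queued only once all dependencies finish, so a waiting
    # monotask never occupies a scheduler slot. The work below is stubbed.
    import threading
    from concurrent.futures import ThreadPoolExecutor, Future

    POOLS = {
        "network": ThreadPoolExecutor(max_workers=1),  # network scheduler
        "cpu": ThreadPoolExecutor(max_workers=4),      # 1 monotask / core
        "disk": ThreadPoolExecutor(max_workers=1),     # 1 monotask / disk
    }

    def submit(resource, work, deps=()):
        result = Future()
        remaining = len(deps)
        lock = threading.Lock()

        def run():
            try:
                result.set_result(work(*[d.result() for d in deps]))
            except Exception as e:
                result.set_exception(e)

        def dep_done(_):
            nonlocal remaining
            with lock:
                remaining -= 1
                ready = remaining == 0
            if ready:
                POOLS[resource].submit(run)

        if not deps:
            POOLS[resource].submit(run)
        else:
            for d in deps:
                d.add_done_callback(dep_done)
        return result

    # One of today's tasks, decomposed into three monotasks (stub work):
    fetch = submit("network", lambda: [("spark", 1), ("spark", 1)])
    counts = submit("cpu", lambda blocks: {"spark": sum(c for _, c in blocks)},
                    deps=(fetch,))
    done = submit("disk", print, deps=(counts,))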
Spark today: tasks have non-uniform resource use; 4 multi-resource tasks run concurrently.
Monotasks: single-resource monotasks scheduled by per-resource schedulers.
API-compatible, with performance parity with Spark. Performance telemetry becomes trivial!
How much faster would the job run if...
4x more machines?
Input stored in memory (no disk read, no CPU time to deserialize)?
Flash drives instead of disks (faster shuffle read/write time)?
10x improvements predicted with at most 23% error.
[Chart: predicted vs. actual job runtime for each what-if scenario]
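To give a flavor of how such predictions can be made, here is a back-of-the-envelope model; it is my own simplification, not the paper's predictor. With per-resource monotask times known, model a stage's runtime under ideal pipelining as the maximum of its per-resource totals, and rescale the resource in question:

    # Back-of-the-envelope what-if model (a simplification, not the
    # monotasks paper's predictor): a stage's runtime is the max of its
    # per-resource time totals, so a what-if just rescales one resource.
    def predicted_runtime(resource_seconds, speedups=None):
        speedups = speedups or {}
        return max(seconds / speedups.get(resource, 1.0)
                   for resource, seconds in resource_seconds.items())

    stage = {"cpu": 40.0, "network": 25.0, "disk": 60.0}   # made-up totals
    base = predicted_runtime(stage)                        # 60s: disk-bound
    faster = predicted_runtime(stage, {"disk": 2.0})       # 2x disk throughput
    print(f"{base:.0f}s -> {faster:.0f}s")                 # 60s -> 40s: now CPU-bound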
Monotasks: Break jobs into single-resource tasks
Using single-resource monotasks provides clarity
without sacrificing performance
Massive change to Spark internals (>20K lines of code)
This talk
Past: Performance instrumentation in Spark
Future: New architecture that provides performance clarity
Present: Improving Spark's performance instrumentation
Spark today: task resource use changes at fine time granularity; 4 multi-resource tasks run concurrently.
Monotasks: single-resource tasks lead to complete, trivial performance metrics.
Can we get monotask-like per-resource metrics for each task today?
Can we get per-task resource use?
[Per-task timeline: compute, network, disk]
Measure machine resource utilization?
[Timeline diagram, as before: Tasks 1–8; 4 concurrent tasks on a worker]
Concurrent tasks may contend for the same resource (e.g., network); contention is controlled by lower layers (e.g., the operating system).
Can we get per-task resource use?
[Per-task timeline: compute, network, disk]
Machine utilization includes other tasks.
We can't directly measure per-task I/O: it happens in the background, mixed with other tasks.
Can we get per-task resource use?
[Per-task timeline: compute, network, disk]
Existing metrics give the total data read, but how long did it take? Use machine utilization metrics to get bandwidth!
Existing per-task I/O counters (e.g., shuffle bytes read)
+
Machine-level utilization (and bandwidth) metrics
=
Complete metrics about time spent using each resource
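A sketch of that arithmetic, with made-up inputs: the bytes would come from Spark's existing per-task counters, and the bandwidths from machine-level utilization metrics sampled while the task ran:

    # Combine per-task I/O counters with machine-level bandwidth to
    # estimate per-resource time. All inputs here are made up.
    def per_resource_seconds(task_bytes, machine_bandwidth):
        # task_bytes: bytes this task moved, per resource (Spark counters)
        # machine_bandwidth: measured bytes/sec per resource on the worker
        return {resource: moved / machine_bandwidth[resource]
                for resource, moved in task_bytes.items()}

    task = {"disk": 512 * 1024**2, "network": 128 * 1024**2}       # bytes
    bandwidth = {"disk": 200 * 1024**2, "network": 400 * 1024**2}  # bytes/sec
    print(per_resource_seconds(task, bandwidth))
    # -> {'disk': 2.56, 'network': 0.32}: estimated seconds per resource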
Why do we care about performance clarity?
Goal: provide performance clarity. The only way to improve performance is to know what to speed up.
Typical performance evaluation: a group of experts. Performance in practice: one novice.
Goal: provide performance clarity. The only way to improve performance is to know what to speed up.
Some instrumentation exists already; it focuses on blocked times in the main task thread.
Many opportunities to improve instrumentation:
(1) Add read/write instrumentation at a lower level (e.g., HDFS)
(2) Add machine-level utilization info
(3) Calculate per-resource time
More details at kayousterhout.org

  • 45. Goal: provide performance clarity Only way to improve performance is to know what to speed up Some instrumentation exists already Focuses on blocked times in the main task thread Many opportunities to improve instrumentation (1) Add read/write instrumentation to lower level (e.g., HDFS) (2) Add machine-level utilization info (3) Calculate per-resource time More details at kayousterhout.org