Robert Xue
Software Engineer, Groupon
CONTINUOUS APPLICATION
WITH FAIR SCHEDULER
About me
Groupon Online Defense
• Malicious behavior
detection
• A log stream processing
engine
Tech stack
• Kafka
• Storm
• Spark on Yarn
• In-house hardware
Software engineer, Behavioral Modeling team
Scheduler
• Code name: SchedulableBuilder
• FIFO and FAIR.
• Update spark.scheduler.mode to switch
Job pool scheduling mode
• Code name SchedulingAlgorithm
• FIFO and FAIR, applies to FAIR
scheduler only
• Update fairscheduler.xml to decide
Application
• Created by spark-submit
Job
• A group of tasks
• Unit of work to be submitted
Task
• Unit of work to be scheduled
Glossary
What does a scheduler do?
• Determine who can use the resources
• Tweak to optimize
‣ Total execution time
‣ Resource utilization
‣ Total CPU seconds
When does your scheduler work?
• Only when more than one job is submitted to the Spark context
• Writing your RDD operations line by line != multiple
jobs submitted at the same time
Range(0, 24).foreach { i =>
  sc.textFile(s"file-$i").count()
}
Run - 1
Sequentially submitting 24 file-reading jobs
• 24 files, ~500 MB each, 4 partitions
• 8 cores
• 110 seconds
Range(0, 24).foreach { i =>
  sc.textFile(s"file-$i").count()
}
Node_local - Rack_local - Any - Unused
• 8 * 110 * 11.33% = 99.68
CPU seconds
• ~20 individual jobs clearly
visible
Submit your job in parallel
• Java Runnable
• Python threading
• Scala Future
• Scalaz Task
• Monix Task
Range(0, 24).foreach { i =>
  sc.textFile(s"file-$i").count()
}
Range(0, 24).par.foreach { i =>
  sc.textFile(s"file-$i").count()
}
Run - 2
Parallel submit & FIFO(24)
• 8 cores
• 110 → 22 seconds
• 8 * 22 * 82.04% = 144.32
CPU seconds (was 99.68)
• 15-second 100%
utilization period (was 0)
Range(0, 24).par.foreach { i =>
  sc.textFile(s"file-$i").count()
}
Node_local - Rack_local - Any - Unused
Turn on FAIR scheduler
Provide --conf spark.scheduler.mode=FAIR in your
spark-submit…
… it is that simple
Run - 3
Parallel submit & FAIR(24)
• 8 cores
• 22 → 15 seconds
• 8 * 15 * 73.74% = 88.48
CPU seconds (was 144.32;
a record)
• Great locality (was bad)
Node_local - Rack_local - Any - Unused
Range(0, 24).par.foreach { i =>
  sc.textFile(s"file-$i").count()
}
Tweak your locality config
Provide --conf spark.locality.wait=1s in your
spark-submit…
• spark.locality.wait.process/node/rack available as
well
• the default is 3s
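Putting this flag together with the FAIR flag from the earlier slide, a launch command might look like the sketch below; the class name and jar are placeholders, not from the talk:

```shell
spark-submit \
  --master yarn \
  --conf spark.scheduler.mode=FAIR \
  --conf spark.locality.wait=1s \
  --class com.example.ContinuousApp \
  continuous-app.jar
```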
Run - 4
Parallel submit & FAIR(24) & locality.wait=1s
• 8 cores
• 15 → 12 seconds
(a record)
• 8 * 12 * 97.64% = 93.68
CPU seconds (was 88.48)
Node_local - Rack_local - Any - Unused
Range(0, 24).par.foreach { i =>
  sc.textFile(s"file-$i").count()
}
Summary
• Sequential: poor cluster utilization, long execution time, locality hard to tweak; default behavior in spark-shell
• Parallel {FIFO}: good cluster utilization, short execution time, locality hard to tweak; default behavior in notebooks
• Parallel {FAIR}: good cluster utilization, short execution time, locality easy to tweak
Standalone applications
Stream, Batch, Ad-hoc query, submitted from multiple apps
Read data
Model
Execution
Stream
Process
Reporting
Continuous application
Stream, Batch, Ad-hoc query, submitted in parallel in one app
Read data ◼
Model
Execution ◼
Stream
Process ◼
Reporting ◼
Multiple standalone applications vs. continuous application
• Sharing context: share file content using a database,
file system, network, etc. vs. simply pass an RDD reference
around
• Resource allocation: static, configure more cores than the
typical load to handle peak traffic vs. dynamic, less important
tasks yield CPU to critical tasks
• Job scheduling: crontab, Oozie, Airflow… all approaches
lead to spark-submit, typically a 20s - 120s overhead, vs.
spark-submit once and only once
Continuous Application
At t=2.5, Series A and B finished processing the stream input from t=1.0
A round of model execution was triggered at t=2.5
The model was triggered using all data available at that moment
At t=3.1, the app received stream input from t=3.0
Series A was still processing the input from t=2.0
Series B started processing the new input.
At t=3.6, Series A finished processing the stream input from t=2.0
Series A started processing the stream input from t=3.0
The model had a pending execution triggered at t=3.6
At t=4.3, Series A and B finished processing the stream input from t=3.0
The model had a pending execution triggered at t=4.3
The model's execution triggered at t=3.6 was cancelled
At t=4.6, the model finished the execution started at t=2.5
Series B finished processing the stream input from t=4.0
The model started the execution that was triggered at t=4.3
The model used the data available at t=4.6 as input
Example:
4 models executing in parallel from a 16-core app
Practices for Continuous Application
Decouple stream processing from batch processing
• The batch interval is merely your minimum resolution
Emphasize tuning for the streaming part
• Assign it to a scheduler pool with a minimum core guarantee
Execute only the latest batch invocation
• Assign it to a scheduler pool with high weight
Reporting and on-site queries
• Assign them to a scheduler pool with low weight
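The pool assignments above can be sketched in a fairscheduler.xml; the pool names, weights, and minShare values here are illustrative assumptions, not numbers from the talk:

```xml
<?xml version="1.0"?>
<allocations>
  <!-- streaming pool: minimum core guarantee -->
  <pool name="streaming">
    <schedulingMode>FAIR</schedulingMode>
    <minShare>4</minShare>
    <weight>2</weight>
  </pool>
  <!-- batch/model pool: high weight -->
  <pool name="model">
    <schedulingMode>FAIR</schedulingMode>
    <minShare>0</minShare>
    <weight>4</weight>
  </pool>
  <!-- reporting/ad-hoc pool: low weight -->
  <pool name="reporting">
    <schedulingMode>FAIR</schedulingMode>
    <minShare>0</minShare>
    <weight>1</weight>
  </pool>
</allocations>
```

A submitting thread opts into a pool with sc.setLocalProperty("spark.scheduler.pool", "streaming").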
Summary
Coding
• Share data by passing RDD/DStream/DF/DS
• Always launch jobs in a separate thread
• No complex logic in the streaming operation
• Push batch job invocation into a queue
• Execute only the latest batch job
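"Push batch job invocation into a queue, execute only the latest" can be sketched with a one-slot queue that newer triggers overwrite; the class and method names are hypothetical, not code from the talk:

```scala
import java.util.concurrent.atomic.AtomicReference

// At most one pending batch invocation; a newer trigger replaces a stale one.
class LatestOnlyRunner {
  private val pending = new AtomicReference[Option[() => Unit]](None)

  // Request a batch run; any not-yet-started request is overwritten.
  def trigger(job: () => Unit): Unit = pending.set(Some(job))

  // Worker-loop entry point: take the latest request, if any, and run it.
  def runPending(): Boolean = pending.getAndSet(None) match {
    case Some(job) => job(); true
    case None => false
  }
}
```

A dedicated thread calling runPending() in a loop gives the cancellation behavior from the timeline slides: an execution triggered at t=3.6 is silently dropped once a t=4.3 trigger supersedes it.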
App submission
• Turn on FAIR scheduler mode
• Provide fairscheduler.xml
• Turn off stream back pressure for streaming apps
• Turn off dynamic allocation for streaming apps
• Turn on dynamic allocation for long-lived batch apps
Job bootstrapping
• sc.setJobGroup("group", "description")
• sc.setLocalProperty("spark.scheduler.pool", "pool")
• rdd.setName("my data at t=0").persist()
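Put together, bootstrapping one job series on its own thread might look like this sketch; it assumes an existing SparkContext sc, and the pool name and input path are hypothetical:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// One dedicated thread per job series; scheduler-pool properties are
// thread-local, so they will not leak into other series.
implicit val ec: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor())

Future {
  sc.setJobGroup("modelExecution", "periodic model run") // visible in the Spark UI
  sc.setLocalProperty("spark.scheduler.pool", "model")   // FAIR pool assignment
  val data = sc.textFile("hdfs:///data/latest")
    .setName("my data at t=0")
    .persist()
  data.count() // materialize the cache before downstream jobs reuse it
}
```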
Packaging
• Multiple logic, one repo, one jar
• Batch app can be long-lived as well
• Replace crontab with continuous app + REST
Tuning
• Understand your job’s opportunity cost
• Tweak spark.locality.wait parameters
• Configure a core count that yields an acceptable SLA
with good resource utilization
Thank You.
Robert Xue, Software Engineer, Groupon
rxue@groupon.com | @roboxue
Tool used in this deck: groupon/sparklint
Open-sourced on GitHub since Spark Summit EU 2016
Q & A
Why do you need a scheduler - 3
Submitting 24 file-reading jobs in parallel - FIFO(4)
• 8 cores
• 22 → 27 seconds
• 8 * 27 * 46.74% = 100.88
CPU seconds (was 99.68)
• Improved locality (was bad)
implicit val ec =
  ExecutionContext.fromExecutor(
    new ForkJoinPool(4))
Range(0, 24).foreach(i => Future {
  sc.textFile(s"file-$i").count()
})
Node_local - Rack_local - Any - Unused
Why do you need a scheduler - 4
Submitting 24 file-reading jobs in parallel - FAIR(4)
• 8 cores
• 22 → 32 seconds
• 8 * 32 * 40.92% = 104.72
CPU seconds (was 99.68)
• Great locality (was bad)
// --conf spark.scheduler.mode=FAIR
implicit val ec =
  ExecutionContext.fromExecutor(
    new ForkJoinPool(4))
Range(0, 24).foreach(i => Future {
  sc.textFile(s"file-$i").count()
})
Node_local - Rack_local - Any - Unused
Backup:
locality settings matter
spark.scheduler.mode=FAIR
spark.locality.wait=500ms
spark.scheduler.mode=FAIR
spark.locality.wait=20s
Node_local - Rack_local - Any - Unused
Backup:
Scheduler summary
FIFO scheduler
• Sequential job submit: under-utilized cluster, good locality, low core usage, long execution time
• Parallel job submit, under-parallelized: under-utilized cluster, poor locality, high core usage, short execution time
• Parallel job submit, perfectly parallelized: well-utilized cluster, poor locality, high core usage, short execution time
• Parallel job submit, over-parallelized / poor locality settings: well-utilized cluster, poor locality, high core usage, long execution time
FAIR scheduler
• Sequential job submit: N/A
• Parallel job submit, under-parallelized: under-utilized cluster, good locality, low core usage, short execution time
• Parallel job submit, perfectly or over-parallelized: well-utilized cluster, good locality, low core usage, short execution time
Node_local - Rack_local - Any - Unused
Backup:
What's wrong with Dynamic Allocation in streaming
… if you are analyzing time series data / "tensor" …
- Streaming app, always up. Workloads come periodically and sequentially.
- The core usage graph has a saw-tooth pattern
- Executors are less likely to be returned if your batch interval <
executorIdleTimeout
- Dynamic allocation is effectively off if an executor holds cached data
- "by default executors containing cached data are never removed"
- Dynamic allocation == "resource blackhole" ++ "poor utilization"
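The streaming-vs-batch recommendations from the app submission slide map to standard Spark configuration keys; the values below are illustrative, not tuned numbers from the talk:

```properties
# Streaming app: fixed resources, no surprises from scaling or back pressure
spark.dynamicAllocation.enabled        false
spark.streaming.backpressure.enabled   false

# Long-lived batch app: let executors come and go between workloads
spark.dynamicAllocation.enabled              true
spark.shuffle.service.enabled                true
spark.dynamicAllocation.executorIdleTimeout  60s
```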
Weitere ähnliche Inhalte

Was ist angesagt?

Was ist angesagt? (20)

Spark shuffle introduction
Spark shuffle introductionSpark shuffle introduction
Spark shuffle introduction
 
Improving Apache Spark Downscaling
 Improving Apache Spark Downscaling Improving Apache Spark Downscaling
Improving Apache Spark Downscaling
 
Apache Spark in Depth: Core Concepts, Architecture & Internals
Apache Spark in Depth: Core Concepts, Architecture & InternalsApache Spark in Depth: Core Concepts, Architecture & Internals
Apache Spark in Depth: Core Concepts, Architecture & Internals
 
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the CloudAmazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud
 
Optimizing Apache Spark SQL Joins
Optimizing Apache Spark SQL JoinsOptimizing Apache Spark SQL Joins
Optimizing Apache Spark SQL Joins
 
Tuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital Kedia
Tuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital KediaTuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital Kedia
Tuning Apache Spark for Large-Scale Workloads Gaoxiang Liu and Sital Kedia
 
Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...
Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...
Improving SparkSQL Performance by 30%: How We Optimize Parquet Pushdown and P...
 
Cosco: An Efficient Facebook-Scale Shuffle Service
Cosco: An Efficient Facebook-Scale Shuffle ServiceCosco: An Efficient Facebook-Scale Shuffle Service
Cosco: An Efficient Facebook-Scale Shuffle Service
 
Introduction to apache spark
Introduction to apache spark Introduction to apache spark
Introduction to apache spark
 
Apache Spark Core—Deep Dive—Proper Optimization
Apache Spark Core—Deep Dive—Proper OptimizationApache Spark Core—Deep Dive—Proper Optimization
Apache Spark Core—Deep Dive—Proper Optimization
 
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen SinkRedis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
 
Apache Spark overview
Apache Spark overviewApache Spark overview
Apache Spark overview
 
Why your Spark Job is Failing
Why your Spark Job is FailingWhy your Spark Job is Failing
Why your Spark Job is Failing
 
Extreme Apache Spark: how in 3 months we created a pipeline that can process ...
Extreme Apache Spark: how in 3 months we created a pipeline that can process ...Extreme Apache Spark: how in 3 months we created a pipeline that can process ...
Extreme Apache Spark: how in 3 months we created a pipeline that can process ...
 
Deep Dive into the New Features of Apache Spark 3.0
Deep Dive into the New Features of Apache Spark 3.0Deep Dive into the New Features of Apache Spark 3.0
Deep Dive into the New Features of Apache Spark 3.0
 
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S...
 
Dynamic Partition Pruning in Apache Spark
Dynamic Partition Pruning in Apache SparkDynamic Partition Pruning in Apache Spark
Dynamic Partition Pruning in Apache Spark
 
Top 5 Mistakes When Writing Spark Applications
Top 5 Mistakes When Writing Spark ApplicationsTop 5 Mistakes When Writing Spark Applications
Top 5 Mistakes When Writing Spark Applications
 
Scaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on KubernetesScaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on Kubernetes
 
A Deep Dive into Query Execution Engine of Spark SQL
A Deep Dive into Query Execution Engine of Spark SQLA Deep Dive into Query Execution Engine of Spark SQL
A Deep Dive into Query Execution Engine of Spark SQL
 

Ähnlich wie Continuous Application with FAIR Scheduler with Robert Xue

OGG Architecture Performance
OGG Architecture PerformanceOGG Architecture Performance
OGG Architecture Performance
Enkitec
 
Oracle GoldenGate Architecture Performance
Oracle GoldenGate Architecture PerformanceOracle GoldenGate Architecture Performance
Oracle GoldenGate Architecture Performance
Enkitec
 
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RACPerformance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Kristofferson A
 

Ähnlich wie Continuous Application with FAIR Scheduler with Robert Xue (20)

Groovy concurrency
Groovy concurrencyGroovy concurrency
Groovy concurrency
 
Concurrency
ConcurrencyConcurrency
Concurrency
 
Benchmarking Solr Performance at Scale
Benchmarking Solr Performance at ScaleBenchmarking Solr Performance at Scale
Benchmarking Solr Performance at Scale
 
Spark Summit EU talk by Sital Kedia
Spark Summit EU talk by Sital KediaSpark Summit EU talk by Sital Kedia
Spark Summit EU talk by Sital Kedia
 
Scaling ingest pipelines with high performance computing principles - Rajiv K...
Scaling ingest pipelines with high performance computing principles - Rajiv K...Scaling ingest pipelines with high performance computing principles - Rajiv K...
Scaling ingest pipelines with high performance computing principles - Rajiv K...
 
Ingesting hdfs intosolrusingsparktrimmed
Ingesting hdfs intosolrusingsparktrimmedIngesting hdfs intosolrusingsparktrimmed
Ingesting hdfs intosolrusingsparktrimmed
 
Performance van Java 8 en verder - Jeroen Borgers
Performance van Java 8 en verder - Jeroen BorgersPerformance van Java 8 en verder - Jeroen Borgers
Performance van Java 8 en verder - Jeroen Borgers
 
ETL with SPARK - First Spark London meetup
ETL with SPARK - First Spark London meetupETL with SPARK - First Spark London meetup
ETL with SPARK - First Spark London meetup
 
Async and parallel patterns and application design - TechDays2013 NL
Async and parallel patterns and application design - TechDays2013 NLAsync and parallel patterns and application design - TechDays2013 NL
Async and parallel patterns and application design - TechDays2013 NL
 
OGG Architecture Performance
OGG Architecture PerformanceOGG Architecture Performance
OGG Architecture Performance
 
Oracle GoldenGate Presentation from OTN Virtual Technology Summit - 7/9/14 (PDF)
Oracle GoldenGate Presentation from OTN Virtual Technology Summit - 7/9/14 (PDF)Oracle GoldenGate Presentation from OTN Virtual Technology Summit - 7/9/14 (PDF)
Oracle GoldenGate Presentation from OTN Virtual Technology Summit - 7/9/14 (PDF)
 
Oracle GoldenGate Architecture Performance
Oracle GoldenGate Architecture PerformanceOracle GoldenGate Architecture Performance
Oracle GoldenGate Architecture Performance
 
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RACPerformance Scenario: Diagnosing and resolving sudden slow down on two node RAC
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC
 
Building source code level profiler for C++.pdf
Building source code level profiler for C++.pdfBuilding source code level profiler for C++.pdf
Building source code level profiler for C++.pdf
 
.NET Multithreading/Multitasking
.NET Multithreading/Multitasking.NET Multithreading/Multitasking
.NET Multithreading/Multitasking
 
Stream processing from single node to a cluster
Stream processing from single node to a clusterStream processing from single node to a cluster
Stream processing from single node to a cluster
 
Shooting the Rapids
Shooting the RapidsShooting the Rapids
Shooting the Rapids
 
Performance Benchmarking: Tips, Tricks, and Lessons Learned
Performance Benchmarking: Tips, Tricks, and Lessons LearnedPerformance Benchmarking: Tips, Tricks, and Lessons Learned
Performance Benchmarking: Tips, Tricks, and Lessons Learned
 
Speed up UDFs with GPUs using the RAPIDS Accelerator
Speed up UDFs with GPUs using the RAPIDS AcceleratorSpeed up UDFs with GPUs using the RAPIDS Accelerator
Speed up UDFs with GPUs using the RAPIDS Accelerator
 
Spark Summit EU talk by Luca Canali
Spark Summit EU talk by Luca CanaliSpark Summit EU talk by Luca Canali
Spark Summit EU talk by Luca Canali
 

Mehr von Databricks

Democratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized PlatformDemocratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized Platform
Databricks
 
Stage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI IntegrationStage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI Integration
Databricks
 
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorchSimplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Databricks
 
Raven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction QueriesRaven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction Queries
Databricks
 
Processing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache SparkProcessing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache Spark
Databricks
 
Jeeves Grows Up: An AI Chatbot for Performance and Quality
Jeeves Grows Up: An AI Chatbot for Performance and QualityJeeves Grows Up: An AI Chatbot for Performance and Quality
Jeeves Grows Up: An AI Chatbot for Performance and Quality
Databricks
 

Mehr von Databricks (20)

DW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptxDW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptx
 
Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1
 
Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2
 
Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2
 
Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4
 
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
 
Democratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized PlatformDemocratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized Platform
 
Learn to Use Databricks for Data Science
Learn to Use Databricks for Data ScienceLearn to Use Databricks for Data Science
Learn to Use Databricks for Data Science
 
Why APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML MonitoringWhy APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML Monitoring
 
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch FixThe Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
 
Stage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI IntegrationStage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI Integration
 
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorchSimplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorch
 
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark PipelinesScaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
 
Sawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature AggregationsSawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature Aggregations
 
Re-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and SparkRe-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and Spark
 
Raven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction QueriesRaven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction Queries
 
Processing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache SparkProcessing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache Spark
 
Massive Data Processing in Adobe Using Delta Lake
Massive Data Processing in Adobe Using Delta LakeMassive Data Processing in Adobe Using Delta Lake
Massive Data Processing in Adobe Using Delta Lake
 
Machine Learning CI/CD for Email Attack Detection
Machine Learning CI/CD for Email Attack DetectionMachine Learning CI/CD for Email Attack Detection
Machine Learning CI/CD for Email Attack Detection
 
Jeeves Grows Up: An AI Chatbot for Performance and Quality
Jeeves Grows Up: An AI Chatbot for Performance and QualityJeeves Grows Up: An AI Chatbot for Performance and Quality
Jeeves Grows Up: An AI Chatbot for Performance and Quality
 

Kürzlich hochgeladen

CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
amitlee9823
 
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts ServiceCall Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
FESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdfFESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdf
MarinCaroMartnezBerg
 
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in  KishangarhDelhi 99530 vip 56974 Genuine Escort Service Call Girls in  Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
shivangimorya083
 
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
shivangimorya083
 

Kürzlich hochgeladen (20)

CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Saket (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
Accredited-Transport-Cooperatives-Jan-2021-Web.pdf
Accredited-Transport-Cooperatives-Jan-2021-Web.pdfAccredited-Transport-Cooperatives-Jan-2021-Web.pdf
Accredited-Transport-Cooperatives-Jan-2021-Web.pdf
 
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdfMarket Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
 
Discover Why Less is More in B2B Research
Discover Why Less is More in B2B ResearchDiscover Why Less is More in B2B Research
Discover Why Less is More in B2B Research
 
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% SecureCall me @ 9892124323  Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
Call me @ 9892124323 Cheap Rate Call Girls in Vashi with Real Photo 100% Secure
 
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
Junnasandra Call Girls: 🍓 7737669865 🍓 High Profile Model Escorts | Bangalore...
 
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts ServiceCall Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
Call Girls In Shalimar Bagh ( Delhi) 9953330565 Escorts Service
 
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
Best VIP Call Girls Noida Sector 39 Call Me: 8448380779
 
FESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdfFESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdf
 
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in  KishangarhDelhi 99530 vip 56974 Genuine Escort Service Call Girls in  Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
 
Edukaciniai dropshipping via API with DroFx
Edukaciniai dropshipping via API with DroFxEdukaciniai dropshipping via API with DroFx
Edukaciniai dropshipping via API with DroFx
 
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
Best VIP Call Girls Noida Sector 22 Call Me: 8448380779
 
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
 
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls Punjabi Bagh 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
 
Data-Analysis for Chicago Crime Data 2023
Data-Analysis for Chicago Crime Data  2023Data-Analysis for Chicago Crime Data  2023
Data-Analysis for Chicago Crime Data 2023
 
Ravak dropshipping via API with DroFx.pptx
Ravak dropshipping via API with DroFx.pptxRavak dropshipping via API with DroFx.pptx
Ravak dropshipping via API with DroFx.pptx
 
Halmar dropshipping via API with DroFx
Halmar  dropshipping  via API with DroFxHalmar  dropshipping  via API with DroFx
Halmar dropshipping via API with DroFx
 
Introduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptxIntroduction-to-Machine-Learning (1).pptx
Introduction-to-Machine-Learning (1).pptx
 
Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...
Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...
Call Girls in Sarai Kale Khan Delhi 💯 Call Us 🔝9205541914 🔝( Delhi) Escorts S...
 
BDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort Service
BDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort ServiceBDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort Service
BDSM⚡Call Girls in Mandawali Delhi >༒8448380779 Escort Service
 

Continuous Application with FAIR Scheduler with Robert Xue

  • 1. Robert Xue Software Engineer, Groupon CONTINUOUS APPLICATION WITH FAIR SCHEDULER
  • 2. About me Groupon Online Defense • Malicious behavior detection • A log stream processing engine Tech stack • Kafka • Storm • Spark on Yarn • In-house hardware Software engineer, Behavioral Modeling team
  • 3. Scheduler • Code name: SchedulableBuilder • FIFO and FAIR. • Update spark.scheduler.mode to switch Job pool scheduling mode • Code name SchedulingAlgorithm • FIFO and FAIR, applies to FAIR scheduler only • Update fairscheduler.xml to decide Application • Created by spark-submit Job • A group of tasks • Unit of work to be submitted Task • Unit of work to be scheduled Glossary
  • 4. What does a scheduler do? • Determine who can use the resources • Tweak to optimize ‣ Total execution time ‣ Resource utilization ‣ Total CPU seconds
  • 5. When does your scheduler work? • Only when you have more than one job submitted to spark context • Writing your rdd operations line by line != multiple jobs submitted at the same time Range(0, 24).foreach { i => sc.textFile(s”file-$i").count() }
  • 6. Run - 1 Sequentially submitting 24 file reading jobs • 4 parts, ~500MB files * 24 • 8 cores • 110 seconds Range(0, 24).foreach { i => sc.textFile(s”file-$i").count() } Node_local - Rack_local - Any - Unused • 8 * 110 * 11.33% = 99.68 cpu seconds • ~20 individual jobs clearly visible
  • 7. Submit your job in parallel • Java Runnable • Python threading • Scala Future • Scalaz Task • Monix Task Range(0, 24).foreach { i => sc.textFile(s”file-$i").count() } Range(0, 24).par.foreach { i => sc.textFile(s”file-$i").count() }
  • 8. Run - 2 Parallel submit & FIFO(24) • 8 cores • 110 22 seconds • 8 * 22 * 82.04% = 99.68144.32 cpu seconds • 015 seconds 100% utilization period Range(0, 24).par.foreach { i => sc.textFile(s”file-$i").count() } Node_local - Rack_local - Any - Unused
  • 9. Turn on FAIR scheduler Provide --conf spark.scheduler.mode=FAIR in your spark submit… … it is that simple
  • 10. Run - 3 Parallel submit & FAIR(24) • 8 cores • 22 15 seconds • 8 * 15 * 73.74% = 144.3288.48record cpu seconds • BadGreat locality Node_local - Rack_local - Any - Unused Range(0, 24).par.foreach { i => sc.textFile(s”file-$i").count() }
  • 11. Tweak your locality config Provide --conf spark.locality.wait=1s in your spark submit… • spark.locality.wait.process/node/rack available as well • default is 3s
  • 12. Run - 4 Parallel submit & FAIR(24) & locality.wait=1s • 8 cores • 15 12record seconds • 8 * 12 * 97.64% = 88.4893.68 cpu seconds Node_local - Rack_local - Any - Unused Range(0, 24).par.foreach { i => sc.textFile(s”file-$i").count() }
  • 13. Summary • Sequential: poor cluster utilization, long execution time, locality hard to tweak; the default behavior in spark-shell • Parallel {FIFO}: good cluster utilization, short execution time, locality hard to tweak; the default behavior in notebooks • Parallel {FAIR}: good cluster utilization, short execution time, locality easy to tweak
  • 14. Standalone applications Stream, Batch, Adhoc query, submitted from multiple apps Read data Model Execution Stream Process Reporting
  • 15. Continuous application Stream, Batch, Adhoc query, parallel submitted in one app Read data ◼ Model Execution ◼ Stream Process ◼ Reporting ◼
  • 16. Continuous Application: multiple standalone applications vs. one continuous application • Sharing context: standalone apps share file content via a database, file system, network, etc.; a continuous app simply passes RDD references around • Resource allocation: standalone is static, configuring more cores than typical load to handle peak traffic; continuous is dynamic, with less important tasks yielding CPU to critical tasks • Job scheduling: crontab, oozie, airflow… all approaches lead to spark-submit, typically a 20s - 120s overhead; a continuous app runs spark-submit once and only once
  • 17. At t=2.5, Series A and B finished processing the stream input from t=1.0 A round of model execution was triggered at t=2.5 The model ran on all data available at that moment
  • 18. At t=3.1, the app received stream input from t=3.0 Series A was still processing the input from t=2.0 Series B started processing the new input
  • 19. At t=3.6, Series A finished processing the stream input from t=2.0 Series A started processing the stream input from t=3.0 The model had a pending execution triggered at t=3.6
  • 20. At t=4.3, Series A and B finished processing the stream input from t=3.0 The model had a pending execution triggered at t=4.3 The model's pending execution from t=3.6 was cancelled
  • 21. At t=4.6, the model finished its execution started at t=2.5 Series B finished processing the stream input from t=4.0 The model started the execution triggered at t=4.3, using the data available at t=4.6 as input
  • 22. Example: 4 models executing in parallel from a 16 cores app
  • 23. Practices for Continuous Application Decouple stream processing from batch processing • Batch interval is merely your minimum resolution Emphasize tuning for the streaming part • Assign it to a scheduler pool with a minimum core guarantee Execute only the latest batch invocation • Assign it to a scheduler pool with high weight Reporting and on-site query • Assign them to a scheduler pool with low weight
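The "execute only the latest batch invocation" rule (illustrated by the cancelled t=3.6 run in the timeline slides) can be sketched with a one-slot pending queue; this is plain Scala with illustrative names, not Spark API:

```scala
import java.util.concurrent.atomic.AtomicReference

// One-slot pending queue: triggering again before the worker drains the
// slot simply replaces the stale invocation with the latest one, so the
// model never wastes cores on a batch that is already out of date.
object LatestOnly {
  private val pending = new AtomicReference[Option[Double]](None)

  // called whenever a new model run is requested
  def trigger(t: Double): Unit = pending.set(Some(t))

  // called by the worker thread when it becomes free
  def drain(): Option[Double] = pending.getAndSet(None)

  def main(args: Array[String]): Unit = {
    trigger(3.6)     // model triggered at t=3.6
    trigger(4.3)     // t=4.3 supersedes it before the worker is free
    println(drain()) // only the latest invocation actually runs
    println(drain()) // nothing left pending
  }
}
```

In the real app the worker would submit the model job to its scheduler pool from a separate thread after each non-empty drain.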
  • 24. Summary Coding • Share data by passing RDD/DStream/DF/DS • Always launch jobs in a separate thread • No complex logic in the streaming operation • Push batch job invocations into a queue • Execute only the latest batch job App submission • Turn on FAIR scheduler mode • Provide fairscheduler.xml • Turn off stream back pressure for Streaming app • Turn off dynamic allocation for Streaming app • Turn on dynamic allocation for long-lived batch app Job bootstrapping • sc.setJobGroup("group", "description") • sc.setLocalProperty("spark.scheduler.pool", "pool") • rdd.setName("my data at t=0").persist() Packaging • Multiple logics, one repo, one jar • Batch app can be long-lived as well • Replace crontab with continuous app + REST Tuning • Understand your job's opportunity cost • Tweak spark.locality.wait parameters • Config cores that yield acceptable SLA with good resource utilization
  • 25. Thank You. Robert Xue, Software Engineer, Groupon rxue@groupon.com | @roboxue Tool used in this deck: groupon/sparklint Open sourced on GitHub since Spark Summit EU 2016
  • 26. Q & A
  • 27. Why do you need a scheduler - 3 Submitting 24 file reading jobs in parallel - FIFO(4) • 8 cores • 27 seconds (was 22) • 8 * 27 * 46.74% = 100.88 cpu seconds (was 99.68) • Improved locality (was bad) implicit val ec = ExecutionContext.fromExecutor( new ForkJoinPool(4)) Range(0, 24).foreach(i => Future { sc.textFile(s"file-$i").count() }) Node_local - Rack_local - Any - Unused
  • 28. Why do you need a scheduler - 4 Submitting 24 file reading jobs in parallel - FAIR(4) • 8 cores • 32 seconds (was 22) • 8 * 32 * 40.92% = 104.72 cpu seconds (was 99.68) • Great locality (was bad) // --conf spark.scheduler.mode=FAIR implicit val ec = ExecutionContext.fromExecutor( new ForkJoinPool(4)) Range(0, 24).foreach(i => Future { sc.textFile(s"file-$i").count() }) Node_local - Rack_local - Any - Unused
  • 29. Back up: locality settings matters spark.scheduler.mode=FAIR spark.locality.wait=500ms spark.scheduler.mode=FAIR spark.locality.wait=20s Node_local - Rack_local - Any - Unused
  • 30. Backup: Scheduler summary FIFO scheduler • Sequential job submit: under-utilized cluster, good locality, low core usage, long execution time • Parallel job submit, under-parallelized: under-utilized cluster, poor locality, high core usage, short execution time • Parallel job submit, perfectly parallelized: well-utilized cluster, poor locality, high core usage, short execution time • Parallel job submit, over-parallelized / poor locality settings: well-utilized cluster, poor locality, high core usage, long execution time FAIR scheduler • Sequential job submit: N/A • Parallel job submit, under-parallelized: under-utilized cluster, good locality, low core usage, short execution time • Parallel job submit, perfectly parallelized: well-utilized cluster, good locality, low core usage, short execution time Node_local - Rack_local - Any - Unused
  • 31. Backup: What's wrong with Dynamic Allocation in streaming … if you are analyzing time series data / "tensor" … - Streaming app, always up. Workloads come periodically and sequentially. - Core usage graph has a saw-tooth pattern - Less likely to return executors if your batch interval < executorIdleTimeout - Dynamic allocation is off if the executor has cache on it - "by default executors containing cached data are never removed" - Dynamic allocation == "resource blackhole" ++ "poor utilization"