© 2015 IBM Corporation
A Java Implementer's Guide to
Better Apache Spark
Performance
Tim Ellison
IBM Runtimes Team, Hursley, UK
tellison
@tpellison
Apache Spark is a fast, general
purpose cluster computing platform
[Diagram: the Spark stack – Spark Core at the base, with SQL, Streaming, Machine Learning, and Graph components on top, plus DataFrames and Machine Learning Pipelines.]
Apache Spark APIs
 Spark Core
– Provides APIs for working with raw data collections
– Map / reduce functions to transform and evaluate the data
– Filter, aggregation, grouping, joins, sorting
 Spark SQL
– APIs for working with structured and semi-structured data
– Loads data from a variety of sources (DB2, JSON, Parquet, etc)
– Provides SQL interface to external tools (JDBC/ODBC)
 Spark Streaming
– Discretized streams of data arriving over time
– Fault tolerant and long running tasks
– Integrates with batch processing of data
 Machine Learning (MLlib)
– Efficient, iterative algorithms across distributed datasets
– Focus on parallel algorithms that run well on clusters
– Relatively low-level (e.g. K-means, alternating least squares)
 Graph Computation (GraphX)
– View the same data as graph or collection-based
– Transform and join graphs to manipulate data sets
– PageRank, Label propagation, strongly connected, triangle count, ...
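These APIs share the same transform-then-act shape. As a rough, Spark-free sketch, the core map/filter/reduce style can be mimicked with plain java.util.stream (no cluster, no RDDs – just the shape of the computation):

```java
import java.util.Arrays;
import java.util.List;

public class WordCountSketch {
    // Count words across input lines the way a Spark map/reduce job would,
    // using plain java.util.stream as a stand-in for the RDD API.
    public static long countWords(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.split("\\s+"))) // "map" each line to words
                .filter(w -> !w.isEmpty())                          // drop empty tokens
                .count();                                           // terminal "action"
    }

    public static void main(String[] args) {
        System.out.println(countWords(List.of("hello spark", "hello jvm"))); // 4
    }
}
```

The real Spark equivalent runs the same flatMap/filter stages per partition on each executor, with the count aggregated by the driver.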
Cluster Computing Platform
 Master Node “the driver”
Evaluates user operations
– Creates a physical execution plan to obtain the final result (a “job”)
– Works backwards to determine what individual “tasks” are required to
produce the answer
– Optimizes the required tasks using pipelining for parallelizable tasks,
reusing intermediate results, including persisting temporary states, etc
(“stages of the job”)
– Distributes work out to worker nodes
– Tracks the location of data and tasks
– Deals with errant workers
 Worker Nodes “the executors” in a cluster
Executes tasks
– Receives a copy of the application code
– Receives data, or the location of data partitions
– Performs the required operation
– Writes output as the input to another task, or to storage
[Diagram: the driver splits jobs into tasks and dispatches them to executors on the worker nodes.]
Resilient Distributed Dataset
 The Resilient Distributed Dataset (RDD) is the target of program operations
 Conceptually, one large collection of all your data elements – can be huge!
 Can be the original input data, or intermediate results from other operations
 In the Spark implementation, RDDs are:
– Further decomposed into partitions
– Persisted in memory or on disk
– Fault tolerant
– Lazily evaluated
– Location-aware (each partition has preferred locations)
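The "lazily evaluated" point is easy to demonstrate outside Spark: Java streams have the same property, where intermediate operations do no work until a terminal operation runs. A small plain-Java sketch (not Spark code):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyDemo {
    // Returns {evaluations before the terminal op, evaluations after}.
    public static int[] trace() {
        AtomicInteger evaluated = new AtomicInteger();
        Stream<Integer> pipeline = Stream.of(1, 2, 3)
                .map(x -> { evaluated.incrementAndGet(); return x * 2; }); // lazy: not run yet
        int before = evaluated.get();   // 0 – nothing has been evaluated
        pipeline.forEach(x -> { });     // terminal op drives the whole pipeline
        return new int[] { before, evaluated.get() };
    }

    public static void main(String[] args) {
        int[] t = trace();
        System.out.println(t[0] + " " + t[1]); // 0 3
    }
}
```

Spark RDD transformations behave the same way: map/filter only describe the computation; an action (count, collect, save) triggers it.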
[Diagram: RDD1 decomposed into partitions 1..n; each RDD is derived from another via a function f(x), with a partitioner and preferred locations per partition.]
Performance of the Apache Spark Runtime Core
 Moving data blocks
– How quickly can a worker get the data needed for this task?
– How quickly can a worker persist the results if required?
 Executing tasks
– How quickly can a worker sort, compute, transform, … the data in this partition?
– Can a fast worker work-steal or run speculative tasks?
 “Narrow” RDD dependencies, e.g. map() – pipeline-able
 “Wide” RDD dependencies, e.g. reduce() – shuffles
[Diagram: with narrow dependencies each partition of the child RDD depends on a single parent partition; with wide dependencies each child partition depends on data shuffled from many parent partitions.]
A few things we can do with the JVM to enhance the performance of
Apache Spark!
1) JIT compiler enhancements, and writing JIT-friendly code
2) Improving the object serializer
3) Faster IO – networking and storage
4) Offloading tasks to graphics co-processors (GPUs)
JIT compiler enhancements, and writing JIT-friendly code
JNI calls are not free!
https://github.com/xerial/snappy-java/blob/develop/src/main/java/org/xerial/snappy/SnappyNative.cpp
Style: Using JNI has an impact...
 The cost of calling from Java code to natives and from natives to Java code is significantly
higher (maybe 5x longer) than a normal Java method call.
– The JIT can't in-line native methods.
– The JIT can't do data flow analysis into JNI calls
• e.g. it has to assume that all parameters are always used.
– The JIT has to set up the call stack and parameters for C calling convention,
• i.e. maybe rearranging items on the stack.
 JNI can introduce additional data copying costs
– There's no guarantee that you will get a direct pointer to the array / string with
Get<type>ArrayElements(), even when using the GetPrimitiveArrayCritical
versions.
– The IBM JVM will always return a copy (to allow GC to continue).
 Tip:
– JNI natives are more expensive than plain Java calls.
– e.g. create an Unsafe-based, Snappy-like package written in pure Java code so that the JNI cost is eliminated.
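To make the trade-off concrete, here is a toy pure-Java codec in the spirit of that tip – a naive run-length encoder, not the real Snappy format and not an Unsafe-based implementation, just a sketch of keeping hot compression paths in Java where the JIT can inline them:

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class RleCodec {
    // Toy run-length encoding: (count, byte) pairs. Stands in for a
    // "Snappy-like" pure-Java codec that avoids any JNI call overhead.
    public static byte[] compress(byte[] in) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int i = 0;
        while (i < in.length) {
            byte b = in[i];
            int run = 1;
            while (i + run < in.length && in[i + run] == b && run < 255) run++;
            out.write(run);  // run length (1..255)
            out.write(b);    // the repeated byte
            i += run;
        }
        return out.toByteArray();
    }

    public static byte[] decompress(byte[] in) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 0; i < in.length; i += 2)
            for (int r = 0; r < (in[i] & 0xFF); r++) out.write(in[i + 1]);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = "aaaabbbcc".getBytes();
        byte[] packed = compress(data);
        System.out.println(packed.length);                              // 6
        System.out.println(Arrays.equals(data, decompress(packed)));    // true
    }
}
```

Because both paths are plain Java methods, the JIT is free to inline them into callers and apply data-flow analysis across the call – exactly what it cannot do across a JNI boundary.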
Style: Use JIT optimizations to reduce overhead of logging checks
 Tip: Check for the non-null value of a static field ref to instance of a logging class singleton
– Uses the JIT's speculative optimization to avoid the explicit test for logging being enabled; instead it:
1) Generates an internal JIT runtime assumption (e.g. InfoLogger.class is undefined),
2) NOPs the test for trace enablement,
3) Uses a class initialization hook for InfoLogger.class (already necessary for instantiating the class),
4) Regenerates the test code if the class initialization event is fired
 Spark's logging calls are gated on checks of a static boolean value (see Spark's trait Logging)
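A minimal sketch of the pattern (the class and field names here are illustrative, not Spark's actual logging code):

```java
public class InfoLogger {
    // Singleton held in a static field; stays null while logging is disabled,
    // so the guard below is a simple null check the JIT can speculate away.
    private static InfoLogger logger;
    public static int messages;

    public static void enable() { logger = new InfoLogger(); }

    private void info(String msg) { messages++; }  // the real logging work

    // Call sites guard on the static field. While InfoLogger stays
    // uninstantiated the JIT can NOP this test under a class-initialization
    // assumption, and regenerate it if enable() ever runs.
    public static void log(String msg) {
        if (logger != null) logger.info(msg);
    }

    public static void main(String[] args) {
        log("dropped");   // logger is null: no work done
        enable();
        log("recorded");
        System.out.println(messages); // 1
    }
}
```

The point is that the enabled/disabled decision is carried by class state the JIT already tracks, rather than by a boolean it must re-test on every call.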
Style: Judicious use of polymorphism
 Spark has a number of highly polymorphic interface call sites and high fan-in (several calling contexts
invoking the same callee method) in map, reduce, filter, flatMap, ...
– e.g. ExternalSorter.insertAll is very hot (drains an iterator using hasNext/next calls)
 Pattern #1:
– InterruptibleIterator → Scala's mapIterator → Scala's filterIterator → …
 Pattern #2:
– InterruptibleIterator → Scala's filterIterator → Scala's mapIterator → …
 The JIT can only choose one pattern to in-line!
– Makes JIT devirtualization and speculation more risky; using profiling information from a different
context could lead to incorrect devirtualization.
– More conservative speculation, or good phase change detection and recovery are needed in the JIT
compiler to avoid getting it wrong.
 Lambdas and functions as arguments, by definition, introduce different code flow targets
– Passing in widely implemented interfaces produce many different bytecode sequences
– When we in-line we have to put runtime checks ahead of in-lined method bodies to make sure we are
going to run the right method!
– Often specialized classes are used only in a very limited number of places, but the majority of the code
does not use these classes and pays a heavy penalty
– e.g. Scala's attempt to specialize Tuple2 Int argument does more harm than good!
 Tip: Use polymorphism sparingly, use the same order / patterns for nested & wrappered code, and
keep call sites homogeneous.
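One way to apply the tip is to funnel all pipelines through a single canonical wrapper, so the hot hasNext()/next() call sites only ever see one concrete receiver type. A hypothetical sketch (not Spark's or Scala's actual iterator classes):

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.IntPredicate;
import java.util.function.IntUnaryOperator;

public class Pipelines {
    // One canonical wrapper that always applies map THEN filter, so every
    // pipeline built this way presents the same concrete iterator type at
    // the hot hasNext()/next() call sites (monomorphic, inline-friendly).
    static final class MapFilterIterator implements Iterator<Integer> {
        private final Iterator<Integer> src;
        private final IntUnaryOperator map;
        private final IntPredicate filter;
        private Integer next;

        MapFilterIterator(Iterator<Integer> src, IntUnaryOperator map, IntPredicate filter) {
            this.src = src; this.map = map; this.filter = filter;
            advance();
        }
        private void advance() {
            next = null;
            while (src.hasNext()) {
                int v = map.applyAsInt(src.next());
                if (filter.test(v)) { next = v; return; }
            }
        }
        public boolean hasNext() { return next != null; }
        public Integer next()    { Integer v = next; advance(); return v; }
    }

    public static int sum(List<Integer> xs, IntUnaryOperator map, IntPredicate filter) {
        int s = 0;
        // Hot loop: the receiver of hasNext()/next() is always MapFilterIterator.
        for (Iterator<Integer> it = new MapFilterIterator(xs.iterator(), map, filter); it.hasNext(); )
            s += it.next();
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1, 2, 3, 4), x -> x * 10, x -> x > 15)); // 90
    }
}
```

Contrast this with freely nesting a map-wrapper around a filter-wrapper in one place and the reverse in another: the same drain loop then sees two receiver types and the JIT must devirtualize speculatively or not inline at all.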
Effect of Adjusting JIT heuristics for Apache Spark
Benchmark        IBM JDK8 SR3 (tuned)   IBM JDK8 SR3 (out of the box)
PageRank              160%                   148%
Sleep                 101%                   113%
Sort                  103%                   147%
WordCount             130%                   146%
Bayes                 100%                    91%
Terasort              157%                   131%
Geometric mean        121%                   116%

Performance relative to OpenJDK 8: 1/geometric mean of HiBench time on zLinux, 32 cores, 25G heap.

[Chart: improvements in successive IBM Java 8 releases – 1.35x overall; performance compared with OpenJDK 8, HiBench huge, Spark 1.5.2, Linux Power8 12 core * 8-way SMT.]
Replacing the object serializer
Writing a Spark-friendly object serializer
 Spark has a plug-in architecture for flattening objects to storage
– Typically uses general purpose serializers, e.g. Java serializer, or Kryo, etc.
 Can we optimize for Spark usage?
– Goal: Reduce the time to flatten objects
– Goal: Reduce size of flattened objects
 Expanding the list of specialist serialized forms
– Having custom write/read object methods allows for reduced time in reflection and smaller on-wire payloads.
– Types such as Tuple and Some given special treatment in the serializer
 Sharing object representation within the serialized stream to reduce payload
– But may be defeated if supportsRelocationOfSerializedObjects is required
 Reduce the payload size further using variable length encoding of primitive types.
– All objects are eventually decomposed into primitives
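Variable-length encoding is worth a concrete example. A minimal LEB128-style varint codec (a common scheme; not necessarily the exact encoding the IBM serializer uses) stores small ints in one byte instead of a fixed four:

```java
import java.io.ByteArrayOutputStream;

public class Varint {
    // LEB128-style varint: 7 payload bits per byte, high bit = "more bytes follow".
    public static byte[] encode(int value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long v = value & 0xFFFFFFFFL;               // treat as unsigned
        while (v >= 0x80) {
            out.write((int) (v & 0x7F) | 0x80);     // low 7 bits + continuation bit
            v >>>= 7;
        }
        out.write((int) v);                         // final byte, continuation bit clear
        return out.toByteArray();
    }

    public static int decode(byte[] bytes) {
        int result = 0, shift = 0;
        for (byte b : bytes) {
            result |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) break;             // last byte of the value
            shift += 7;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Varint.encode(7).length);    // 1 byte instead of 4
        System.out.println(Varint.encode(300).length);  // 2 bytes
        System.out.println(Varint.decode(Varint.encode(300))); // 300
    }
}
```

Since serialized RDD data eventually decomposes into primitives, and typical values (counts, lengths, small ids) are near zero, this shrinks the payload at the cost of a little shift/mask work on each value.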
Writing a Spark-friendly object serializer
 Adaptive stack-based recursive serialization vs. state machine serialization
– Use the stack to track state wherever possible, but fall back to state machine for deeply
nested objects (e.g. big RDDs)
 Special replacement of deserialization calls to avoid stack-walking to find class loader
context
– Optimization in JIT to circumvent some regular calls to more efficient versions
 Tip: These are opaque to the application, no special patterns required.
 Results: variable; gains of a few percent at best
Faster IO – networking and storage
Remote Direct Memory Access (RDMA) Networking
[Diagram: two Spark nodes, each a Spark VM with heap and off-heap buffers, connected by RDMA NIC/HCAs through an Ethernet/InfiniBand switch. Zero-copy (Z-Copy) DMA transfers move data directly between off-heap buffers, bypassing the OS; buffer-copy (B-Copy) transfers pass through the kernel.]
Acronyms:
Z-Copy – Zero Copy
B-Copy – Buffer Copy
IB – InfiniBand
Ether - Ethernet
NIC – Network Interface Card
HCA – Host Channel Adapter
● Low-latency, high-throughput networking
● Direct 'application to application' memory pointer exchange between remote hosts
● Off-load network processing to RDMA NIC/HCA – OS/Kernel bypass (zero-copy)
● Introduces new IO characteristics that can influence the Spark transfer plan
[Chart: RDMA exhibits improved throughput and reduced latency compared with TCP/IP.]
Available over java.net.Socket APIs or explicit jVerbs calls
Faster network IO with RDMA-enabled Spark
 New dynamic transfer plan that adapts to the load and responsiveness of the remote hosts (JVM-agnostic working prototype with RDMA)
 New “RDMA” shuffle IO mode with lower latency and higher throughput (IBM JVM only)
[Diagram: the layered design, from the high-level API down to block manipulation (i.e., RDD partitions); the upper layers are JVM-agnostic, while the RDMA transport itself is IBM JVM only.]
Shuffling data shows 30% better response time and lower
CPU utilization
Faster storage with POWER CAPI/Flash
 POWER8 architecture offers a 40 TB flash drive attached via Coherent Accelerator Processor Interface (CAPI)
– Provides simple coherent block IO APIs
– No file system overhead
 Power Service Layer (PSL)
– Performs Address Translations
– Maintains Cache
– Simple, but powerful interface to the Accelerator unit
 Coherent Accelerator Processor Proxy (CAPP)
– Maintains directory of cache lines held by Accelerator
– Snoops PowerBus on behalf of Accelerator
Faster disk IO with CAPI/Flash-enabled Spark
 When under memory pressure, Spark spills RDDs to disk.
– Happens in ExternalAppendOnlyMap and ExternalSorter
 We have modified Spark to spill to the high-bandwidth, coherently-attached
Flash device instead.
– Replacement for DiskBlockManager
– New FlashBlockManager handles spill to/from flash
 Making this pluggable requires some further abstraction in Spark:
– Spill code assumes disks are used, and depends on DiskBlockManager
– We are spilling without using a file system layer
 Dramatically improves performance of executors under memory pressure.
 Allows similar performance to be reached with much less memory (denser deployments).
[Photos: Power8 + CAPI; IBM Flash System 840.]
e.g. using CAPI Flash for RDD
caching allows for 4X memory
reduction while maintaining equal
performance
Offloading tasks to graphics co-processors
GPU-enabled array sort method
IBM Power 8 with Nvidia K40m GPU
 Some Arrays.sort() methods will offload work to GPUs today
– e.g. sorting large arrays of ints
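The offload is transparent behind the standard API, so the call site is just an ordinary Arrays.sort(). Whether it actually reaches the GPU depends on the JDK, its flags, and the array size; on other JVMs the same call runs the usual dual-pivot quicksort:

```java
import java.util.Arrays;
import java.util.Random;

public class SortDemo {
    // A plain Arrays.sort() call on a large int[]. On a GPU-enabled IBM JDK
    // this same call may be offloaded to the device; the Java code is
    // identical either way.
    public static int[] sorted(int n, long seed) {
        int[] a = new Random(seed).ints(n).toArray();
        Arrays.sort(a);
        return a;
    }

    public static void main(String[] args) {
        int[] a = sorted(1_000_000, 42L);
        System.out.println(a[0] <= a[a.length - 1]); // true
    }
}
```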
JIT optimized GPU acceleration
 Comes with caveats
– Recognize a limited set of operations within the lambda expressions,
• notably no object references maintained on GPU
– Default grid dimensions and operating parameters for the GPU
workload
– Redundant/pessimistic data transfer between host and device
• Not using GPU shared memory
– Limited heuristics about when to invoke the GPU and when to
generate CPU instructions
 As the JIT compiles a stream expression we can identify candidates for GPU off-loading
– Arrays copied to and from the device implicitly
– Java operations mapped to GPU kernel operations
– Preserves the standard Java syntax and semantics
[Diagram: bytecodes flow into the JIT's intermediate representation and optimizer, which feeds two code generators – one emitting CPU native code, the other PTX ISA for the GPU.]
GPU optimization of Lambda expressions
[Chart: speed-up factor (log scale, 0.01x to 1000x) vs. matrix size on a GPU-enabled host (IBM Power 8 with Nvidia K40m GPU), comparing auto-SIMD, parallel forEach on CPU, and parallel forEach on GPU.]
The JIT can recognize parallel stream
code, and automatically compile down to
the GPU.
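The shape the JIT looks for is a parallel forEach over an index range where each iteration writes disjoint data – e.g. a naive matrix multiply. This is a sketch of the eligible pattern; grid dimensions and host/device transfers are decided by the JIT, not by the code:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class MatMulKernel {
    // Each iteration computes one output cell and touches no shared mutable
    // state, so the loop body can be mapped one-to-one onto GPU threads.
    public static float[] multiply(float[] a, float[] b, int n) {
        float[] c = new float[n * n];
        IntStream.range(0, n * n).parallel().forEach(idx -> {
            int row = idx / n, col = idx % n;
            float sum = 0f;
            for (int k = 0; k < n; k++) sum += a[row * n + k] * b[k * n + col];
            c[idx] = sum;   // disjoint write per iteration
        });
        return c;
    }

    public static void main(String[] args) {
        float[] id = {1, 0, 0, 1};   // 2x2 identity
        float[] m  = {1, 2, 3, 4};
        System.out.println(Arrays.equals(multiply(id, m, 2), m)); // true
    }
}
```

The same code runs correctly on any JVM as an ordinary parallel stream; the GPU path is purely a JIT decision.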
Moving high-level algorithms onto the GPU
[Diagram: learn/predict pipeline for drug–drug interaction prediction. Ingest known interactions (e.g. Aspirin–Gliclazide) and chemical similarity scores (e.g. Salsalate–Aspirin 0.9); combine them into best Sim*Sim features; train a logistic regression model; then predict interaction scores for unseen pairs (e.g. Salsalate–Gliclazide 0.85).]
• 25X speed-up for the model-building stage (replacing Spark MLlib Logistic Regression)
• Transparent to the Spark application, but requires changes to Spark itself
Summary
 We are focused on Core runtime performance to get a multiplier up the Spark stack.
– More efficient code, more efficient memory usage/spilling, more efficient serialization &
networking, etc.
 There are hardware and software technologies we can bring to the party.
– We can tune the stack from hardware to high level structures for running Spark.
 Spark and Scala developers can help themselves by their style of coding.
 All the changes are being made in the Java runtime or
being pushed out to the Spark community.
 There is lots more stuff I don't have time to talk about, like GC optimizations, object layout,
monitoring VM/Spark events, hardware compression, security, etc. etc.
– mailto:
http://ibm.biz/spark-kit
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 

Kürzlich hochgeladen (20)

How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 

A Java Implementer's Guide to Better Apache Spark Performance

• 5. Cluster Computing Platform
 Master Node (“the driver”): evaluates user operations
– Creates a physical execution plan to obtain the final result (a “job”)
– Works backwards to determine what individual “tasks” are required to produce the answer
– Optimizes the required tasks using pipelining for parallelizable tasks, reusing intermediate results, persisting temporary states, etc. (“stages of the job”)
– Distributes work out to worker nodes
– Tracks the location of data and tasks
– Deals with errant workers
 Worker Nodes (“the executors” in a cluster): execute tasks
– Receive a copy of the application code
– Receive data, or the location of data partitions
– Perform the required operation
– Write output to another input, or to storage
(Diagram: the driver dispatches jobs; each executor runs multiple tasks.)

• 6. Resilient Distributed Dataset
 The Resilient Distributed Dataset (RDD) is the target of program operations.
 Conceptually, one large collection of all your data elements; it can be huge!
 It can be the original input data, or intermediate results from other operations.
 In the Spark implementation, RDDs are:
– Further decomposed into partitions
– Persisted in memory or on disk
– Fault tolerant
– Lazily evaluated
– Given a concept of location optimization
(Diagram: RDD1 is derived from partitions 1..n via f(x), with a partitioner and preferred locations.)

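A toy sketch of the decomposition idea above (illustrative only, not Spark's API): one logical collection is split into n partitions by a hash partitioner, so each partition can be processed, persisted, or recomputed independently.

```java
import java.util.ArrayList;
import java.util.List;

final class ToyPartitioner {
    static <T> List<List<T>> partition(List<T> data, int numPartitions) {
        List<List<T>> partitions = new ArrayList<>();
        for (int i = 0; i < numPartitions; i++) {
            partitions.add(new ArrayList<>());
        }
        for (T element : data) {
            // Same rule Spark's HashPartitioner uses: non-negative hash mod n.
            int p = Math.floorMod(element.hashCode(), numPartitions);
            partitions.get(p).add(element);
        }
        return partitions;
    }
}
```

Each resulting sub-list stands in for one RDD partition: an independent unit of work a driver can schedule onto any executor.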
• 7. Performance of the Apache Spark Runtime Core
 Moving data blocks
– How quickly can a worker get the data needed for this task?
– How quickly can a worker persist the results if required?
 Executing tasks
– How quickly can a worker sort, compute, transform, … the data in this partition?
– Can a fast worker work-steal or run speculative tasks?
(Diagram: “narrow” RDD dependencies, e.g. map(), are pipeline-able; “wide” RDD dependencies, e.g. reduce(), shuffle data between partitions.)

• 8. A few things we can do with the JVM to enhance the performance of Apache Spark:
1) JIT compiler enhancements, and writing JIT-friendly code
2) Improving the object serializer
3) Faster IO: networking and storage
4) Offloading tasks to graphics co-processors (GPUs)

• 9. JIT compiler enhancements, and writing JIT-friendly code

• 10. JNI calls are not free!
https://github.com/xerial/snappy-java/blob/develop/src/main/java/org/xerial/snappy/SnappyNative.cpp

• 11. Style: Using JNI has an impact...
 The cost of calling from Java code to natives, and from natives to Java code, is significantly higher (maybe 5x longer) than a normal Java method call.
– The JIT can't in-line native methods.
– The JIT can't do data flow analysis into JNI calls, e.g. it has to assume that all parameters are always used.
– The JIT has to set up the call stack and parameters for the C calling convention, i.e. maybe rearranging items on the stack.
 JNI can introduce additional data copying costs.
– There's no guarantee that you will get a direct pointer to the array / string with Get<type>ArrayElements(), even when using the GetPrimitiveArrayCritical versions.
– The IBM JVM will always return a copy (to allow GC to continue).
 Tip: JNI natives are more expensive than plain Java calls; e.g. create an Unsafe-based Snappy-like package written in Java code so that the JNI cost is eliminated.

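To make the "pure Java instead of JNI" idea concrete, here is a hypothetical sketch of a tiny block codec written entirely in Java, standing in for a "Snappy-like package". It is NOT Snappy's format; the point is that staying on the Java side keeps the hot path visible to the JIT (in-lining, data flow analysis) and avoids the JNI call and copy overheads described above.

```java
import java.io.ByteArrayOutputStream;

// Trivial run-length codec: encodes (count, value) byte pairs.
final class PureJavaRle {

    static byte[] compress(byte[] src) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int i = 0;
        while (i < src.length) {
            int run = 1;
            // Runs longer than 255 are split so the count fits in one byte.
            while (i + run < src.length && src[i + run] == src[i] && run < 255) {
                run++;
            }
            out.write(run);
            out.write(src[i]);
            i += run;
        }
        return out.toByteArray();
    }

    static byte[] uncompress(byte[] src) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 0; i < src.length; i += 2) {
            int run = src[i] & 0xFF;
            for (int r = 0; r < run; r++) {
                out.write(src[i + 1]);
            }
        }
        return out.toByteArray();
    }
}
```

A real replacement would implement Snappy's actual format (possibly with Unsafe-based array access), but the call-cost argument is the same.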
• 12. Style: Use JIT optimizations to reduce the overhead of logging checks
 Spark's logging calls are gated on checks of a static boolean value (the Logging trait).
 Tip: Instead, check for a non-null value of a static field referencing a logging class singleton.
 This uses the JIT's speculative optimization to avoid the explicit test for logging being enabled; the JIT...
1) Generates an internal runtime assumption (e.g. InfoLogger.class is undefined),
2) NOPs the test for trace enablement,
3) Uses a class initialization hook for InfoLogger.class (already necessary for instantiating the class),
4) Regenerates the test code if the class-load event is fired.

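An illustrative sketch of that pattern (the class names are hypothetical, not Spark's actual Logging trait). Call sites test a static reference for null; until InfoLogger is first initialized, a speculative JIT can assume the class is undefined and NOP the branch entirely, recompiling only if the class is later loaded.

```java
final class InfoLogger {
    static volatile InfoLogger INSTANCE;   // null until logging is enabled

    private final StringBuilder sink = new StringBuilder();

    static void enable() { INSTANCE = new InfoLogger(); }

    void info(String message) { sink.append(message).append('\n'); }

    String contents() { return sink.toString(); }
}

final class Worker {
    static int processed;

    static void process(int item) {
        processed += item;
        // Cheap guard: a static field null check, rather than reading a
        // boolean flag and making a virtual isInfoEnabled() call.
        if (InfoLogger.INSTANCE != null) {
            InfoLogger.INSTANCE.info("processed " + item);
        }
    }
}
```

Functionally the guard behaves like any enabled-check; the win is what the JIT can do with it speculatively.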
• 13. Style: Judicious use of polymorphism
 Spark has a number of highly polymorphic interface call sites and high fan-in (several calling contexts invoking the same callee method) in map, reduce, filter, flatMap, ...
– e.g. ExternalSorter.insertAll is very hot (drains an iterator using hasNext/next calls)
 Pattern #1: InterruptibleIterator → Scala's mapIterator → Scala's filterIterator → …
 Pattern #2: InterruptibleIterator → Scala's filterIterator → Scala's mapIterator → …
 The JIT can only choose one pattern to in-line!
– This makes JIT devirtualization and speculation more risky; using profiling information from a different context could lead to incorrect devirtualization.
– More conservative speculation, or good phase-change detection and recovery, are needed in the JIT compiler to avoid getting it wrong.
 Lambdas and functions as arguments, by definition, introduce different code flow targets.
– Passing in widely implemented interfaces produces many different bytecode sequences.
– When we in-line, we have to put runtime checks ahead of in-lined method bodies to make sure we are going to run the right method!
– Often specialized classes are used in only a very limited number of places, but the majority of the code does not use these classes and pays a heavy penalty; e.g. Scala's attempt to specialize Tuple2's Int argument does more harm than good!
 Tip: Use polymorphism sparingly, use the same order / patterns for nested and wrapped code, and keep call sites homogeneous.

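A hypothetical Java sketch of "keep call sites homogeneous" (these are illustrative classes, not Spark's or Scala's): two iterator wrappers like the map/filter iterators above, plus one canonical composition helper. If every pipeline in the code base composes them in the same order (filter inside, map outside), the hasNext/next call sites stay monomorphic and the JIT can devirtualize and in-line one pattern with confidence.

```java
import java.util.Iterator;
import java.util.function.Function;
import java.util.function.Predicate;

final class MapIterator<A, B> implements Iterator<B> {
    private final Iterator<A> in;
    private final Function<A, B> f;
    MapIterator(Iterator<A> in, Function<A, B> f) { this.in = in; this.f = f; }
    public boolean hasNext() { return in.hasNext(); }
    public B next() { return f.apply(in.next()); }
}

final class FilterIterator<A> implements Iterator<A> {
    private final Iterator<A> in;
    private final Predicate<A> p;
    private A pending;
    private boolean hasPending;
    FilterIterator(Iterator<A> in, Predicate<A> p) { this.in = in; this.p = p; }
    public boolean hasNext() {
        // Pull from the source until an element passes the predicate.
        while (!hasPending && in.hasNext()) {
            A a = in.next();
            if (p.test(a)) { pending = a; hasPending = true; }
        }
        return hasPending;
    }
    public A next() { hasNext(); hasPending = false; return pending; }
}

final class Pipelines {
    // One canonical wrapping order used everywhere: filter, then map.
    static <A, B> Iterator<B> filterThenMap(Iterator<A> src,
                                            Predicate<A> p,
                                            Function<A, B> f) {
        return new MapIterator<>(new FilterIterator<>(src, p), f);
    }
}
```

Routing all pipelines through one helper like `filterThenMap` is one way to enforce a single pattern; mixing both orders across the code base is exactly what forces the JIT to pick one pattern and pay for the other.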
• 14. Effect of adjusting JIT heuristics for Apache Spark

Benchmark        IBM JDK8 SR3 (tuned)   IBM JDK8 SR3 (out of the box)
PageRank         160%                   148%
Sleep            101%                   113%
Sort             103%                   147%
WordCount        130%                   146%
Bayes            100%                    91%
Terasort         157%                   131%
Geometric mean   121%                   116%

Scores are 1 / geometric mean of HiBench time on zLinux, 32 cores, 25 GB heap; improvements in successive IBM Java 8 releases.
Performance compared with OpenJDK 8 (HiBench huge, Spark 1.5.2, Linux Power8 12 core * 8-way SMT): 1.35x.

• 15. Replacing the object serializer

• 16. Writing a Spark-friendly object serializer
 Spark has a plug-in architecture for flattening objects to storage.
– Typically uses general purpose serializers, e.g. the Java serializer, or Kryo, etc.
 Can we optimize for Spark usage?
– Goal: Reduce the time to flatten objects
– Goal: Reduce the size of flattened objects
 Expanding the list of specialist serialized forms
– Having custom write/read object methods allows for reduced time in reflection and smaller on-wire payloads.
– Types such as Tuple and Some are given special treatment in the serializer.
 Sharing object representation within the serialized stream to reduce payload
– But this may be defeated if supportsRelocationOfSerializedObjects is required.
 Reduce the payload size further using variable-length encoding of primitive types.
– All objects are eventually decomposed into primitives.

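A minimal sketch of the variable-length integer idea mentioned above; this is the classic LEB128-style varint used by many serializers, not necessarily IBM's actual encoding. Small magnitudes, which dominate most payloads, shrink from 4 bytes to 1 or 2.

```java
import java.io.ByteArrayOutputStream;

final class VarInt {
    // Write 7 bits per byte, low group first; a set high bit means "more bytes".
    static byte[] encode(int value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int v = value;
        while ((v & ~0x7F) != 0) {
            out.write((v & 0x7F) | 0x80);
            v >>>= 7;
        }
        out.write(v);
        return out.toByteArray();
    }

    static int decode(byte[] bytes) {
        int result = 0, shift = 0;
        for (byte b : bytes) {
            result |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) break;   // high bit clear: last byte
            shift += 7;
        }
        return result;
    }
}
```

For example, 5 encodes to one byte and 300 to two, while a full 32-bit value costs five bytes; the encoding pays off when small values are common.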
• 17. Writing a Spark-friendly object serializer
 Adaptive stack-based recursive serialization vs. state-machine serialization
– Use the stack to track state wherever possible, but fall back to a state machine for deeply nested objects (e.g. big RDDs).
 Special replacement of deserialization calls to avoid stack-walking to find the class loader context
– An optimization in the JIT circumvents some regular calls with more efficient versions.
 Tip: These are opaque to the application; no special patterns are required.
 Results: variable; small numbers of percentage points at best.

• 18. Faster IO: networking and storage

• 19. Remote Direct Memory Access (RDMA) Networking
 Low-latency, high-throughput networking
 Direct 'application to application' memory pointer exchange between remote hosts
 Off-loads network processing to the RDMA NIC/HCA: OS/kernel bypass (zero-copy)
 Introduces new IO characteristics that can influence the Spark transfer plan
(Diagram: two Spark nodes exchange off-heap buffers by DMA over an Ethernet/InfiniBand switch, bypassing the OS.)
Acronyms: Z-Copy = zero copy; B-Copy = buffer copy; IB = InfiniBand; Ether = Ethernet; NIC = Network Interface Card; HCA = Host Control Adapter.

• 20. Compared with TCP/IP, RDMA exhibits improved throughput and reduced latency. It is available over the java.net.Socket APIs or via explicit jVerbs calls.

• 21. Faster network IO with RDMA-enabled Spark
 A new dynamic transfer plan adapts to the load and responsiveness of the remote hosts.
 A new “RDMA” shuffle IO mode offers lower latency and higher throughput.
(Diagram: the layered stack, from block manipulation of RDD partitions up to the high-level API, with components marked as JVM-agnostic or IBM JVM only; a JVM-agnostic working prototype with RDMA.)

• 22. Shuffling data shows 30% better response time and lower CPU utilization.

• 23. Faster storage with POWER CAPI/Flash
 The POWER8 architecture offers a 40 TB Flash drive attached via the Coherent Accelerator Processor Interface (CAPI).
– Provides simple coherent block IO APIs
– No file system overhead
 Power Service Layer (PSL)
– Performs address translations
– Maintains a cache
– Simple but powerful interface to the Accelerator unit
 Coherent Accelerator Processor Proxy (CAPP)
– Maintains a directory of cache lines held by the Accelerator
– Snoops the PowerBus on behalf of the Accelerator

• 24. Faster disk IO with CAPI/Flash-enabled Spark
 When under memory pressure, Spark spills RDDs to disk.
– This happens in ExternalAppendOnlyMap and ExternalSorter.
 We have modified Spark to spill to the high-bandwidth, coherently-attached Flash device instead.
– A replacement for DiskBlockManager
– A new FlashBlockManager handles spill to/from flash
 Making this pluggable requires some further abstraction in Spark:
– The spill code assumes it is using disks, and depends on DiskBlockManager.
– We are spilling without using a file system layer.
 Dramatically improves the performance of executors under memory pressure.
 Allows similar performance to be reached with much less memory (denser deployments).
(Hardware: Power8 + CAPI, IBM FlashSystem 840.)

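A hypothetical sketch of the pluggable abstraction being described: spill targets behind one interface, so a DiskBlockManager-style store and a FlashBlockManager-style store become interchangeable. Names and shapes are illustrative only; Spark's real BlockManager API is considerably richer, and the store here is kept in memory so the sketch runs without disks or CAPI hardware.

```java
import java.util.HashMap;
import java.util.Map;

interface SpillStore {
    void write(String blockId, byte[] data);
    byte[] read(String blockId);
}

// Stand-in for a file-system- or flash-backed store.
final class MapBackedSpillStore implements SpillStore {
    private final Map<String, byte[]> blocks = new HashMap<>();
    public void write(String blockId, byte[] data) { blocks.put(blockId, data.clone()); }
    public byte[] read(String blockId) { return blocks.get(blockId); }
}

final class Spiller {
    private final SpillStore store;   // disk, flash, ... chosen at configuration time

    Spiller(SpillStore store) { this.store = store; }

    String spill(String partition, byte[] payload) {
        String blockId = "spill-" + partition;
        store.write(blockId, payload);
        return blockId;
    }

    byte[] unspill(String blockId) { return store.read(blockId); }
}
```

The point of the interface is exactly the abstraction gap the slide mentions: the spill path stops assuming "a file on a disk" and only assumes "a block store".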
• 25. e.g. Using CAPI Flash for RDD caching allows a 4x memory reduction while maintaining equal performance.

• 26. Offloading tasks to graphics co-processors

• 27. GPU-enabled array sort method
 Some Arrays.sort() methods will offload work to GPUs today, e.g. sorting large arrays of ints.
(Measured on IBM Power 8 with an Nvidia K40m GPU.)

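The call pattern is just the standard API. On a GPU-enabled IBM JDK, a sufficiently large primitive sort like the one below may be dispatched to the GPU transparently; on other JVMs the identical code runs on the CPU with identical results. The size threshold and offload decision are runtime implementation details, not something the application controls.

```java
import java.util.Arrays;
import java.util.Random;

final class SortDemo {
    // Fill a large int array with pseudo-random data and sort it.
    static int[] sortedCopy(int seed, int size) {
        Random r = new Random(seed);
        int[] data = new int[size];
        for (int i = 0; i < size; i++) {
            data[i] = r.nextInt();
        }
        Arrays.sort(data);   // candidate for GPU offload on capable runtimes
        return data;
    }
}
```

Because the semantics are unchanged, code written this way benefits from the offload where it exists and loses nothing where it does not.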
• 28. JIT-optimized GPU acceleration
 As the JIT compiles a stream expression, we can identify candidates for GPU off-loading.
– Arrays are copied to and from the device implicitly.
– Java operations are mapped to GPU kernel operations.
– The standard Java syntax and semantics are preserved.
 This comes with caveats:
– Recognizes a limited set of operations within the lambda expressions; notably, no object references are maintained on the GPU.
– Default grid dimensions and operating parameters for the GPU workload.
– Redundant/pessimistic data transfer between host and device; GPU shared memory is not used.
– Limited heuristics about when to invoke the GPU and when to generate CPU instructions.
(Diagram: bytecodes flow through the intermediate representation and optimizer to either the CPU code generator (native code) or the GPU code generator (PTX ISA).)

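This is the kind of code such a JIT can recognize for offload: a parallel stream over a primitive range applying a simple element-wise lambda with no object references. On a GPU-enabled IBM JDK this may be compiled down to a GPU kernel; elsewhere it runs as an ordinary parallel CPU loop, and the result is the same either way.

```java
import java.util.stream.IntStream;

final class Saxpy {
    // y[i] = a * x[i] + y[i] — the classic element-wise kernel.
    static void saxpy(float a, float[] x, float[] y) {
        IntStream.range(0, x.length).parallel()
                 .forEach(i -> y[i] = a * x[i] + y[i]);
    }
}
```

Keeping the lambda body down to primitive loads, arithmetic, and stores is what makes it mappable onto GPU kernel operations under the caveats listed above.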
• 29. GPU optimization of lambda expressions
 The JIT can recognize parallel stream code, and automatically compile it down to the GPU.
(Chart: speed-up factor vs. matrix size for auto-SIMD, parallel forEach on CPU, and parallel forEach on GPU; log scale from 0.01x to 1000x, on IBM Power 8 with an Nvidia K40m GPU.)

• 30. Moving high-level algorithms onto the GPU
(Diagram: a drug-interaction prediction pipeline. Known interaction tables and chemical-similarity tables (e.g. Salsalate/Aspirin 0.9, Dicoumarol/Warfarin 0.76) are ingested; a learning stage builds a logistic regression model from pairwise best-similarity features; a prediction stage then scores candidate drug pairs such as Salsalate/Gliclazide and Salsalate/Warfarin.)

• 31. 25x speed-up for the model-building stage (replacing Spark MLlib logistic regression). Transparent to the Spark application, but requires changes to Spark itself.

• 32. Summary
 We are focused on core runtime performance, to get a multiplier up the Spark stack.
– More efficient code, more efficient memory usage/spilling, more efficient serialization and networking, etc.
 There are hardware and software technologies we can bring to the party.
– We can tune the stack from the hardware up to the high-level structures for running Spark.
 Spark and Scala developers can help themselves by their style of coding.
 All the changes are being made in the Java runtime or being pushed out to the Spark community.
 There is lots more I don't have time to talk about: GC optimizations, object layout, monitoring VM/Spark events, hardware compression, security, etc.
– See: http://ibm.biz/spark-kit