This document examines the custom memory management techniques in Spark that provide performance benefits over the standard JVM object model. It covers how Spark uses off-heap memory allocated through sun.misc.Unsafe, a compact binary row format, and runtime code generation to process data more efficiently in memory. These techniques allow Spark to avoid per-object overhead, reduce garbage collection costs, and optimize operations such as aggregation by working directly on binary data, which can significantly reduce processing time and GC overhead compared to the standard JVM approach.
Anatomy of in-memory processing in Spark
1. Anatomy of in-memory processing in Spark
A deep-dive into custom memory management in Spark
https://github.com/shashankgowdal/introduction_to_dataset
2. ● Shashank L
● Big data consultant and trainer at datamantra.io
● www.shashankgowda.com
3. Agenda
● Era of in-memory processing
● Big data frameworks on JVM
● JVM memory model
● Custom memory management
● Allocation
● Serialization
● Processing
● Benefits of these on Spark
4. Era of in-memory processing
● After Spark, in-memory processing has become a de facto standard for big data workloads
● Advances in hardware are pushing more frameworks in that direction
● Memory management is coupled with the runtime of the framework
● Using memory efficiently in a big data workload is a challenging task
● Memory management depends on the runtime of the framework
5. Why the JVM is a prominent runtime for big data workloads
● Managed runtime
● Portable
● Hadoop was built on the JVM
● Rich ecosystem
6. Big data frameworks on JVM
● Many frameworks run on the JVM today
○ Spark
○ Flink
○ Hadoop
○ etc
● Organising data in memory
○ In-memory processing
○ In-memory caching of intermediate results
● Memory management influences
○ Resource efficiency
○ Performance
7. Straightforward approach
● JVM memory model approach
● Store a collection of objects and perform any processing on the collection
● Advantages
○ Eases the development cycle
○ Built-in safety checks before modifying any memory
○ Reduces complexity
○ JVM's built-in GC
8. JVM memory model - Disadvantages
● Predicting memory consumption is hard
○ If it fails, an OutOfMemoryError kills the JVM
● High garbage collection overhead
○ Easily 50% of the time can be spent in GC
● Objects have space overhead
○ JVM objects don't take the amount of memory we would expect
9. Java object overhead
● Consider the string “abcd” as a JVM object. At first glance it should take up 4 bytes of memory (one per character).
● In reality it takes far more: the String object header, the reference to a backing character array (with its own header and length field), 2 bytes per character and 8-byte padding add up to roughly 48 bytes on a typical 64-bit JVM.
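The per-object cost can be inspected directly. A minimal sketch, assuming the OpenJDK JOL library (`org.openjdk.jol:jol-core`, not something used in the talk), that prints the actual footprint of the string "abcd":

```scala
// Sketch: measuring the real size of the string "abcd" with OpenJDK JOL
// (assumes the jol-core dependency on the classpath; illustrative only)
import org.openjdk.jol.info.GraphLayout

object StringFootprint {
  def main(args: Array[String]): Unit = {
    val s = "abcd"
    val layout = GraphLayout.parseInstance(s)
    println(layout.toFootprint)                      // per-class breakdown: String + its backing array
    println(s"total bytes: ${layout.totalSize()}")   // far more than the 4 bytes we expected
  }
}
```

On a 64-bit JVM this reports tens of bytes rather than 4, which is exactly the space overhead the slide is pointing at.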
10. Garbage collection challenges
● Many big data workloads create objects in a way that is unfriendly to the regular Java GC
● Young-generation garbage collection is frequent
● Objects created in big data workloads tend to die in the young generation itself, because each record is used only a few times
11. Generality has a cost, so semantics and schema should be used to exploit specificity instead
12. Custom memory management
● Allocation - allocate a fixed number of memory segments upfront
● Serialization - serialize data objects into the memory segments
● Processing - implement algorithms on the binary representation
14. Managing memory on our own
● sun.misc.Unsafe
● Directly manipulates memory without safety checks (hence, it's unsafe)
● This API is used to build off-heap data structures in Spark
15. sun.misc.Unsafe
● Unsafe is one of the gateways to low-level programming in Java
● Exposes C-style memory access (see the sketch below)
● Explicit allocation, deallocation and pointer arithmetic
● Unsafe methods are intrinsic: the JIT compiler replaces them with direct machine instructions
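A minimal sketch of the C-style access Unsafe exposes: grab the `theUnsafe` singleton via reflection (its constructor is private), allocate a raw off-heap block, write and read longs at explicit addresses with pointer arithmetic, then free the block explicitly. This is an illustration of the API, not Spark code.

```scala
import sun.misc.Unsafe

object UnsafeSketch {
  // Unsafe cannot be constructed directly; the usual workaround is reflection
  private val unsafe: Unsafe = {
    val field = classOf[Unsafe].getDeclaredField("theUnsafe")
    field.setAccessible(true)
    field.get(null).asInstanceOf[Unsafe]
  }

  def main(args: Array[String]): Unit = {
    val size = 1024 * 8L                        // room for 1024 longs
    val address = unsafe.allocateMemory(size)   // explicit allocation, off the JVM heap
    try {
      unsafe.putLong(address, 42L)              // write at a raw address
      unsafe.putLong(address + 8, 43L)          // pointer arithmetic: the next 8-byte slot
      println(unsafe.getLong(address))          // 42
      println(unsafe.getLong(address + 8))      // 43
    } finally {
      unsafe.freeMemory(address)                // explicit deallocation; the GC never sees this memory
    }
  }
}
```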
18. Custom memory management in Spark
On heap
● Stores data inside arrays of type long
● A single long[] array can address up to about 16 GB (2^31 elements × 8 bytes)
● Bytes are encoded into longs and stored there
Off heap
● Allocates memory outside the JVM heap, from the memory assigned to the process
● Uses the Unsafe API
● Stores bytes directly
19. Encoding memory addresses
● Off heap: addresses are raw memory pointers
● On heap: addresses are (base object, offset) pairs
● Spark uses its own page table abstraction to enable a more compact encoding of on-heap addresses (see the sketch below)
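The two addressing modes can be sketched with Unsafe itself (Spark hides this behind its own allocators and page-table abstraction; the code below is an illustration, not Spark's classes). On-heap data lives inside a long[] and is addressed as a (base object, offset) pair, while off-heap data uses a null base object and a raw pointer:

```scala
import sun.misc.Unsafe

object AddressingSketch {
  private val unsafe: Unsafe = {
    val field = classOf[Unsafe].getDeclaredField("theUnsafe")
    field.setAccessible(true)
    field.get(null).asInstanceOf[Unsafe]
  }

  def main(args: Array[String]): Unit = {
    // On heap: (base object, offset). The data sits inside a GC-managed long[].
    val page = new Array[Long](128)
    val base = unsafe.arrayBaseOffset(classOf[Array[Long]]).toLong
    unsafe.putLong(page, base + 3 * 8, 123L)        // write slot 3
    println(unsafe.getLong(page, base + 3 * 8))     // 123

    // Off heap: raw pointer, base object is null.
    val addr = unsafe.allocateMemory(128 * 8)
    unsafe.putLong(null, addr + 3 * 8, 123L)
    println(unsafe.getLong(null, addr + 3 * 8))     // 123
    unsafe.freeMemory(addr)
  }
}
```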
22. Java object-based row format
● 3 fields of type (int, string, string) with values (123, “data”, “mantra”)
➔ 5+ JVM objects
➔ High space overhead
➔ Expensive hashCode()
23. Tungsten’s unsafe row format
● A bitset for tracking null values
● Every column appears in the fixed-length value region
○ Fixed-length values are inlined
○ For variable-length values, we store a relative offset into the variable-length data section
● Rows are always 8-byte aligned
● Equality comparison can be done on the raw bytes
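As an illustration of that layout, the sketch below hand-encodes the row (123, "data", "mantra") from the previous slide into a single byte buffer: an 8-byte null bitset, one 8-byte slot per column (the int inlined, each string slot packing a relative offset and length), and an 8-byte-aligned variable-length section. This is a simplified sketch of the idea, not Spark's actual UnsafeRow class.

```scala
import java.nio.{ByteBuffer, ByteOrder}
import java.nio.charset.StandardCharsets

// Simplified sketch of a Tungsten-style row holding (123, "data", "mantra")
object UnsafeRowSketch {
  def main(args: Array[String]): Unit = {
    val strings     = Seq("data", "mantra").map(_.getBytes(StandardCharsets.UTF_8))
    val numFields   = 3
    val bitsetBytes = 8                                             // one word tracks nulls for up to 64 columns
    val fixedBytes  = numFields * 8                                 // one 8-byte slot per column
    val varBytes    = strings.map(b => (b.length + 7) / 8 * 8).sum  // variable section, 8-byte aligned
    val buf = ByteBuffer.allocate(bitsetBytes + fixedBytes + varBytes)
                        .order(ByteOrder.LITTLE_ENDIAN)

    buf.putLong(0, 0L)                        // null bitset: no nulls in this row
    buf.putLong(8, 123L)                      // column 0: the int is inlined in its fixed slot

    var varOffset = bitsetBytes + fixedBytes  // variable-length data starts after the fixed region
    strings.zipWithIndex.foreach { case (bytes, i) =>
      // columns 1 and 2: pack (relative offset, length) into the fixed-length slot
      buf.putLong(16 + i * 8, (varOffset.toLong << 32) | bytes.length.toLong)
      buf.position(varOffset)
      buf.put(bytes)
      varOffset += (bytes.length + 7) / 8 * 8
    }

    // Reading column 2 back: unpack offset/length from its slot, then slice the raw bytes
    val slot = buf.getLong(16 + 1 * 8)
    val off  = (slot >>> 32).toInt
    val len  = (slot & 0xFFFFFFFFL).toInt
    val word = new Array[Byte](len)
    buf.position(off); buf.get(word)
    println(new String(word, StandardCharsets.UTF_8))   // "mantra"
  }
}
```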
29. Many big data workloads are now compute bound
● Network optimizations can only reduce job completion time by a median of at most 2%
● Optimizing or eliminating disk accesses can only reduce job completion time by a median of at most 19%
● [1]
31. Why is CPU the new bottleneck?
● Hardware has improved
○ 1 Gbps to 10 Gbps network links
○ High-bandwidth SSDs or striped HDD arrays
● Spark I/O has been optimized
○ Many workloads now avoid significant disk I/O by pruning data that is not needed in a given job
○ New shuffle and network layer implementations
● Data formats have improved
○ Parquet and other binary data formats
● Serialization and hashing are CPU-bound bottlenecks
32. Code generation
● Generic evaluation of expression logic is very expensive on the JVM
○ Virtual function calls
○ Branches based on expression type
○ Object creation due to primitive boxing
○ Memory consumption by boxed primitive objects
● Instead, Spark generates code that applies the expression logic directly to the serialized data (see the sketch below)
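The sketch below illustrates the gap with a toy expression, `col0 + 1`: the generic path walks an expression tree with virtual calls, runtime type branches and boxed values for every row, while the "generated" path is the single specialized primitive function that runtime code generation would emit. The classes are purely illustrative, not Spark's Expression API.

```scala
// Toy illustration of why generated code beats generic expression interpretation.
sealed trait Expr { def eval(row: Array[Any]): Any }            // generic, returns boxed values
case class Col(i: Int)           extends Expr { def eval(row: Array[Any]): Any = row(i) }
case class Lit(v: Any)           extends Expr { def eval(row: Array[Any]): Any = v }
case class Add(l: Expr, r: Expr) extends Expr {
  // virtual calls + branch on runtime type + Integer boxing on every row
  def eval(row: Array[Any]): Any = (l.eval(row), r.eval(row)) match {
    case (a: Int, b: Int) => a + b
    case other            => sys.error(s"unsupported operands: $other")
  }
}

object CodegenSketch {
  def main(args: Array[String]): Unit = {
    val rows = Array.tabulate(1000)(i => Array[Any](i))

    // Interpreted: walk the expression tree once per row
    val expr        = Add(Col(0), Lit(1))
    val interpreted = rows.map(r => expr.eval(r).asInstanceOf[Int]).sum

    // "Generated": the same logic collapsed into one specialized primitive function,
    // the shape of code that runtime code generation compiles and runs
    val generated = rows.map(r => r(0).asInstanceOf[Int] + 1).sum

    println(s"$interpreted == $generated")
  }
}
```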
33. Which Spark APIs can benefit
● Spark DataFrames
● Spark SQL
● RDD
34. Why do only DataFrames benefit?
[Diagram: DataFrames in Python, Java/Scala and R, as well as Spark SQL, all produce a logical plan that passes through the Catalyst optimizer before physical execution, whereas the RDD API goes directly to physical execution and bypasses the optimizer.]
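For example, the same aggregation written against the DataFrame API is planned by Catalyst and executed on Tungsten's binary rows with generated code, while the RDD version operates on plain JVM objects and gets none of these optimizations. A small sketch, assuming a local SparkSession:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

object DataFrameVsRdd {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("df-vs-rdd").getOrCreate()
    import spark.implicits._

    val df = Seq(("a", 1), ("b", 2), ("a", 3)).toDF("key", "value")

    // DataFrame: planned by Catalyst, executed on Tungsten's binary rows with codegen
    df.groupBy("key").agg(sum("value")).show()

    // RDD: same logic on JVM objects; no optimizer, no binary format, no codegen
    val rdd = spark.sparkContext.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
    rdd.reduceByKey(_ + _).collect().foreach(println)

    spark.stop()
  }
}
```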