1. A look under the hood at Apache
Spark's API and engine evolutions
Reynold Xin @rxin
2017-02-08, Amsterdam Meetup
2. About Databricks
Founded by creators of Spark
Cloud data platform
- Spark
- Interactive analysis
- Cluster management
- Production pipelines
- Data governance & security
3. Databricks Amsterdam R&D Center
Started in January
Hiring distributed systems &
database engineers!
Email me: rxin@databricks.com
9. IBM IMS hierarchical database (1966)
Image from https://stratechery.com/2016/oracles-cloudy-future/
10. Hierarchical Database
- Improvement over file system: query language & catalog
- Lack of flexibility
- Difficult to query items in different parts of the hierarchy
- Relationships are pre-determined and difficult to change
12. “Future users of large data banks must be protected from having to
know how the data is organized in the machine. …
most application programs should remain unaffected when the
internal representation of data is changed and even when some
aspects of the external representation are changed.”
— E. F. Codd, “A Relational Model of Data for Large Shared Data Banks” (1970)
13. Era of relational databases (late 60s)
Two “new” important ideas
Physical Data Independence: The ability to change the physical data
layout without having to change the logical schema.
Declarative Query Language: Programmer specifies “what” rather than
“how”.
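The “what” vs “how” contrast can be sketched with Python’s built-in sqlite3 module (the table and values here are hypothetical, purely for illustration): the SQL version states the desired result and lets the engine choose the access path, while the loop spells out every step.

```python
import sqlite3

# Hypothetical toy table to contrast declarative vs procedural styles.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("ada", 36), ("bob", 17), ("cyd", 52)])

# Declarative: state *what* you want; the engine decides *how*.
adults_sql = conn.execute(
    "SELECT count(*) FROM users WHERE age >= 18").fetchone()[0]

# Procedural: spell out *how* to compute it, step by step.
adults_loop = 0
for name, age in conn.execute("SELECT name, age FROM users"):
    if age >= 18:
        adults_loop += 1

assert adults_sql == adults_loop == 2
```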
14. Why?
Business applications outlive the environments they were created in:
- New requirements might surface
- Underlying hardware might change
- Require physical layout changes (indexing, different storage medium, etc)
Enabled tremendous amount of innovation:
- Indexes, compression, column stores, etc
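Physical data independence is easy to demonstrate with sqlite3 (again with a hypothetical toy table): adding an index changes the physical layout and the access path, but the logical query text and its answer stay the same.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, duration REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i % 10, float(i)) for i in range(100)])

query = "SELECT sum(duration) FROM events WHERE user_id = 3"
before = conn.execute(query).fetchone()[0]

# Change the physical layout: add an index. The logical query is untouched.
conn.execute("CREATE INDEX idx_user ON events(user_id)")
after = conn.execute(query).fetchone()[0]

assert before == after == 480.0  # same answer, different access path
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
assert any("idx_user" in row[-1] for row in plan)  # index is now used
```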
15. Relational Database Pros vs Cons
Pros:
- Declarative and data independent
- SQL is the universal interface everybody knows
Cons:
- Difficult to compose & build complex applications
- Too opinionated and inflexible
- Require data modeling before putting any data in
- SQL is the only programming language
17. Challenges Google faced
Data size growing (volume & velocity)
- Processing has to scale out over large clusters
Complexity of analysis increasing (variety)
- Massive ETL (web crawling)
- Machine learning, graph processing
18. The Big Data Problem
Semi-/Un-structured data doesn’t fit well with databases
Single machine can no longer process or even store all the data!
Only solution is to distribute general storage & processing over
clusters.
20. Data-Parallel Models
Restrict the programming interface so that the system can do more
automatically
“Here’s an operation, run it on all of the data”
- I don’t care where it runs (you schedule that)
- In fact, feel free to run it twice on different nodes
- Similar to “declarative programming” in databases
21.
22. MapReduce Pros vs Cons
Pros:
- Massively parallel
- Very flexible programming model & schema-on-read
Cons:
- Extremely verbose & difficult to learn
- Most real applications require multiple MR steps
  - 21 MR steps -> 21 mapper and reducer classes
- Lots of boilerplate code per step
- Bad performance
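The per-step boilerplate is visible even in a toy single-machine sketch of the MapReduce model (hypothetical class and function names, not Hadoop’s actual API): even word count needs its own mapper and reducer class, and every additional step would need another pair.

```python
from collections import defaultdict

# One MR step = one mapper class + one reducer class, even for word count.
class WordCountMapper:
    def map(self, record):
        for word in record.split():
            yield (word, 1)

class WordCountReducer:
    def reduce(self, key, values):
        return (key, sum(values))

def run_mapreduce(records, mapper, reducer):
    groups = defaultdict(list)
    for record in records:                  # map phase
        for k, v in mapper.map(record):
            groups[k].append(v)             # shuffle: group by key
    return dict(reducer.reduce(k, vs) for k, vs in groups.items())  # reduce

counts = run_mapreduce(["a b a", "b c"], WordCountMapper(), WordCountReducer())
assert counts == {"a": 2, "b": 2, "c": 1}
```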
24. Data frames in R / Python
> head(filter(df, df$waiting < 50)) # an example in R
## eruptions waiting
##1 1.750 47
##2 1.750 47
##3 1.867 48
Developed by the stats community; concise syntax for ad-hoc analysis
Procedural (not declarative)
25. R data frames Pros and Cons
Pros:
- Easy to learn
- Pretty fast on a laptop (or one server)
Cons:
- No parallelism & doesn’t work well on big data
- Lack sophisticated query optimization
27. Which one is better?
Databases, R, MapReduce?
Declarative, procedural, data independence?
28. Spark’s initial focus: a better MapReduce
Language-integrated API (RDD): similar to Scala’s collection library
using functional programming; incredibly powerful and composable
lines = spark.textFile("hdfs://...") // RDD[String]
points = lines.map(line => parsePoint(line)) // RDD[Point]
points.filter(p => p.x > 100).count()
Better performance: through a more general DAG abstraction, faster
scheduling, and in-memory caching
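The flavor of that RDD pipeline can be mimicked in plain Python with lazy iterators (a hypothetical `Point` type and `parse_point` stand in for the slide’s `parsePoint`): transformations build a pipeline, and nothing executes until the terminal `count()`-style action.

```python
from typing import NamedTuple, Iterator

class Point(NamedTuple):
    x: float
    y: float

def parse_point(line: str) -> Point:
    x, y = line.split(",")
    return Point(float(x), float(y))

# Lazy pipeline in the spirit of the RDD example: nothing runs until counted.
lines = iter(["120,1", "80,2", "150,3"])           # stands in for textFile()
points: Iterator[Point] = map(parse_point, lines)  # map(parsePoint)
big = filter(lambda p: p.x > 100, points)          # filter(p => p.x > 100)
count = sum(1 for _ in big)                        # count() triggers execution
assert count == 2
```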
30. Challenge with Functional API
Looks high-level, but hides many semantics of computation
• Functions are arbitrary blocks of Java bytecode
• Data stored is arbitrary Java objects
Users can mix APIs in suboptimal ways
32. Example Problem
pairs = data.map(word => (word, 1))
groups = pairs.groupByKey()
groups.map { case (k, vs) => (k, vs.sum) }
Physical API: materializes all groups as Seq[Int] objects,
then promptly aggregates them
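The waste here can be shown in a small pure-Python sketch (toy data, not Spark’s implementation): the groupByKey style buffers every value per key in memory before aggregating, while a reduceByKey style folds into one running sum per key and produces the same answer.

```python
from collections import defaultdict

words = ["a", "b", "a", "a", "b"]
pairs = [(w, 1) for w in words]

# groupByKey-style: materialize every group as a full list first...
groups = defaultdict(list)
for k, v in pairs:
    groups[k].append(v)          # all values buffered in memory per key
counts_grouped = {k: sum(vs) for k, vs in groups.items()}

# reduceByKey-style: fold into a running sum, one integer per key.
counts_reduced = defaultdict(int)
for k, v in pairs:
    counts_reduced[k] += v

assert counts_grouped == dict(counts_reduced) == {"a": 3, "b": 2}
```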
33. Challenge: Data Representation
Java objects often many times larger than underlying fields
class User(name: String, friends: Array[Int])
new User("Bobby", Array(1, 2))
[Diagram: the in-memory object graph — a User object holding pointers to a
separate String object ("Bobby", backed by a char[]) and a separate int[]
object — with per-object headers and pointers dwarfing the actual field data.]
34. Recap: two primary issues
1. Many APIs specify the “physical” behavior rather than the
“logical” intent, i.e. they are not declarative enough.
2. Closures (user-defined functions and types) are opaque to the
engine, and as a result there is little room for optimization.
35. Sort Benchmark
Originally sponsored by Jim Gray in 1987 to measure advancements in
software and hardware
Participants often used purpose-built hardware/software to compete
• Large companies: IBM, Microsoft, Yahoo, …
• Academia: UC Berkeley, UCSD, MIT, …
37. Winning Attempt
Built on low-level Spark API and:
- Put all data in off-heap memory using sun.misc.Unsafe
- Use tight low-level while loops rather than iterators
~3000 lines of low-level code on Spark written by Reynold
38. On-Disk Sort Record: time to sort 100 TB
2013 Record: Hadoop — 72 minutes, 2100 machines
2014 Record: Spark — 23 minutes, 207 machines
Also sorted 1 PB in 4 hours
Source: Daytona GraySort benchmark, sortbenchmark.org
39. How do we enable the
average users to win a
world record, using a few
lines of code?
40. Goals of the last two years’ API evolution
1. Simpler APIs bridging the gap between big data engineering and
data science.
2. Higher level, declarative APIs that are future proof (engine can
optimize programs automatically).
Taking the best ideas from databases, big data, and data science
42. DataFrames and Spark SQL
Efficient library for structured data (data with a known schema)
• Two interfaces: SQL for analysts + apps, DataFrames for programmers
Optimized computation and storage, similar to RDBMS
SIGMOD 2015
44. DataFrame API
DataFrames hold rows with a known schema and offer relational
operations on them through a DSL
val users = spark.sql("select * from users")
val massUsers = users(users("country") === "NL")
massUsers.count()
massUsers.groupBy("name").avg("age")
Expression AST
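How `users("country") === "NL"` can build an expression AST instead of executing immediately can be sketched in a few lines (hypothetical `Column`, `Literal`, and `Predicate` classes; Spark’s actual Catalyst expression classes are richer): the comparison operator returns a tree node the engine can inspect and optimize, and evaluation happens later.

```python
# Minimal expression-AST sketch: comparisons build nodes, not booleans.
class Column:
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):               # plays the role of Scala's ===
        return Predicate("=", self, Literal(other))

class Literal:
    def __init__(self, value):
        self.value = value

class Predicate:
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def evaluate(self, row):
        return row[self.left.name] == self.right.value

expr = Column("country") == "NL"           # builds an AST node, no execution
assert isinstance(expr, Predicate)
assert expr.evaluate({"country": "NL"}) is True
assert expr.evaluate({"country": "US"}) is False
```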
47. Spark DataFrame Execution
Python, Java/Scala, and R DataFrames: simple wrappers that create a logical plan
Logical Plan: intermediate representation for computation
Catalyst optimizer: rewrites the logical plan
Physical execution
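The optimizer’s rule-based rewriting can be sketched in miniature (hypothetical plan-node classes and a single rule, not Spark’s actual Catalyst code): a filter sitting above a projection is pushed below it when the projection keeps the filtered column, so less data flows through the plan.

```python
# Toy logical plan nodes plus one Catalyst-style rewrite rule.
class Scan:
    def __init__(self, table): self.table = table

class Project:
    def __init__(self, child, cols): self.child, self.cols = child, cols

class Filter:
    def __init__(self, child, col): self.child, self.col = child, col

def push_down_filter(plan):
    # Filter(Project(x)) -> Project(Filter(x)) when the column survives.
    if (isinstance(plan, Filter) and isinstance(plan.child, Project)
            and plan.col in plan.child.cols):
        proj = plan.child
        return Project(Filter(proj.child, plan.col), proj.cols)
    return plan

plan = Filter(Project(Scan("users"), ["country", "age"]), "country")
optimized = push_down_filter(plan)
assert isinstance(optimized, Project)           # projection is now on top
assert isinstance(optimized.child, Filter)      # filter pushed below it
assert isinstance(optimized.child.child, Scan)  # filter applied at the scan
```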
48. Structured API Example
DataFrame API:
events = spark.read.json("/logs")
stats = events.join(users)
  .groupBy("loc", "status")
  .avg("duration")
errors = stats.where(stats.status == "ERR")
Optimized Plan:
SCAN logs + SCAN users -> JOIN -> AGG -> FILTER
Specialized Code:
while(logs.hasNext) {
  e = logs.next
  if(e.status == "ERR") {
    u = users.get(e.uid)
    key = (u.loc, e.status)
    sum(key) += e.duration
    count(key) += 1
  }
}
...
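The spirit of that specialized code can be checked with a runnable Python version (the users/logs data here is hypothetical toy data): one fused loop performs the join, filter, and grouped average together, rather than materializing each operator’s output.

```python
from collections import defaultdict

users = {1: "AMS", 2: "NYC"}                       # uid -> loc
logs = [(1, "ERR", 10.0), (1, "ERR", 30.0), (2, "OK", 5.0), (2, "ERR", 8.0)]

sums, counts = defaultdict(float), defaultdict(int)
for uid, status, duration in logs:                 # one fused loop
    if status == "ERR":                            # FILTER
        key = (users[uid], status)                 # JOIN + GROUP BY key
        sums[key] += duration                      # AGG: running sum
        counts[key] += 1                           # AGG: running count

avg = {k: sums[k] / counts[k] for k in sums}
assert avg == {("AMS", "ERR"): 20.0, ("NYC", "ERR"): 8.0}
```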
49. Benefit of Logical Plan: Simpler Frontend
Python : ~2000 lines of code (built over a weekend)
R : ~1000 lines of code
i.e. much easier to add new language bindings (Julia, Clojure, …)
50. Performance
[Chart: runtime for an example aggregation workload with the RDD API —
Java/Scala vs Python.]
51. Benefit of Logical Plan: Performance Parity Across Languages
[Chart: runtime for an example aggregation workload (secs). With the RDD
API, Java/Scala is far faster than Python; with the DataFrame API,
Java/Scala, Python, R, and SQL all run in roughly the same time.]
52. What are Spark’s structured APIs?
Combination of:
- data frame from R as the “interface” – easy to learn
- declarativity & data independence from databases -- easy to optimize &
future-proof
- flexibility & parallelism from MapReduce -- massively scalable & flexible
53. Future possibilities
Spark as a fast, multi-core data collection library
Spark as a performant streaming engine
Spark as a GPU computation framework
All using the same API
54. Unified API, One Engine, Automatically Optimized
[Diagram: language frontends (Python, Java/Scala, R, SQL, …) wrap the
DataFrame API, which compiles to a Logical Plan executed by the Tungsten
backend targeting JVM, LLVM, SIMD, and GPUs.]
55. Recap
We learn from previous-generation systems what works and what can be
improved on, and evolve Spark accordingly
Latest APIs take the best ideas out of earlier systems
- data frame from R as the “interface” – easy to learn
- declarativity & data independence from databases -- easy to optimize &
future-proof
- flexibility & parallelism from MapReduce -- massively scalable & flexible