3. What is Spark?
Developed in 2009 at UC Berkeley AMPLab, then
open sourced in 2010, Spark has since become
one of the largest OSS communities in big data,
with over 200 contributors in 50+ organizations
spark.apache.org
“Organizations that are looking at big data challenges –
including collection, ETL, storage, exploration and analytics –
should consider Spark for its in-memory performance and
the breadth of its model. It supports advanced analytics
solutions on Hadoop clusters, including the iterative model
required for machine learning and graph analysis.”
Gartner, Advanced Analytics and Data Science (2014)
3
5. What is Spark?
Spark Core is the general execution engine for the
Spark platform that other functionality is built atop:
• in-memory computing capabilities deliver speed
• general execution model supports wide variety
of use cases
• ease of development – native APIs in Java, Scala,
Python (+ SQL, Clojure, R)
5
6. What is Spark?
WordCount in 3 lines of Spark
WordCount in 50+ lines of Java MR
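For comparison, the shape of that three-line program can be sketched in plain Python (no Spark involved) — flatMap, map, and reduceByKey collapse into a split and a Counter:

```python
from collections import Counter

def word_count(lines):
    # flatMap: split every line into words
    words = (w for line in lines for w in line.split())
    # map + reduceByKey: pair each word with 1, then sum per key
    return Counter(words)

counts = word_count(["to be or not", "to be"])
# e.g. counts["to"] == 2, counts["be"] == 2
```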
6
7. What is Spark?
Sustained exponential growth, as one of the most
active Apache projects ohloh.net/orgs/apache
7
8. TL;DR: Smashing The Previous Petabyte Sort Record
databricks.com/blog/2014/11/05/spark-officially-sets-
a-new-record-in-large-scale-sorting.html
8
10. A Brief History: Functional Programming for Big Data
Theory, Eight Decades Ago:
what can be computed?
Haskell Curry (haskell.org), Alonzo Church (wikipedia.org)

Praxis, Four Decades Ago:
algebra for applicative systems
John Backus (acm.org), David Turner (wikipedia.org)

Reality, Two Decades Ago:
machine data from web apps
Pattie Maes (MIT Media Lab)
10
11. A Brief History: Functional Programming for Big Data
circa late 1990s:
explosive growth of e-commerce and machine data
meant that workloads could no longer fit on a
single computer…
notable firms led the shift to horizontal scale-out
on clusters of commodity hardware, especially
for machine learning use cases at scale
11
13. Amazon
“Early Amazon: Splitting the website” – Greg Linden
glinden.blogspot.com/2006/02/early-amazon-splitting-
website.html
eBay
“The eBay Architecture” – Randy Shoup, Dan Pritchett
addsimplicity.com/adding_simplicity_an_engi/
2006/11/you_scaled_your.html
addsimplicity.com.nyud.net:8080/downloads/
eBaySDForum2006-11-29.pdf
Inktomi (YHOO Search)
“Inktomi’s Wild Ride” – Eric Brewer (0:05:31 ff)
youtu.be/E91oEn1bnXM
Google
“Underneath the Covers at Google” – Jeff Dean (0:06:54 ff)
youtu.be/qsan-GQaeyk
perspectives.mvdirona.com/2008/06/11/
JeffDeanOnGoogleInfrastructure.aspx
MIT Media Lab
“Social Information Filtering for Music Recommendation” – Pattie Maes
pubs.media.mit.edu/pubs/papers/32paper.ps
ted.com/speakers/pattie_maes.html
13
14. A Brief History: Functional Programming for Big Data
circa 2002:
mitigate risk of large distributed workloads lost
due to disk failures on commodity hardware…
Google File System
Sanjay Ghemawat, Howard Gobioff, Shun-Tak Leung
research.google.com/archive/gfs.html
MapReduce: Simplified Data Processing on Large Clusters
Jeffrey Dean, Sanjay Ghemawat
research.google.com/archive/mapreduce.html
14
17. A Brief History: Functional Programming for Big Data
2002: MapReduce @ Google
2004: MapReduce paper
2006: Hadoop @ Yahoo!
2008: Hadoop Summit
2010: Spark paper
2014: Apache Spark becomes an Apache top-level project
17
18. A Brief History: Functional Programming for Big Data
General Batch Processing: MapReduce

Specialized Systems (iterative, interactive, streaming, graph, etc.):
Pregel, Giraph, Dremel, Drill, Impala, S4, Storm, F1, MillWheel, GraphLab, Tez

MR doesn’t compose well for large applications,
and so specialized systems emerged as workarounds
18
19. A Brief History: Functional Programming for Big Data
circa 2010:
a unified engine for enterprise data workflows,
based on commodity hardware a decade later…
Spark: Cluster Computing with Working Sets
Matei Zaharia, Mosharaf Chowdhury,
Michael Franklin, Scott Shenker, Ion Stoica
people.csail.mit.edu/matei/papers/2010/hotcloud_spark.pdf
Resilient Distributed Datasets: A Fault-Tolerant Abstraction for
In-Memory Cluster Computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave,
Justin Ma, Murphy McCauley, Michael Franklin, Scott Shenker, Ion Stoica
usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf
19
20. A Brief History: Functional Programming for Big Data
In addition to simple map and reduce operations,
Spark supports SQL queries, streaming data, and
complex analytics such as machine learning and
graph algorithms out-of-the-box.
Better yet, combine these capabilities seamlessly
into one integrated workflow…
20
21. A Brief History: Key distinctions for Spark vs. MapReduce
• generalized patterns
⇒ unified engine for many use cases
• lazy evaluation of the lineage graph
⇒ reduces wait states, better pipelining
• generational differences in hardware
⇒ off-heap use of large memory spaces
• functional programming / ease of use
⇒ reduction in cost to maintain large apps
• lower overhead for starting jobs
• less expensive shuffles
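Lazy evaluation of the lineage graph is easy to demonstrate outside Spark. The sketch below uses Python generators to mimic a chain of transformations that does no work until an "action" consumes it (illustrative only — not how Spark is implemented):

```python
def transform_log():
    executed = []

    def source():
        for line in ["ERROR db down", "INFO ok", "ERROR disk full"]:
            executed.append(line)   # record that the line was actually read
            yield line

    # "transformations": building these generators does no work yet
    errors = (l for l in source() if l.startswith("ERROR"))
    words = (l.split()[1] for l in errors)

    assert executed == []           # nothing has run: evaluation is lazy

    # "action": consuming the pipeline triggers the whole chain at once
    result = list(words)
    return result, executed

result, executed = transform_log()
# result == ["db", "disk"]; all three lines were read, pipelined in one pass
```

Because the whole chain runs only at the action, consecutive steps can be pipelined with no intermediate materialization — the property the bullet above credits for fewer wait states.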
21
23. Spark Deconstructed: Log Mining Example
// load error messages from a log into memory
// then interactively search for various patterns
// https://gist.github.com/ceteri/8ae5b9509a08c08a1132

// base RDD
val lines = sc.textFile("hdfs://...")

// transformed RDDs
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split("\t")).map(r => r(1))
messages.cache()

// action 1
messages.filter(_.contains("mysql")).count()

// action 2
messages.filter(_.contains("php")).count()
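To see what the pipeline computes without a cluster, here is a pure-Python rendering of the same steps over a few hypothetical log lines (the "cache" step is a no-op here, since the list is already in memory):

```python
log = [
    "ERROR\tmysql connection refused",
    "INFO\tstartup complete",
    "ERROR\tphp fatal error",
    "ERROR\tmysql timeout",
]

# transformations: keep ERROR lines, extract the message field
errors = [line for line in log if line.startswith("ERROR")]
messages = [line.split("\t")[1] for line in errors]

# action 1: count messages mentioning mysql
mysql_hits = sum(1 for m in messages if "mysql" in m)   # 2

# action 2: count messages mentioning php
php_hits = sum(1 for m in messages if "php" in m)       # 1
```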
23
(diagram: one Driver coordinating three Workers)
Spark Deconstructed: Log Mining Example
We start with Spark running on a cluster…
submitting code to be evaluated on it:
24
25. Spark Deconstructed: Log Mining Example
// base RDD
val lines = sc.textFile("hdfs://...")

// transformed RDDs
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split("\t")).map(r => r(1))
messages.cache()

// action 1
messages.filter(_.contains("mysql")).count()

// action 2 (discussing the other part)
messages.filter(_.contains("php")).count()
25
26. Spark Deconstructed: Log Mining Example
At this point, take a look at the transformed
RDD operator graph:
scala> messages.toDebugString
res5: String =
MappedRDD[4] at map at <console>:16 (3 partitions)
  MappedRDD[3] at map at <console>:16 (3 partitions)
    FilteredRDD[2] at filter at <console>:14 (3 partitions)
      MappedRDD[1] at textFile at <console>:12 (3 partitions)
        HadoopRDD[0] at textFile at <console>:12 (3 partitions)
26
(diagram: one Driver coordinating three Workers)
Spark Deconstructed: Log Mining Example
// base RDD
val lines = sc.textFile("hdfs://...")

// transformed RDDs
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split("\t")).map(r => r(1))
messages.cache()

// action 1
messages.filter(_.contains("mysql")).count()

// action 2 (discussing the other part)
messages.filter(_.contains("php")).count()
27
(diagram: Driver coordinating three Workers, holding partitions block 1, block 2, block 3)
Spark Deconstructed: Log Mining Example
// base RDD
val lines = sc.textFile("hdfs://...")

// transformed RDDs
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split("\t")).map(r => r(1))
messages.cache()

// action 1
messages.filter(_.contains("mysql")).count()

// action 2 (discussing the other part)
messages.filter(_.contains("php")).count()
28
(diagram: Driver coordinating three Workers, holding cached partitions block 1, block 2, block 3)
Spark Deconstructed: Log Mining Example
// base RDD
val lines = sc.textFile("hdfs://...")

// transformed RDDs
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split("\t")).map(r => r(1))
messages.cache()

// action 1
messages.filter(_.contains("mysql")).count()

// action 2 (discussing the other part)
messages.filter(_.contains("php")).count()
29
36. Spark Deconstructed: Log Mining Example
Looking at the RDD transformations and
actions from another perspective…
(diagram: transformations chain RDD → RDD → RDD; an action produces a value)

// load error messages from a log into memory
// then interactively search for various patterns
// https://gist.github.com/ceteri/8ae5b9509a08c08a1132

// base RDD
val lines = sc.textFile("hdfs://...")

// transformed RDDs
val errors = lines.filter(_.startsWith("ERROR"))
val messages = errors.map(_.split("\t")).map(r => r(1))
messages.cache()

// action 1
messages.filter(_.contains("mysql")).count()

// action 2
messages.filter(_.contains("php")).count()
36
41. Unifying the Pieces: Spark SQL
// http://spark.apache.org/docs/latest/sql-programming-guide.html

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._

// define the schema using a case class
case class Person(name: String, age: Int)

// create an RDD of Person objects and register it as a table
val people = sc.textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(p => Person(p(0), p(1).trim.toInt))

people.registerTempTable("people")

// SQL statements can be run using the SQL methods provided by sqlContext
val teenagers = sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")

// results of SQL queries are SchemaRDDs and support all the
// normal RDD operations…
// columns of a row in the result can be accessed by ordinal
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
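The semantics of that query are easy to check in plain Python; the sketch below uses a few sample rows in the same name,age CSV shape (illustrative only, not the Spark SQL API):

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int

# parse the same CSV shape the Spark example reads from people.txt
raw = ["Michael, 29", "Andy, 30", "Justin, 19"]
people = [Person(p[0], int(p[1].strip()))
          for p in (line.split(",") for line in raw)]

# SELECT name FROM people WHERE age >= 13 AND age <= 19
teenagers = [p.name for p in people if 13 <= p.age <= 19]
# teenagers == ["Justin"]
```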
41
42. Unifying the Pieces: Spark Streaming
// http://spark.apache.org/docs/latest/streaming-programming-guide.html

import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

// create a StreamingContext with a SparkConf configuration
val ssc = new StreamingContext(sparkConf, Seconds(10))

// create a DStream that will connect to serverIP:serverPort
val lines = ssc.socketTextStream(serverIP, serverPort)

// split each line into words
val words = lines.flatMap(_.split(" "))

// count each word in each batch
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)

// print a few of the counts to the console
wordCounts.print()

ssc.start() // start the computation
ssc.awaitTermination() // wait for the computation to terminate
42
43. MLI: An API for Distributed Machine Learning
Evan Sparks, Ameet Talwalkar, et al.
International Conference on Data Mining (2013)
http://arxiv.org/abs/1310.5426
Unifying the Pieces: MLlib
// http://spark.apache.org/docs/latest/mllib-guide.html

val train_data = ??? // an RDD[Vector] of training points
val model = KMeans.train(train_data, 10, 20) // k = 10 clusters, 20 iterations

// evaluate the model
val test_data = ??? // an RDD[Vector] of test points
test_data.map(t => model.predict(t)).collect().foreach(println)
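Under the hood, k-means clustering is Lloyd's algorithm: repeat an assignment step and an update step until the centers settle. A minimal single-machine sketch on 1-D points (illustrative only, not MLlib's distributed implementation):

```python
def kmeans_1d(points, centers, iterations=10):
    """Minimal Lloyd's algorithm on 1-D data."""
    for _ in range(iterations):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # update step: move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

centers = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], centers=[0.0, 10.0])
# converges to roughly [1.0, 9.0]
```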
43
46. Spark Streaming: Requirements
Let’s consider the top-level requirements for
a streaming framework:
• clusters scalable to 100’s of nodes
• low-latency, in the range of seconds
(meets 90% of use case needs)
• efficient recovery from failures
(which is a hard problem in CS)
• integrates with batch: many companies run the
same business logic both online and offline
46
47. Spark Streaming: Requirements
Therefore, run a streaming computation as:
a series of very small, deterministic batch jobs
• Chop up the live stream into
batches of X seconds
• Spark treats each batch of
data as RDDs and processes
them using RDD operations
• Finally, the processed results
of the RDD operations are
returned in batches
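The chop-into-batches idea above can be sketched in a few lines of plain Python: slice the stream into fixed-size batches and run the same word-count logic on each one (illustrative; real DStreams batch by time interval, not by count):

```python
from collections import Counter
from itertools import islice

def micro_batches(stream, batch_size):
    """Chop a stream into small batches and run the same
    batch word-count logic on each one (the DStream idea)."""
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            break
        # per-batch "RDD operations": flatMap -> map -> reduceByKey
        yield Counter(w for line in batch for w in line.split())

stream = ["a b", "b c", "a a", "c"]
results = list(micro_batches(stream, batch_size=2))
# results[0] == Counter({"b": 2, "a": 1, "c": 1})
```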
47
48. Spark Streaming: Requirements
Therefore, run a streaming computation as:
a series of very small, deterministic batch jobs
• Batch sizes as low as ½ sec,
latency of about 1 sec
• Potential for combining
batch processing and
streaming processing in
the same system
48
49. Spark Streaming: Integration
Data can be ingested from many sources:
Kafka, Flume, Twitter, ZeroMQ, TCP sockets, etc.
Results can be pushed out to filesystems,
databases, live dashboards, etc.
Spark’s built-in machine learning algorithms and
graph processing algorithms can be applied to
data streams
49
50. Spark Streaming: Timeline
2012
project started
2013
alpha release (Spark 0.7)
2014
graduated (Spark 0.9)
Discretized Streams: A Fault-Tolerant Model
for Scalable Stream Processing
Matei Zaharia, Tathagata Das, Haoyuan Li,
Timothy Hunter, Scott Shenker, Ion Stoica
Berkeley EECS (2012-12-14)
www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf
project lead:
Tathagata Das @tathadas
50
51. Spark Streaming: Requirements
Typical kinds of applications:
• datacenter operations
• web app funnel metrics
• ad optimization
• anti-fraud
• various telematics
and much much more!
51
53. Quiz: name the bits and pieces…
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

// create a StreamingContext with a SparkConf configuration
val ssc = new StreamingContext(sparkConf, Seconds(10))

// create a DStream that will connect to serverIP:serverPort
val lines = ssc.socketTextStream(serverIP, serverPort)

// split each line into words
val words = lines.flatMap(_.split(" "))

// count each word in each batch
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)

// print a few of the counts to the console
wordCounts.print()

ssc.start()
ssc.awaitTermination()
53
55. A Look Ahead…
1. Greater Stability and Robustness
• improved high availability via write-ahead logs
• enabled as an optional feature for Spark 1.2
• NB: Spark Standalone can already restart driver
• excellent discussion of fault-tolerance (2012):
cs.duke.edu/~kmoses/cps516/dstream.html
• stay tuned:
meetup.com/spark-users/events/218108702/
55
56. A Look Ahead…
2. Support for more environments,
i.e., beyond Hadoop
• three use cases currently depend on HDFS
• those are being abstracted out
• could then use Cassandra, etc.
56
57. A Look Ahead…
3. Improved support for Python
• e.g., Kafka is not exposed through Python yet
(next release goal)
57
58. A Look Ahead…
4. Better flow control
• a somewhat longer-term goal, plus it is
a hard problem in general
• poses interesting challenges beyond what
other streaming systems have faced
58
60. Demos, as time permits:
brand new Python support for Streaming in 1.2
github.com/apache/spark/tree/master/examples/src/main/
python/streaming
Twitter Streaming Language Classifier
databricks.gitbooks.io/databricks-spark-reference-applications/
content/twitter_classifier/README.html
For more Spark learning resources online:
databricks.com/spark-training-resources
60
64. Spark Integrations:
• Discover Insights
• Clean Up Your Data
• Run Sophisticated Analytics
• Integrate With Many Other Systems
• Use Lots of Different Data Sources
cloud-based notebooks… ETL… the Hadoop ecosystem…
widespread use of PyData… advanced analytics in streaming…
rich custom search… web apps for data APIs…
low-latency + multi-tenancy…
64
65. Spark Integrations: Unified platform for building Big Data pipelines
Databricks Cloud
databricks.com/blog/2014/07/14/databricks-cloud-making-
big-data-easy.html
youtube.com/watch?v=dJQ5lV5Tldw#t=883
65
70. Spark Integrations: Building data APIs with web apps
Spark + Play
typesafe.com/blog/apache-spark-and-the-typesafe-reactive-
platform-a-match-made-in-heaven
unified compute
web apps
70
71. Spark Integrations: The case for multi-tenancy
Spark + Mesos
spark.apache.org/docs/latest/running-on-mesos.html
+ Mesosphere + Google Cloud Platform
ceteri.blogspot.com/2014/09/spark-atop-mesos-on-google-cloud.html
unified compute
cluster resources
71
74. Spark at Twitter: Evaluation & Lessons Learnt
Sriram Krishnan
slideshare.net/krishflix/seattle-spark-meetup-spark-
at-twitter
• Spark can be more interactive, efficient than MR
• Support for iterative algorithms and caching
• More generic than traditional MapReduce
• Why is Spark faster than Hadoop MapReduce?
• Fewer I/O synchronization barriers
• Less expensive shuffle
• The more complex the DAG, the greater the
performance improvement
74
Because Use Cases: Twitter
75. Using Spark to Ignite Data Analytics
ebaytechblog.com/2014/05/28/using-spark-to-ignite-
data-analytics/
75
Because Use Cases: eBay/PayPal
76. Hadoop and Spark Join Forces in Yahoo
Andy Feng
spark-summit.org/talk/feng-hadoop-and-spark-join-
forces-at-yahoo/
76
Because Use Cases: Yahoo!
77. Collaborative Filtering with Spark
Chris Johnson
slideshare.net/MrChrisJohnson/collaborative-filtering-
with-spark
• collab filter (ALS) for music recommendation
• Hadoop suffers from I/O overhead
• show a progression of code rewrites, converting
a Hadoop-based app into efficient use of Spark
77
Because Use Cases: Spotify
78. Because Use Cases: Sharethrough
Sharethrough Uses Spark Streaming to
Optimize Bidding in Real Time
Russell Cardullo, Michael Ruggier
2014-03-25
databricks.com/blog/2014/03/25/
sharethrough-and-spark-streaming.html
• the profile of a 24 x 7 streaming app is different than
an hourly batch job…
• take time to validate output against the input…
• confirm that supporting objects are being serialized…
• the output of your Spark Streaming job is only as
reliable as the queue that feeds Spark…
• monoids…
78
79. Because Use Cases: Ooyala
Productionizing a 24/7 Spark Streaming
service on YARN
Issac Buenrostro, Arup Malakar
2014-06-30
spark-summit.org/2014/talk/
productionizing-a-247-spark-streaming-service-
on-yarn
• state-of-the-art ingestion pipeline, processing over
two billion video events a day
• how do you ensure 24/7 availability and fault
tolerance?
• what are the best practices for Spark Streaming and
its integration with Kafka and YARN?
• how do you monitor and instrument the various
stages of the pipeline?
79
80. Because Use Cases: Viadeo
Spark Streaming As Near Realtime ETL
Djamel Zouaoui
2014-09-18
slideshare.net/DjamelZouaoui/spark-streaming
• Spark Streaming is topology-free
• workers and receivers are autonomous and
independent
• paired with Kafka, RabbitMQ
• 8 machines / 120 cores
• use case for recommender system
• issues: how to handle lost data, serialization
80
81. Because Use Cases: Stratio
Stratio Streaming: a new approach to
Spark Streaming
David Morales, Oscar Mendez
2014-06-30
spark-summit.org/2014/talk/stratio-streaming-
a-new-approach-to-spark-streaming
• Stratio Streaming is the union of a real-time
messaging bus with a complex event processing
engine using Spark Streaming
• allows the creation of streams and queries on the fly
• paired with Siddhi CEP engine and Apache Kafka
• added global features to the engine such as
auditing and statistics
81
82. Because Use Cases: Guavus
Guavus Embeds Apache Spark
into its Operational Intelligence Platform
Deployed at the World’s Largest Telcos
Eric Carr
2014-09-25
databricks.com/blog/2014/09/25/guavus-embeds-apache-spark-into-
its-operational-intelligence-platform-deployed-at-the-worlds-
largest-telcos.html
• 4 of 5 top mobile network operators, 3 of 5 top
Internet backbone providers, 80% of MSOs in North America
• analyzing 50% of US mobile data traffic, +2.5 PB/day
• latency is critical for resolving operational issues
before they cascade: 2.5 MM transactions per second
• “analyze first” not “store first ask questions later”
82
83. Why Spark is the Next Top (Compute) Model
Dean Wampler
slideshare.net/deanwampler/spark-the-next-top-
compute-model
• Hadoop: most algorithms are much harder to
implement in this restrictive map-then-reduce
model
• Spark: fine-grained “combinators” for
composing algorithms
• slide #67, any questions?
83
Because Use Cases: Typesafe
84. Installing the Cassandra / Spark OSS Stack
Al Tobey
tobert.github.io/post/2014-07-15-installing-cassandra-
spark-stack.html
• install+config for Cassandra and Spark together
• spark-cassandra-connector integration
• examples show a Spark shell that can access
tables in Cassandra as RDDs with types pre-mapped
and ready to go
84
Because Use Cases: DataStax
85. One platform for all: real-time, near-real-time,
and offline video analytics on Spark
Davis Shepherd, Xi Liu
spark-summit.org/talk/one-platform-for-all-real-
time-near-real-time-and-offline-video-analytics-
on-spark
85
Because Use Cases: Conviva
87. certification:
Apache Spark developer certificate program
• http://oreilly.com/go/sparkcert
• defined by Spark experts @Databricks
• assessed by O’Reilly Media
• establishes the bar for Spark expertise
87
88. community:
spark.apache.org/community.html
video+slide archives: spark-summit.org
events worldwide: goo.gl/2YqJZK
resources: databricks.com/spark-training-resources
workshops: databricks.com/spark-training
Intro to Spark • Spark AppDev • Spark DevOps • Spark DataSci
Distributed ML on Spark • Streaming Apps on Spark • Spark + Cassandra
88
89. books:
Fast Data Processing with Spark
Holden Karau
Packt (2013)
shop.oreilly.com/product/9781782167068.do

Spark in Action
Chris Fregly
Manning (2015*)
sparkinaction.com/

Learning Spark
Holden Karau, Andy Konwinski, Matei Zaharia
O’Reilly (2015*)
shop.oreilly.com/product/0636920028512.do
89
90. events:
Big Data Spain
Madrid, Nov 17-18
bigdataspain.org
Strata EU
Barcelona, Nov 19-21
strataconf.com/strataeu2014
Data Day Texas
Austin, Jan 10
datadaytexas.com
Strata CA
San Jose, Feb 18-20
strataconf.com/strata2015
Spark Summit East
NYC, Mar 18-19
spark-summit.org/east
Strata EU
London, May 5-7
strataconf.com/big-data-conference-uk-2015
Spark Summit 2015
SF, Jun 15-17
spark-summit.org
90
91. presenter:
monthly newsletter for updates,
events, conf summaries, etc.:
liber118.com/pxn/
Just Enough Math
O’Reilly, 2014
justenoughmath.com
preview: youtu.be/TQ58cWgdCpA
Enterprise Data Workflows
with Cascading
O’Reilly, 2013
shop.oreilly.com/product/0636920028536.do
91