This document provides an overview of next-generation analytics with YARN, Spark and GraphLab. It discusses how YARN addresses limitations of Hadoop 1.0 such as scalability, locality awareness and shared cluster utilization. It also describes the Berkeley Data Analytics Stack (BDAS), which includes Spark, and how companies such as Ooyala and Conviva use it for tasks like iterative machine learning. GraphLab is presented as an engine well suited to processing natural graphs, and the PowerGraph framework partitions such graphs for better parallelism. PMML is introduced as a standard for defining predictive models, and the document shows how a Naïve Bayes model can be defined and scored using PMML with Spark and Storm.
1. Next Generation Analytics with
YARN, Spark and GraphLab
Dr. Vijay Srinivas Agneeswaran
Director and Head - Big-data R&D
Impetus Technologies Inc.
2. Contents
• Big Data Computations
• Hadoop 2.0 (Hadoop YARN)
• Berkeley Data Analytics Stack
  • Spark
  • Spark Streaming
• PMML Scoring for Naïve Bayes
  • PMML Primer
  • Naïve Bayes Primer
• GraphLab
3. Big Data Computations
Computations/Operations
• Giant 1 (simple statistics) is perfect for Hadoop 1.0.
• Giants 2 (linear algebra), 3 (N-body), 4 (optimization): Spark from UC Berkeley is efficient.
  • Examples: logistic regression, kernel SVMs, conjugate gradient descent, collaborative filtering, Gibbs sampling, alternating least squares.
  • An application example is the social group-first approach for consumer churn analysis [2].
• Interactive/on-the-fly data processing – Storm.
• OLAP – data cube operations: Dremel/Drill.
• Data sets that are not embarrassingly parallel:
  • Machine vision from Google [3], deep learning, artificial neural networks, speech analysis from Microsoft.
• Giant 5 – graph processing: GraphLab, Pregel, Giraph.
[1] National Research Council. Frontiers in Massive Data Analysis. Washington, DC: The National Academies Press, 2013.
[2] Yossi Richter, Elad Yom-Tov, Noam Slonim: Predicting Customer Churn in Mobile Networks through Analysis of Social Groups. In: Proceedings of the SIAM International Conference on Data Mining, 2010, pp. 732-741.
[3] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew W. Senior, Paul A. Tucker, Ke Yang, Andrew Y. Ng: Large Scale Distributed Deep Networks. NIPS 2012.
4. Iterative ML Algorithms
What are iterative algorithms? Those that need communication among the computing entities.
• Examples – neural networks, PageRank, network traffic analysis.
Conjugate gradient (CG) descent
• Commonly used to solve systems of linear equations.
• [CB09] tried implementing CG on dense matrices over three primitives:
  • DAXPY – multiplies a vector x by a constant a and adds vector y.
  • DDOT – dot product of two vectors.
  • MatVec – multiplies a matrix by a vector, producing a vector.
• At 1 MR job per primitive, this costs 6 MR jobs per CG iteration and hundreds of MR jobs per CG computation, leading to tens of GBs of communication even for small matrices.
Other iterative algorithms – fast Fourier transform, block tridiagonal.
[CB09] C. Bunch, B. Drawert, M. Norman. MapScale: A Cloud Environment for Scientific Computing. Technical Report, University of California, Computer Science Department, 2009.
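The cost argument above can be made concrete with a small sketch: conjugate gradient written purely in the three named primitives, with a call counter standing in for MapReduce job launches. This is illustrative stdlib Python, not the [CB09] implementation; the matrix and vector values are invented.

```python
# Count how many DAXPY/DDOT/MatVec primitives (i.e. MR jobs, if each
# primitive is one job) a CG solve consumes.
calls = {"daxpy": 0, "ddot": 0, "matvec": 0}

def daxpy(a, x, y):          # elementwise y + a*x
    calls["daxpy"] += 1
    return [yi + a * xi for xi, yi in zip(x, y)]

def ddot(x, y):              # dot product of two vectors
    calls["ddot"] += 1
    return sum(xi * yi for xi, yi in zip(x, y))

def matvec(A, x):            # matrix-vector product
    calls["matvec"] += 1
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def cg(A, b, iters):
    """Run `iters` CG steps on A x = b (A symmetric positive definite)."""
    x = [0.0] * len(b)
    r = list(b)              # residual b - A*0 (no MatVec needed for x0 = 0)
    p = list(r)
    rs = ddot(r, r)
    for _ in range(iters):
        Ap = matvec(A, p)                # 1 MatVec
        alpha = rs / ddot(p, Ap)         # 1 DDOT
        x = daxpy(alpha, p, x)           # 1 DAXPY
        r = daxpy(-alpha, Ap, r)         # 1 DAXPY
        rs_new = ddot(r, r)              # 1 DDOT
        p = daxpy(rs_new / rs, p, r)     # 1 DAXPY -> 6 primitives/iteration
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b, iters=2)        # a 2x2 SPD system converges in 2 steps
```

Each iteration launches exactly six primitives, matching the slide's 6-MRs-per-iteration count; at hundreds of iterations the shuffle traffic dominates.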
5. Hadoop YARN Requirements (Hadoop 1.0 Shortcomings)
• R1: Scalability – single-cluster limitation.
• R2: Multi-tenancy – addressed by Hadoop-on-Demand; security, quotas.
• R3: Locality awareness – shuffle of records.
• R4: Shared cluster utilization – hogging by users, typed slots.
• R5: Reliability/availability – JobTracker bugs.
• R6: Iterative machine learning.
Vinod Kumar Vavilapalli, Arun C. Murthy, Chris Douglas, Sharad Agarwal, Mahadev Konar, Robert Evans, Thomas Graves, Jason Lowe, Hitesh Shah, Siddharth Seth, Bikas Saha, Carlo Curino, Owen O'Malley, Sanjay Radia, Benjamin Reed, and Eric Baldeschwieler, "Apache Hadoop YARN: Yet Another Resource Negotiator", ACM Symposium on Cloud Computing, Oct 2013, ACM Press.
7. YARN Internals
• Application Master – sends ResourceRequests to the YARN RM; these capture the number of containers, resources per container, and locality preferences.
• YARN RM – generates tokens and containers; holds a global view of the cluster – monolithic scheduling.
• Node Manager – monitors node health; advertises available resources through heartbeats to the RM.
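As an illustration only (a toy model, not YARN's actual API or scheduler), the flow above can be sketched as the RM matching ResourceRequests against its global view of node capacity, trying the preferred node first:

```python
# Toy cluster view: two nodes with available memory (capacities invented).
nodes = {"node1": {"mem_mb": 8192}, "node2": {"mem_mb": 4096}}

# Toy ResourceRequests from an ApplicationMaster: container count,
# resources per container, and a locality preference.
requests = [
    {"containers": 2, "mem_mb": 2048, "prefer": "node1"},
    {"containers": 1, "mem_mb": 4096, "prefer": "node2"},
]

def schedule(nodes, requests):
    """Greedily grant containers, honouring locality preference when possible."""
    free = {n: spec["mem_mb"] for n, spec in nodes.items()}
    grants = []
    for req in requests:
        for _ in range(req["containers"]):
            # locality: try the preferred node first, then any other node
            candidates = [req["prefer"]] + [n for n in free if n != req["prefer"]]
            for n in candidates:
                if free[n] >= req["mem_mb"]:
                    free[n] -= req["mem_mb"]
                    grants.append((n, req["mem_mb"]))
                    break
    return grants

grants = schedule(nodes, requests)
```

The single `free` map is the "global view of the cluster" that makes this monolithic scheduling: one scheduler sees all capacity and all requests.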
9. BDAS: Spark
Transformations/Actions and descriptions:
• Map(function f1) – Passes each element of the RDD through f1 in parallel and returns the resulting RDD.
• Filter(function f2) – Selects elements of the RDD that return true when passed through f2.
• flatMap(function f3) – Similar to Map, but f3 returns a sequence, facilitating mapping a single input to multiple outputs.
• Union(RDD r1) – Returns the union of the RDD r1 with self.
• Sample(flag, p, seed) – Returns a randomly sampled (with seed) p percentage of the RDD.
• groupByKey(noTasks) – Can only be invoked on key-value paired data – returns data grouped by key. The number of parallel tasks is given as an argument (default is 8).
• reduceByKey(function f4, noTasks) – Aggregates the result of applying f4 to elements with the same key. The number of parallel tasks is the second argument.
• Join(RDD r2, noTasks) – Joins RDD r2 with self – computes all possible pairs for a given key.
• groupWith(RDD r3, noTasks) – Joins RDD r3 with self and groups by key.
• sortByKey(flag) – Sorts the self RDD in ascending or descending order based on flag.
• Reduce(function f5) – Aggregates the result of applying f5 to all elements of the self RDD.
• Collect() – Returns all elements of the RDD as an array.
• Count() – Counts the number of elements in the RDD.
• take(n) – Gets the first n elements of the RDD.
• First() – Equivalent to take(1).
• saveAsTextFile(path) – Persists the RDD in a file in HDFS or another Hadoop-supported file system at the given path.
• saveAsSequenceFile(path) – Persists the RDD as a Hadoop sequence file. Can be invoked only on key-value paired RDDs that implement the Hadoop Writable interface or equivalent.
• foreach(function f6) – Runs f6 in parallel on the elements of the self RDD.
[MZ12] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, and Ion Stoica. 2012. Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing. In Proceedings of the 9th USENIX conference on Networked Systems Design and Implementation (NSDI'12). USENIX Association, Berkeley, CA, USA.
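To make the semantics of a few of these transformations concrete, here is a plain-Python word-count sketch (no Spark assumed; `reduce_by_key` is a hypothetical helper mirroring reduceByKey's contract, run eagerly and serially rather than lazily and in parallel):

```python
data = ["to be", "or not", "to be"]   # stands in for an RDD of lines

# flatMap: one input element -> a sequence of output elements
words = [w for line in data for w in line.split()]

# map: pair each word with a count of 1
pairs = [(w, 1) for w in words]

def reduce_by_key(pairs, f4):
    """Aggregate values sharing a key with f4 (serial stand-in for reduceByKey)."""
    out = {}
    for k, v in pairs:
        out[k] = f4(out[k], v) if k in out else v
    return sorted(out.items())        # sorted stands in for sortByKey

counts = reduce_by_key(pairs, lambda a, b: a + b)
# counts == [('be', 2), ('not', 1), ('or', 1), ('to', 2)]
```

In Spark the same pipeline would be `lines.flatMap(...).map(...).reduceByKey(...)`, with each stage distributed over partitions instead of a single list.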
10. BDAS: Use Cases
Ooyala
• Uses Cassandra for video data personalization.
• Pre-computed aggregates VS on-the-fly queries.
• Moved to Spark for ML and computing views.
• Moved to Shark for on-the-fly queries – Cassandra (C*) OLAP aggregate queries took 130 secs, versus 60 ms in Spark.
Conviva
• Uses Hive for repeatedly running ad-hoc queries on video data.
• Optimized ad-hoc queries using Spark RDDs – found Spark is 30 times faster than Hive.
• ML for connection analysis and video streaming optimization.
Yahoo
• Advertisement targeting: 30K nodes on Hadoop YARN.
  • Hadoop – batch processing; Spark – iterative processing; Storm – on-the-fly processing.
• Content recommendation – collaborative filtering.
12. PMML Primer
• Predictive Model Markup Language, developed by the DMG (Data Mining Group).
• PMML offers a standard to define a model, so that a model generated in tool A can be directly used in tool B.
• XML representation of a model.
• May contain a myriad of data transformations (pre- and post-processing) as well as one or more predictive models.
13. Naïve Bayes Primer
• A simple probabilistic classifier based on Bayes' theorem.
• Given features X1, X2, …, Xn, predict a label Y by calculating the probability for all possible values of Y:
  P(Y | X1, …, Xn) = P(Y) ∏i P(Xi | Y) / P(X1, …, Xn)
  where ∏i P(Xi | Y) is the likelihood, P(Y) the prior, and P(X1, …, Xn) the normalization constant.
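A toy numeric sketch of this computation, using an invented five-example training set with two binary features (the argmax can drop the normalization constant, since it is the same for every Y):

```python
from collections import Counter

# invented training pairs: (features (x1, x2), label y)
data = [(("y", "y"), "A"), (("y", "n"), "A"), (("n", "y"), "B"),
        (("n", "n"), "B"), (("y", "y"), "A")]

labels = Counter(y for _, y in data)          # label counts -> priors P(Y)

def likelihood(i, x, y):
    """Empirical P(Xi = x | Y = y) from the training counts."""
    match = sum(1 for f, lab in data if lab == y and f[i] == x)
    return match / labels[y]

def posterior(features):
    """Return argmax_Y of the un-normalised score P(Y) * prod_i P(Xi | Y)."""
    n = len(data)
    scores = {y: labels[y] / n for y in labels}      # start from the prior
    for y in labels:
        for i, x in enumerate(features):
            scores[y] *= likelihood(i, x, y)         # multiply in likelihoods
    return max(scores, key=scores.get)

label = posterior(("y", "y"))   # -> 'A': prior 3/5 times likelihoods 1 and 2/3
```

A production scorer would smooth zero counts (e.g. Laplace smoothing) so an unseen feature value does not zero out a label's score; that is omitted here for brevity.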
14. PMML Scoring for Naïve Bayes
• Wrote a PMML-based scoring engine for the Naïve Bayes algorithm.
• This can theoretically be used in any framework for data processing by invoking the API.
• Deployed a Naïve Bayes PMML generated from R into the Storm, Spark and Samza frameworks.
• Real-time predictions with the above APIs.
15. Header
• Version and timestamp
• Model development environment information
Data Dictionary
• Variable types; missing, valid and invalid values
Data Munging/Transformation
• Normalization, mapping, discretization
Model
• Model-specific attributes
• Mining Schema – treatment for missing and outlier values
• Targets – prior probability and default
• Outputs – list of computed output fields; post-processing
• Definition of model architecture/parameters
16. GraphLab: Ideal Engine for Processing Natural Graphs [YL12]
• Goals – targeted at machine learning:
  • Model graph dependencies; be asynchronous, iterative and dynamic.
• Data is associated with edges (weights, for instance) and vertices (user profile data, current interests, etc.).
• Update functions – one lives on each vertex:
  • Transforms data in the scope of the vertex.
  • Can choose to trigger neighbours (for example, only if the rank changes drastically).
  • Runs asynchronously till convergence – no global barrier.
• Consistency is important in ML algorithms (some, such as collaborative filtering, do not even converge when there are inconsistent updates):
  • GraphLab provides varying levels of consistency: parallelism VS consistency.
• Several algorithms implemented, including ALS, K-means, SVM, belief propagation, matrix factorization, Gibbs sampling, SVD, CoEM, etc.
  • The Co-EM (Expectation Maximization) algorithm ran 15x faster than Hadoop MR – on distributed GraphLab, only 0.3% of the Hadoop execution time.
[YL12] Yucheng Low, Danny Bickson, Joseph Gonzalez, Carlos Guestrin, Aapo Kyrola, and Joseph M. Hellerstein. 2012. Distributed
GraphLab: a framework for machine learning and data mining in the cloud. Proceedings of the VLDB Endowment 5, 8 (April 2012), 716-727.
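The update-function model above can be sketched as follows: a PageRank-like update runs per vertex and re-schedules only the neighbours of vertices whose value changed materially, so no global barrier is needed. The graph, damping constant and tolerance are illustrative; real GraphLab runs these updates asynchronously across machines with configurable consistency.

```python
from collections import deque

edges = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}   # toy out-neighbour lists
in_nbrs = {v: [u for u, outs in edges.items() if v in outs] for v in edges}
rank = {v: 1.0 / len(edges) for v in edges}

# dynamic scheduling: a work queue of vertices whose update should run
queue, queued = deque(edges), set(edges)
while queue:
    v = queue.popleft()
    queued.discard(v)
    # update function: transform data in the scope of vertex v
    new = 0.15 / len(edges) + 0.85 * sum(
        rank[u] / len(edges[u]) for u in in_nbrs[v])
    changed = abs(new - rank[v]) > 1e-10
    rank[v] = new
    if changed:                 # trigger neighbours only on a real change
        for w in edges[v]:
            if w not in queued:
                queue.append(w)
                queued.add(w)
```

Because the update is a contraction, the queue eventually drains and the ranks settle at the stationary values, with no synchronous rounds anywhere.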
17. GraphLab 2: PowerGraph – Modeling Natural Graphs [1]
• GraphLab could not scale to the 2002 AltaVista web graph: 1.4B vertices, 6.7B edges.
• Most graph-parallel abstractions assume small neighbourhoods – low-degree vertices.
• But natural graphs (LinkedIn, Facebook, Twitter) are power-law graphs, which are hard to partition; high-degree vertices limit parallelism.
• PowerGraph provides a new way of partitioning power-law graphs:
  • Edges are tied to machines; vertices (especially high-degree ones) span machines.
  • Execution is split into three phases: gather, apply and scatter.
• Triangle counting on the Twitter graph:
  • Hadoop MR took 423 minutes on 1536 machines.
  • GraphLab 2 took 1.5 minutes on 1024 cores (64 machines).
[1] Joseph E. Gonzalez, Yucheng Low, Haijie Gu, Danny Bickson, and Carlos Guestrin (2012). "PowerGraph:
Distributed Graph-Parallel Computation on Natural Graphs." Proceedings of the 10th USENIX Symposium
on Operating Systems Design and Implementation (OSDI '12).
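The three-phase execution model can be sketched synchronously on a toy graph (PowerGraph additionally cuts vertices so that edges stay on machines and high-degree vertices span machines; that distribution is omitted from this single-process sketch):

```python
# Toy directed graph as an edge list; PageRank via gather-apply-scatter.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")]
vertices = {v for e in edges for v in e}
out_deg = {v: sum(1 for s, _ in edges if s == v) for v in vertices}
rank = {v: 1.0 / len(vertices) for v in vertices}

for _ in range(30):
    # gather: each vertex sums rank/out-degree over its in-edges
    acc = {v: sum(rank[s] / out_deg[s] for s, d in edges if d == v)
           for v in vertices}
    # apply: recompute the vertex value (damping factor 0.85)
    rank = {v: 0.15 / len(vertices) + 0.85 * acc[v] for v in vertices}
    # scatter: would signal neighbours whose value changed enough to
    # recompute; here we simply iterate a fixed number of rounds
```

Because gather is a sum over in-edges, PowerGraph can evaluate it as partial sums on each machine that holds a slice of a high-degree vertex's edges, then combine them in apply: that is what restores parallelism on power-law graphs.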
21. PMML Scoring for Naïve Bayes
Definition of elements:
• DataDictionary: definitions for fields as used in mining models (Class, V1, V2, V3).
• NaiveBayesModel: indicates that this is a Naïve Bayes PMML.
• MiningSchema: lists fields as used in that model. Class is the "predicted" field; V1, V2, V3 are "active" predictor fields.
• Output: describes a set of result values that can be returned from a model.
22. PMML Scoring for Naïve Bayes
Definition of elements (contd.):
• BayesInputs: for each type of input, contains the counts of outputs.
• BayesOutput: contains the counts associated with the values of the target field.
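Putting the elements of the last two slides together, here is a hedged sketch of scoring against an abridged, hand-written NaiveBayesModel using only the standard library. The element names follow the PMML schema described above, but this XML is trimmed for illustration (no enclosing PMML/Header elements, one predictor only) and the counts are invented.

```python
import xml.etree.ElementTree as ET

PMML = """
<NaiveBayesModel>
  <BayesInputs>
    <BayesInput fieldName="V1">
      <PairCounts value="y">
        <TargetValueCounts>
          <TargetValueCount value="democrat" count="90"/>
          <TargetValueCount value="republican" count="10"/>
        </TargetValueCounts>
      </PairCounts>
      <PairCounts value="n">
        <TargetValueCounts>
          <TargetValueCount value="democrat" count="30"/>
          <TargetValueCount value="republican" count="70"/>
        </TargetValueCounts>
      </PairCounts>
    </BayesInput>
  </BayesInputs>
  <BayesOutput fieldName="Class">
    <TargetValueCounts>
      <TargetValueCount value="democrat" count="120"/>
      <TargetValueCount value="republican" count="80"/>
    </TargetValueCounts>
  </BayesOutput>
</NaiveBayesModel>
"""

root = ET.fromstring(PMML)

# BayesOutput: counts of the target field's values -> priors P(Y)
out_counts = {tvc.get("value"): float(tvc.get("count"))
              for tvc in root.find("BayesOutput").iter("TargetValueCount")}
total = sum(out_counts.values())

def score(observed):
    """Score {fieldName: value} observations and return the best label."""
    scores = {y: c / total for y, c in out_counts.items()}   # priors
    for bi in root.find("BayesInputs"):                      # each BayesInput
        field = bi.get("fieldName")
        if field not in observed:
            continue
        for pc in bi.iter("PairCounts"):
            if pc.get("value") == observed[field]:
                for tvc in pc.iter("TargetValueCount"):      # likelihoods
                    y = tvc.get("value")
                    scores[y] *= float(tvc.get("count")) / out_counts[y]
    return max(scores, key=scores.get)

label = score({"V1": "y"})
```

Wrapped behind a small API, a scorer like this is framework-agnostic: the same function can be invoked from a Storm bolt, a Spark map, or a Samza task, which is the portability argument made on slide 14.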
23. PMML Scoring for Naïve Bayes
Sample input:
Eg1 - n y y n y y n n n n n n y y y y
Eg2 - n y n y y y n n n n n y y y n y
• The 1st, 2nd and 3rd columns are predictor variables (attribute "name" in element MiningField).
• Using these, we predict whether the output is Democrat or Republican (PMML element BayesOutput).
24. PMML Scoring for Naïve Bayes
• 3-node Xeon machine Storm cluster (8 quad-core CPUs, 32 GB RAM, 32 GB swap space, 1 Nimbus, 2 Supervisors)
Number of records (in millions) | Time taken (seconds)
0.1  | 4
0.4  | 7
1.0  | 12
2.0  | 21
10   | 129
25   | 310
25. PMML Scoring for Naïve Bayes
• 3-node Xeon machine Spark cluster (8 quad-core CPUs, 32 GB RAM and 32 GB swap space)
Number of records (in millions) | Time taken
0.1  | 1 min 47 sec
0.2  | 3 min 35 sec
0.4  | 6 min 40 sec
1.0  | 35 min 17 sec
10   | More than 3 hrs
26. Future of Spark
• Domain-specific language approach from Stanford:
  • Forge [AKS13] – a meta-DSL for high-performance DSLs.
  • 40X faster than Spark!
• Spark:
  • Explore BLAS libraries for efficiency.
[AKS13] Arvind K. Sujeeth, Austin Gibbons, Kevin J. Brown, HyoukJoong Lee, Tiark Rompf, Martin Odersky, and Kunle Olukotun. 2013. Forge: generating a high performance DSL implementation from a declarative specification. In Proceedings of the 12th International Conference on Generative Programming: Concepts & Experiences (GPCE '13). ACM, New York, NY, USA, 145-154.
27. Conclusion
• Beyond the Hadoop MapReduce philosophy:
  • Optimization and other problems.
  • Real-time computation.
  • Processing specialized data structures.
• PMML scoring:
  • Allows traditional analytical tools/algorithms to be re-used.
• Spark for batch computations.
• Spark Streaming and Storm for real-time.