Extending Spark ML
Estimators and Transformers
kroszk@
Built with
public APIs*
*Scala only - see developer for details.
Holden:
● My name is Holden Karau
● Preferred pronouns are she/her
● I’m a Principal Software Engineer at IBM’s Spark Technology Center
● Apache Spark committer (as of January!) :)
● previously Alpine, Databricks, Google, Foursquare & Amazon
● co-author of Learning Spark & Fast Data processing with Spark
○ co-author of a new book focused on Spark performance coming this year*
● @holdenkarau
● Slide share http://www.slideshare.net/hkarau
● Linkedin https://www.linkedin.com/in/holdenkarau
● Github https://github.com/holdenk
● Spark Videos http://bit.ly/holdenSparkVideos
Seth:
● Data Scientist at Cloudera
● Previously machine learning engineer at IBM’s Spark Technology Center
● Two years contributing to Spark MLlib
● Twitter: @shendrickson16
● Linkedin https://www.linkedin.com/in/sethah
● Github https://github.com/sethah
● SlideShare http://www.slideshare.net/SethHendrickson
IBM Spark Technology Center
Founded in 2015.
Location:
Physical: 505 Howard St., San Francisco CA
Web: http://spark.tc Twitter: @apachespark_tc
Mission:
Contribute intellectual and technical capital to the Apache Spark
community.
Make the core technology enterprise- and cloud-ready.
Build data science skills to drive intelligence into business
applications — http://bigdatauniversity.com
Key statistics:
About 50 developers, co-located with 25 IBM designers.
Major contributions to Apache Spark http://jiras.spark.tc
Apache SystemML is now an Apache Incubator project.
Founding member of UC Berkeley AMPLab and RISE Lab
Member of R Consortium and Scala Center
Who I think you wonderful humans are?
● Nice enough people
● Don’t mind pictures of cats
● Might know some Apache Spark
● Possibly know some Scala
● Think machine learning is kind of cool
● Don’t overly mind a grab-bag of topics
Lori Erickson
What are we going to talk about?
● What Spark ML pipelines look like
● What Estimators and Transformers are
● How to implement both of them
● What tools can help us
● Publishing your fancy new Spark model so others (like me) can use it!
● Holden will of course try and sell you many copies of her new book if you
have an expense account.
Loading data Spark SQL (DataSets)
sparkSession.read returns a DataFrameReader
We can specify general properties & data specific options
● option("key", "value")
○ spark-csv ones we will use are header & inferSchema
● format("formatName")
○ built in formats include parquet, jdbc, etc.
● load("path")
Jess Johnson
Loading some simple CSV data
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .option("delimiter", ";")
  .format("csv")
  .load("hdfs:///user/data/admissions.csv")
Jess Johnson
Spark ML Pipelines
[Diagram: data flows into a Pipeline made up of a chain of Pipeline Stages and comes out as ?]
Spark ML Pipelines
[Diagram: the same chain of Pipeline Stages; the Pipeline as a whole takes data in and produces output. Also a pipeline stage!]
Two main types of pipeline stages
Pipeline Stage: data → ?
Transformer: data → data
Estimator: data → transformer
Pipelines are estimators
Pipeline (Transformer, Transformer, Estimator): data → model
Also an estimator!
PipelineModels are transformers
PipelineModel (Transformer, Transformer, Transformer): data → data
Also a transformer!
How are transformers made?
Estimator: data → Transformer
class Estimator extends PipelineStage {
  def fit(dataset: Dataset[_]): Transformer = {
    // magic happens here
  }
}
How is new data made?
Transformer: transform(data) → new data
class Transformer extends PipelineStage {
  def transform(df: Dataset[_]): DataFrame
}
Feature transformations
+-----+-----+----+--------+
|admit| gre| gpa|prestige|
+-----+-----+----+--------+
| no|380.0|3.61| 3.0|
| yes|660.0|3.67| 3.0|
| yes|800.0| 4.0| 1.0|
| yes|640.0|3.19| 4.0|
| no|520.0|2.93| 4.0|
+-----+-----+----+--------+
val assembler = new VectorAssembler()
  .setInputCols(Array("gre", "gpa", "prestige"))
  .setOutputCol("features")
val df2 = assembler.transform(df)
VectorAssembler
+-----+-----+----+--------+----------------+
|admit| gre| gpa|prestige| features|
+-----+-----+----+--------+----------------+
| no|380.0|3.61| 3.0|[380.0,3.61,3.0]|
| yes|660.0|3.67| 3.0|[660.0,3.67,3.0]|
| yes|800.0| 4.0| 1.0| [800.0,4.0,1.0]|
| yes|640.0|3.19| 4.0|[640.0,3.19,4.0]|
| no|520.0|2.93| 4.0|[520.0,2.93,4.0]|
+-----+-----+----+--------+----------------+
Train a classifier on the transformed data
StringIndexer → StringIndexerModel
val si = new StringIndexer().setInputCol("admit").setOutputCol("label")
val siModel = si.fit(df2)
val df3 = siModel.transform(df2)
+-----+-----+----+--------+----------------+
|admit| gre| gpa|prestige| features|
+-----+-----+----+--------+----------------+
| no|380.0|3.61| 3.0|[380.0,3.61,3.0]|
| yes|660.0|3.67| 3.0|[660.0,3.67,3.0]|
| yes|800.0| 4.0| 1.0| [800.0,4.0,1.0]|
| yes|640.0|3.19| 4.0|[640.0,3.19,4.0]|
| no|520.0|2.93| 4.0|[520.0,2.93,4.0]|
+-----+-----+----+--------+----------------+
+-----+-----+----+--------+----------------+-----+
|admit| gre| gpa|prestige| features|label|
+-----+-----+----+--------+----------------+-----+
| no|380.0|3.61| 3.0|[380.0,3.61,3.0]| 0.0|
| yes|660.0|3.67| 3.0|[660.0,3.67,3.0]| 1.0|
| yes|800.0| 4.0| 1.0| [800.0,4.0,1.0]| 1.0|
| yes|640.0|3.19| 4.0|[640.0,3.19,4.0]| 1.0|
| no|520.0|2.93| 4.0|[520.0,2.93,4.0]| 0.0|
+-----+-----+----+--------+----------------+-----+
Train a classifier on the transformed data
+----------------+-----+
| features|label|
+----------------+-----+
|[380.0,3.61,3.0]| 0.0|
|[660.0,3.67,3.0]| 1.0|
| [800.0,4.0,1.0]| 1.0|
|[640.0,3.19,4.0]| 1.0|
|[520.0,2.93,4.0]| 0.0|
+----------------+-----+
DecisionTreeClassifier → DecisionTreeClassificationModel
+----------------+-----+----------+
| features|label|prediction|
+----------------+-----+----------+
|[380.0,3.61,3.0]| 0.0| 0.0|
|[660.0,3.67,3.0]| 1.0| 0.0|
| [800.0,4.0,1.0]| 1.0| 1.0|
|[640.0,3.19,4.0]| 1.0| 1.0|
|[520.0,2.93,4.0]| 0.0| 0.0|
+----------------+-----+----------+
val dt = new DecisionTreeClassifier()
val dtModel = dt.fit(df3)
val df4 = dtModel.transform(df3)
Or just throw it all in a pipeline
● Keeping track of intermediate data and calling fit/transform on every stage is
way too much work
● This problem is worse when more stages are used
● Use a pipeline instead!
val assembler = new VectorAssembler()
assembler.setInputCols(Array("gre", "gpa", "prestige"))
assembler.setOutputCol("features")
val sb = new StringIndexer()
sb.setInputCol("admit").setOutputCol("label")
val dt = new DecisionTreeClassifier()
val pipeline = new Pipeline()
pipeline.setStages(Array(assembler, sb, dt))
val pipelineModel = pipeline.fit(df)
Yay! You have an ML pipeline!
Photo by Jessica Fiess-Hill
Pipeline API has many models:
● org.apache.spark.ml.classification
○ LogisticRegression, DecisionTreeClassifier, GBTClassifier, etc.
● org.apache.spark.ml.regression
○ DecisionTreeRegressor, GBTRegressor, IsotonicRegression, LinearRegression, etc.
● org.apache.spark.ml.recommendation
○ ALS
● You can also check out spark-packages for some more
● But possibly not your special AwesomeFooBazinatorML
PROcarterse
& data prep stages...
● org.apache.spark.ml.feature
○ ~30 elements from VectorAssembler to Tokenizer, to PCA, etc.
● Often simpler to understand while getting started with
building our own stages
PROcarterse
So now begins our adventure to add stages
So what does a pipeline stage look like?
Must provide:
● transformSchema (used to validate input schema is
reasonable) & copy
Often have:
● Special params for configuration (so we can do
meta-algorithms)
Wendy Piersall
Building a simple transformer:
class HardCodedWordCountStage(override val uid: String) extends Transformer {
  def this() = this(Identifiable.randomUID("hardcodedwordcount"))
  def copy(extra: ParamMap): HardCodedWordCountStage = {
    defaultCopy(extra)
  }
  ...
}
Not to be confused with the Transformers franchise from Hasbro and Tomy.
Verify the input schema is reasonable:
override def transformSchema(schema: StructType): StructType = {
  // Check that the input type is a string
  val idx = schema.fieldIndex("happy_pandas")
  val field = schema.fields(idx)
  if (field.dataType != StringType) {
    throw new Exception(s"Input type ${field.dataType} did not match input type StringType")
  }
  // Add the return field
  schema.add(StructField("happy_panda_counts", IntegerType, false))
}
How is transformSchema used?
● When you call fit on a pipeline it calls transformSchema
on the pipeline stages in order
● This is used to verify that things should work
● Ideally allows pipelines to fail fast when misconfigured,
instead of at the final stage of a 48-hour process
● Doesn’t always work that way :p
Tricia Hall
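To make the fail-fast idea concrete, here is a rough sketch (ours, not Spark's actual Pipeline code) of how fitting can thread the schema through each stage's transformSchema before touching any data; stages and df are assumed to already exist:
import org.apache.spark.ml.PipelineStage
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.StructType
// Each stage checks the schema it would receive and returns the schema it
// would produce, so a misconfigured stage fails before any real work starts.
def validateStages(stages: Seq[PipelineStage], df: DataFrame): StructType = {
  stages.foldLeft(df.schema) { (schema, stage) =>
    stage.transformSchema(schema)
  }
}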
Do the “work” (e.g. predict labels or w/e):
def transform(df: Dataset[_]): DataFrame = {
val wordcount = udf { in: String => in.split(" ").size }
df.select(col("*"),
wordcount(df.col("happy_pandas")).as("happy_panda_counts"))
}
vic15
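A quick usage sketch (assuming a DataFrame df with a string column named happy_pandas, matching the hard-coded column above):
val wordCounter = new HardCodedWordCountStage()
val counted = wordCounter.transform(df)
counted.select("happy_pandas", "happy_panda_counts").show(5)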
What about configuring our stage?
class ConfigurableWordCount(override val uid: String) extends Transformer {
  final val inputCol = new Param[String](this, "inputCol", "The input column")
  final val outputCol = new Param[String](this, "outputCol", "The output column")
  def setInputCol(value: String): this.type = set(inputCol, value)
  def setOutputCol(value: String): this.type = set(outputCol, value)
Jason Wesley Upton
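The slide stops at the setters; a hedged sketch of how the rest of the configurable stage could look, reusing the hard-coded logic but reading the column names from the params (our guess at the remaining body, not the exact code from the talk):
  // Validate the configured input column and declare the configured output column.
  override def transformSchema(schema: StructType): StructType = {
    val field = schema.fields(schema.fieldIndex($(inputCol)))
    if (field.dataType != StringType) {
      throw new Exception(s"Input type ${field.dataType} did not match StringType")
    }
    schema.add(StructField($(outputCol), IntegerType, false))
  }
  override def transform(df: Dataset[_]): DataFrame = {
    val wordcount = udf { in: String => in.split(" ").size }
    df.select(col("*"), wordcount(df.col($(inputCol))).as($(outputCol)))
  }
  override def copy(extra: ParamMap): ConfigurableWordCount = defaultCopy(extra)
}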
So why do we configure it that way?
● Allow meta algorithms to work on it
● If you look inside of spark you’ll see “sharedParams” for
common params (like input column)
● We can’t access those unless we pretend to be inside of
org.apache.spark - so we have to make our own
Tricia Hall
So how to make an estimator?
● Very similar, instead of directly providing transform
provide a `fit` which returns a “model” which implements
the estimator interface as shown above
● Also take a look at the algorithms in Spark itself (helpful
traits you can mixin to take care of many common things).
● Let’s look at a simple one now!
sneakerdog
A simple string indexer estimator
class SimpleIndexer(override val uid: String) extends
Estimator[SimpleIndexerModel] with SimpleIndexerParams {
….
override def fit(dataset: Dataset[_]): SimpleIndexerModel = {
import dataset.sparkSession.implicits._
val words = dataset.select(dataset($(inputCol)).as[String]).distinct
.collect()
new SimpleIndexerModel(uid, words)
}
}
Quick aside: What's that “$(inputCol)”?
● How you get access to a configuration parameter
● Inside stage only (external use getInputCol just like
Java™ :p)
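The SimpleIndexer above mixes in a SimpleIndexerParams trait that the slides never show; a minimal sketch of what such a params trait could look like (our assumption, modeled on Spark's sharedParams pattern):
trait SimpleIndexerParams extends Params {
  final val inputCol = new Param[String](this, "inputCol", "The input column")
  final val outputCol = new Param[String](this, "outputCol", "The output column")
  def setInputCol(value: String): this.type = set(inputCol, value)
  def setOutputCol(value: String): this.type = set(outputCol, value)
  def getInputCol: String = $(inputCol)
  def getOutputCol: String = $(outputCol)
}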
And our friend the transformer is back:
class SimpleIndexerModel(
override val uid: String, words: Array[String]) extends
Model[SimpleIndexerModel] with SimpleIndexerParams {
...
private val labelToIndex: Map[String, Double] = words.zipWithIndex.
map{case (x, y) => (x, y.toDouble)}.toMap
override def transform(dataset: Dataset[_]): DataFrame = {
val indexer = udf { label: String => labelToIndex(label) }
dataset.select(col("*"),
  indexer(dataset($(inputCol)).cast(StringType)).as($(outputCol)))
  }
}
Still not to be confused with the Transformers franchise from Hasbro and Tomy.
Ok so how do you make the train function?
● Read some papers on the algorithm(s) you care about
● Most likely some iterative approach (pro-tip: RDDs >
Datasets for iterative)
○ Seth has some interesting work around pluggable
optimizers
● Closed form solution? Go have a party!
What else can you add to your models?
● Put in an ML pipeline
● Do hyper-parameter tuning
And if you have some coffee left over:
● Persistence*
○ MLWriter & MLReader give you the basics
○ You’ll have to do a lot of work yourself :(
● Serving*
*With enough coffee. Not guaranteed.
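For a rough idea of the shape persistence takes (a hedged sketch under the assumption that you subclass MLWriter yourself; params, metadata, and the matching MLReader are all left out):
import org.apache.spark.ml.util.MLWriter
// Sketch only: saveImpl is where you would write out the stage's params and
// any model data (e.g. the words array of SimpleIndexerModel) under `path`.
class SimpleIndexerModelWriter(instance: SimpleIndexerModel) extends MLWriter {
  override protected def saveImpl(path: String): Unit = {
    // write instance's params and data here, in whatever format you like
  }
}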
Ok so I put my new fancy thing on GitHub
● Yay thank you!
● Please publish to maven central
● Also consider listing on spark-packages + user@ list
○ Let me know ( holden@pigscanfly.ca ) :)
● Think of the Python users (and I guess the R users) too?
Custom Estimators/Transformers in the Wild
Classification/Regression
xgboost
Deep Learning!
MXNet
Feature Transformation
FeatureHasher
More resources:
● High Performance Spark Example Repo has some
sample models
○ Of course buy several copies of the book - it is the gift of the season :p
● The models inside of Spark itself (use some internal APIs
but a good starting point)
● Nick Pentreath’s FeatureHasher
● O’Reilly radar blog post
https://www.oreilly.com/learning/extend-structured-streaming-for-spark-ml
Captain Pancakes
Learning Spark
Fast Data Processing with Spark (Out of Date)
Fast Data Processing with Spark (2nd edition)
Advanced Analytics with Spark
Spark in Action
Coming soon:
High Performance Spark
Learning PySpark
The next book…..
Available in “Early Release”*:
● Buy from O’Reilly - http://bit.ly/highPerfSpark
● Extending ML is covered in Chapter 9
Get notified when updated & finished:
● http://www.highperformancespark.com
● https://twitter.com/highperfspark
● Should be finished between May 22nd ~ June 18th :D
* Early Release means extra mistakes, but also a chance to help us make a more awesome
book.
And some upcoming talks:
● June
○ Berlin Buzzwords
○ Scala Swarm (Porto, Portugal)
k thnx bye :)
If you care about Spark testing and
don’t hate surveys:
http://bit.ly/holdenTestingSpark
Will tweet results
“eventually” @holdenkarau
Any PySpark Users: Have some
simple UDFs you wish ran faster
you are willing to share?:
http://bit.ly/pySparkUDF
Pssst: Have feedback on the presentation? Give me a
shout (holden@pigscanfly.ca) if you feel comfortable doing
so :)
Bonus/Appendix slides
Cross-validation
because saving a test set is effort
● Automagically* fit your model params
● Because thinking is effort
● org.apache.spark.ml.tuning has the tools
Jonathan Kotta
Cross-validation
because saving a test set is effort & a reason to integrate
// ParamGridBuilder constructs an Array of parameter combinations.
// nb here is assumed to be the NaiveBayes stage inside the pipeline.
val paramGrid: Array[ParamMap] = new ParamGridBuilder()
  .addGrid(nb.smoothing, Array(0.1, 0.5, 1.0, 2.0))
  .build()
val cv = new CrossValidator()
  .setEstimator(pipeline)
  .setEstimatorParamMaps(paramGrid)
  .setEvaluator(new BinaryClassificationEvaluator()) // CrossValidator needs an evaluator
  .setNumFolds(3)
val cvModel = cv.fit(df)
val bestModel = cvModel.bestModel
Jonathan Kotta
So what does a pipeline stage look like?
Are either an:
● Estimator - has a method called “fit” which returns a
Transformer (e.g. NaiveBayes, etc.)
● Transformer - no need to train can directly transform (e.g.
HashingTF, VectorAssembler, etc.) (with transform)
Wendy Piersall
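For example, with stock Spark ML stages (a small sketch assuming a DataFrame tokenized that has a words array column and a label column):
import org.apache.spark.ml.feature.HashingTF
import org.apache.spark.ml.classification.NaiveBayes
val hashingTF = new HashingTF() // Transformer: transform() works directly
val nb = new NaiveBayes() // Estimator: fit() returns a NaiveBayesModel
val hashed = hashingTF.setInputCol("words").setOutputCol("features").transform(tokenized)
val nbModel = nb.fit(hashed) // needs "features" and "label" columns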
We’ve left out a lot of “transformSchema”...
● It is necessary (but I’m lazy)
● But there are helper classes that can implement some of
the boiler plate we’ve been skipping
● Classifier & Estimator base classes are your friends
● They provide transformSchema
Let’s make a Classifier* :)
// Example only - not for production use.
class SimpleNaiveBayes(val uid: String)
extends Classifier[Vector, SimpleNaiveBayes, SimpleNaiveBayesModel] {
Vector is the input type; SimpleNaiveBayesModel is the trained model.
Let’s make a Classifier* :)
override def train(ds: Dataset[_]): SimpleNaiveBayesModel = {
import ds.sparkSession.implicits._
ds.cache()
….
…
….
}
If you reallllly want to see inside the ...s (1/5)
// Get the number of features by peeking at the first row
val numFeatures: Integer = ds.select(col($(featuresCol))).head
  .get(0).asInstanceOf[Vector].size
// Determine the number of records for each class
val groupedByLabel = ds.select(col($(labelCol)).as[Double]).groupByKey(x => x)
val classCounts = groupedByLabel.agg(count("*").as[Long])
  .sort(col("value")).collect().toMap
// Select the labels and features so we can more easily map over them.
// Note: we do this as a DataFrame using the untyped API because the Vector
// UDT is no longer public.
val df = ds.select(col($(labelCol)).cast(DoubleType), col($(featuresCol)))
If you reallllly want to see inside the ...s (2/5)
// Note: you can use getNumClasses & extractLabeledPoints to get an RDD instead.
// Using the RDD approach is common when integrating with legacy machine learning
// code or iterative algorithms which can create large query plans.
// Here we use `Datasets` since neither of those apply.
// Compute the number of documents
val numDocs = ds.count
// Get the number of classes.
// Note this estimator assumes they start at 0 and go to numClasses
val numClasses = getNumClasses(ds)
If you reallllly want to see inside the ...s (3/5)
// Figure out the non-zero frequency of each feature for each label and
// output label-index pairs, using a case class to make them easier to work with.
val labelCounts: Dataset[LabeledToken] = df.flatMap {
  case Row(label: Double, features: Vector) =>
    features.toArray.zip(Stream from 0)
      .filter{case (value, _) => value == 1.0}
      .map{case (v, idx) => LabeledToken(label, idx)}
}
// Use the typed Dataset aggregation API to count the number of non-zero
// features for each label-feature index.
val aggregatedCounts: Array[((Double, Integer), Long)] = labelCounts
  .groupByKey(x => (x.label, x.index))
  .agg(count("*").as[Long]).collect()
val theta = Array.fill(numClasses)(new Array[Double](numFeatures))
If you reallllly want to see inside the ...s (4/5)
// Compute the denominator for the general priors
val piLogDenom = math.log(numDocs + numClasses)
// Compute the priors for each class
val pi = classCounts.map{case(_, cc) =>
math.log(cc.toDouble) - piLogDenom }.toArray
// For each label/feature update the probabilities
aggregatedCounts.foreach{case ((label, featureIndex), count) =>
// log of number of documents for this label + 2.0 (smoothing)
val thetaLogDenom = math.log(
classCounts.get(label).map(_.toDouble).getOrElse(0.0) + 2.0)
theta(label.toInt)(featureIndex) = math.log(count + 1.0) - thetaLogDenom
}
// Unpersist now that we are done computing everything
ds.unpersist()
If you reallllly want to see inside the ...s (5/5)
// Construct a model
new SimpleNaiveBayesModel(uid, numClasses, numFeatures, Vectors.dense(pi),
new DenseMatrix(numClasses, theta(0).length, theta.flatten, true))
}
override def copy(extra: ParamMap) = {
defaultCopy(extra)
}
}
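The train method above returns a SimpleNaiveBayesModel, which the bonus slides don't show. As a hedged sketch (the real example in the High Performance Spark repo differs in details), a ClassificationModel mainly needs numClasses and predictRaw; this simplified version only scores the features that are present:
import org.apache.spark.ml.classification.ClassificationModel
import org.apache.spark.ml.linalg.{DenseMatrix, Vector, Vectors}
import org.apache.spark.ml.param.ParamMap
class SimpleNaiveBayesModel(
    override val uid: String,
    override val numClasses: Int,
    override val numFeatures: Int,
    val pi: Vector,
    val theta: DenseMatrix) extends
  ClassificationModel[Vector, SimpleNaiveBayesModel] {
  override def copy(extra: ParamMap): SimpleNaiveBayesModel = defaultCopy(extra)
  // Raw (unnormalized) log-probabilities for each class for one feature vector.
  // Simplification: only features that are present (value > 0) contribute.
  override def predictRaw(features: Vector): Vector = {
    val scores = Array.tabulate(numClasses) { label =>
      var score = pi(label)
      features.foreachActive { (idx, value) =>
        if (value > 0.0) score += theta(label, idx)
      }
      score
    }
    Vectors.dense(scores)
  }
}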
What is Spark?
● General purpose distributed system
○ With a really nice API including Python :)
● Apache project (one of the most active)
● Much faster than Hadoop Map/Reduce
● Good when data is too big for a single
machine
● Built on top of two abstractions for
distributed data: RDDs & Datasets
DataFrames & Datasets
Totally the future
● Distributed collection
● Recomputed on node failure
● Distributes data & work across the cluster
● Lazily evaluated (transformations & actions)
● Has runtime schema information
● Allows for relational queries & supports SQL
● Declarative - many optimizations applied automagically
● Input for Spark Machine Learning
Helen Olney
What is the performance like?
Andrew Skudder
Spark ML pipelines
[Diagram: Tokenizer → HashingTF → String Indexer → Naive Bayes; calling fit(df) turns the Estimator pipeline into a Transformer pipeline]
● Consist of different stages (estimators or transformers)
● Themselves are an estimator
We are going to
build a stage
together!
Minimal data prep:
● At a minimum most algorithms in Spark work on feature
vectors of doubles (and if labeled - doubles too)
Imports:
import org.apache.spark.ml._
import org.apache.spark.ml.feature._
import org.apache.spark.ml.classification._
import org.apache.spark.ml.linalg.{Vector => SparkVector}
Huang Yun Chung
Minimal prep continued
// Combines a list of double input features into a vector
val assembler = new VectorAssembler()
assembler.setInputCols(Array("age", "education-num"))
// String indexer converts a set of strings into doubles
val sb = new StringIndexer()
sb.setInputCol("category").setOutputCol("category-index")
// Can be used to combine pipeline components together
val pipeline = new Pipeline()
pipeline.setStages(Array(assembler, sb))
Huang Yun Chung
Minimal prep continued
val assembler = new VectorAssembler()
assembler.setInputCols(Array("gre", "gpa", "prestige"))
assembler.setOutputCol("features")
val si = new StringIndexer()
si.setInputCol("admit").setOutputCol("label")
val pipeline = new Pipeline()
pipeline.setStages(Array(assembler, si))
Huang Yun Chung
+-----+-----+----+--------+----------------+-----+
|admit| gre| gpa|prestige| features|label|
+-----+-----+----+--------+----------------+-----+
| no|380.0|3.61| 3.0|[380.0,3.61,3.0]| 0.0|
| yes|660.0|3.67| 3.0|[660.0,3.67,3.0]| 1.0|
| yes|800.0| 4.0| 1.0| [800.0,4.0,1.0]| 1.0|
| yes|640.0|3.19| 4.0|[640.0,3.19,4.0]| 1.0|
| no|520.0|2.93| 4.0|[520.0,2.93,4.0]| 0.0|
+-----+-----+----+--------+----------------+-----+
+-----+-----+----+--------+----------------+
|admit| gre| gpa|prestige| features|
+-----+-----+----+--------+----------------+
| no|380.0|3.61| 3.0|[380.0,3.61,3.0]|
| yes|660.0|3.67| 3.0|[660.0,3.67,3.0]|
| yes|800.0| 4.0| 1.0| [800.0,4.0,1.0]|
| yes|640.0|3.19| 4.0|[640.0,3.19,4.0]|
| no|520.0|2.93| 4.0|[520.0,2.93,4.0]|
+-----+-----+----+--------+----------------+
+-----+-----+----+--------+
|admit| gre| gpa|prestige|
+-----+-----+----+--------+
| no|380.0|3.61| 3.0|
| yes|660.0|3.67| 3.0|
| yes|800.0| 4.0| 1.0|
| yes|640.0|3.19| 4.0|
| no|520.0|2.93| 4.0|
+-----+-----+----+--------+
So a bit more about that pipeline
● Each of our previous components has a “fit” & “transform” stage
● Constructing the pipeline this way makes it easier to
work with (only need to call one fit & one transform)
● Can re-use the fitted model on future data
val model = pipeline.fit(df)
val prepared = model.transform(df)
Andrey
Let's train a model on our prepared data:
// Specify model
val dt = new DecisionTreeClassifier()
dt.setFeaturesCol("features")
dt.setPredictionCol("prediction")
// Fit it
val dtModel = dt.fit(prepared)
Or wait let's just add it to the pipeline:
// Specify model
val dt = new DecisionTreeClassifier()
dt.setFeaturesCol("features")
dt.setPredictionCol("prediction")
// Add to the pipeline
pipeline.setStages(Array(assembler, si, dt))
val pipelineModel = pipeline.fit(df)
And predict the results on the same data:
pipelineModel.transform(df).select("prediction",
"label").take(20)
+----------+-----+
|prediction|label|
+----------+-----+
| 0.0| 0.0|
| 0.0| 1.0|
| 1.0| 1.0|
| 1.0| 1.0|
| 0.0| 0.0|
+----------+-----+
Weitere ähnliche Inhalte

Was ist angesagt?

Was ist angesagt? (20)

Morel, a Functional Query Language
Morel, a Functional Query LanguageMorel, a Functional Query Language
Morel, a Functional Query Language
 
RDF data validation 2017 SHACL
RDF data validation 2017 SHACLRDF data validation 2017 SHACL
RDF data validation 2017 SHACL
 
Microsoft Data Platform - What's included
Microsoft Data Platform - What's includedMicrosoft Data Platform - What's included
Microsoft Data Platform - What's included
 
SHACL: Shaping the Big Ball of Data Mud
SHACL: Shaping the Big Ball of Data MudSHACL: Shaping the Big Ball of Data Mud
SHACL: Shaping the Big Ball of Data Mud
 
Tableau And Data Visualization - Get Started
Tableau And Data Visualization - Get StartedTableau And Data Visualization - Get Started
Tableau And Data Visualization - Get Started
 
Cloudera - The Modern Platform for Analytics
Cloudera - The Modern Platform for AnalyticsCloudera - The Modern Platform for Analytics
Cloudera - The Modern Platform for Analytics
 
SSAS Tabular model importance and uses
SSAS  Tabular model importance and usesSSAS  Tabular model importance and uses
SSAS Tabular model importance and uses
 
The chaology of markets
The chaology of marketsThe chaology of markets
The chaology of markets
 
Plotly dash and data visualisation in Python
Plotly dash and data visualisation in PythonPlotly dash and data visualisation in Python
Plotly dash and data visualisation in Python
 
Power BI Architecture
Power BI ArchitecturePower BI Architecture
Power BI Architecture
 
Separation of concerns - DPC12
Separation of concerns - DPC12Separation of concerns - DPC12
Separation of concerns - DPC12
 
Sql notes, sql server,sql queries,introduction of SQL, Beginner in SQL
Sql notes, sql server,sql queries,introduction of SQL, Beginner in SQLSql notes, sql server,sql queries,introduction of SQL, Beginner in SQL
Sql notes, sql server,sql queries,introduction of SQL, Beginner in SQL
 
Tableau online training || Tableau Server
Tableau online training || Tableau ServerTableau online training || Tableau Server
Tableau online training || Tableau Server
 
Top 65 SQL Interview Questions and Answers | Edureka
Top 65 SQL Interview Questions and Answers | EdurekaTop 65 SQL Interview Questions and Answers | Edureka
Top 65 SQL Interview Questions and Answers | Edureka
 
Slowly changing dimension
Slowly changing dimension Slowly changing dimension
Slowly changing dimension
 
1. Apache HIVE
1. Apache HIVE1. Apache HIVE
1. Apache HIVE
 
Designing Scalable Data Warehouse Using MySQL
Designing Scalable Data Warehouse Using MySQLDesigning Scalable Data Warehouse Using MySQL
Designing Scalable Data Warehouse Using MySQL
 
Power BI Interview Questions and Answers | Power BI Certification | Power BI ...
Power BI Interview Questions and Answers | Power BI Certification | Power BI ...Power BI Interview Questions and Answers | Power BI Certification | Power BI ...
Power BI Interview Questions and Answers | Power BI Certification | Power BI ...
 
Tableau Architecture
Tableau ArchitectureTableau Architecture
Tableau Architecture
 
Data Structures In Scala
Data Structures In ScalaData Structures In Scala
Data Structures In Scala
 

Ähnlich wie Spark Machine Learning: Adding Your Own Algorithms and Tools with Holden Karau and Seth Hendrickson

Writing Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark APIWriting Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark API
Databricks
 

Ähnlich wie Spark Machine Learning: Adding Your Own Algorithms and Tools with Holden Karau and Seth Hendrickson (20)

Extending spark ML for custom models now with python!
Extending spark ML for custom models  now with python!Extending spark ML for custom models  now with python!
Extending spark ML for custom models now with python!
 
Ml pipelines with Apache spark and Apache beam - Ottawa Reactive meetup Augus...
Ml pipelines with Apache spark and Apache beam - Ottawa Reactive meetup Augus...Ml pipelines with Apache spark and Apache beam - Ottawa Reactive meetup Augus...
Ml pipelines with Apache spark and Apache beam - Ottawa Reactive meetup Augus...
 
Introduction to and Extending Spark ML
Introduction to and Extending Spark MLIntroduction to and Extending Spark ML
Introduction to and Extending Spark ML
 
An introduction into Spark ML plus how to go beyond when you get stuck
An introduction into Spark ML plus how to go beyond when you get stuckAn introduction into Spark ML plus how to go beyond when you get stuck
An introduction into Spark ML plus how to go beyond when you get stuck
 
Introducing Apache Spark's Data Frames and Dataset APIs workshop series
Introducing Apache Spark's Data Frames and Dataset APIs workshop seriesIntroducing Apache Spark's Data Frames and Dataset APIs workshop series
Introducing Apache Spark's Data Frames and Dataset APIs workshop series
 
Holden Karau - Spark ML for Custom Models
Holden Karau - Spark ML for Custom ModelsHolden Karau - Spark ML for Custom Models
Holden Karau - Spark ML for Custom Models
 
Spark ML for custom models - FOSDEM HPC 2017
Spark ML for custom models - FOSDEM HPC 2017Spark ML for custom models - FOSDEM HPC 2017
Spark ML for custom models - FOSDEM HPC 2017
 
Writing Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark APIWriting Continuous Applications with Structured Streaming PySpark API
Writing Continuous Applications with Structured Streaming PySpark API
 
Analytics Metrics delivery and ML Feature visualization: Evolution of Data Pl...
Analytics Metrics delivery and ML Feature visualization: Evolution of Data Pl...Analytics Metrics delivery and ML Feature visualization: Evolution of Data Pl...
Analytics Metrics delivery and ML Feature visualization: Evolution of Data Pl...
 
Introduction to Spark Datasets - Functional and relational together at last
Introduction to Spark Datasets - Functional and relational together at lastIntroduction to Spark Datasets - Functional and relational together at last
Introduction to Spark Datasets - Functional and relational together at last
 
Intro to Spark and Spark SQL
Intro to Spark and Spark SQLIntro to Spark and Spark SQL
Intro to Spark and Spark SQL
 
Getting started with Apache Spark in Python - PyLadies Toronto 2016
Getting started with Apache Spark in Python - PyLadies Toronto 2016Getting started with Apache Spark in Python - PyLadies Toronto 2016
Getting started with Apache Spark in Python - PyLadies Toronto 2016
 
Spark streaming , Spark SQL
Spark streaming , Spark SQLSpark streaming , Spark SQL
Spark streaming , Spark SQL
 
Beyond Wordcount with spark datasets (and scalaing) - Nide PDX Jan 2018
Beyond Wordcount  with spark datasets (and scalaing) - Nide PDX Jan 2018Beyond Wordcount  with spark datasets (and scalaing) - Nide PDX Jan 2018
Beyond Wordcount with spark datasets (and scalaing) - Nide PDX Jan 2018
 
ETL with SPARK - First Spark London meetup
ETL with SPARK - First Spark London meetupETL with SPARK - First Spark London meetup
ETL with SPARK - First Spark London meetup
 
ScalaTo July 2019 - No more struggles with Apache Spark workloads in production
ScalaTo July 2019 - No more struggles with Apache Spark workloads in productionScalaTo July 2019 - No more struggles with Apache Spark workloads in production
ScalaTo July 2019 - No more struggles with Apache Spark workloads in production
 
Spark DataFrames: Simple and Fast Analytics on Structured Data at Spark Summi...
Spark DataFrames: Simple and Fast Analytics on Structured Data at Spark Summi...Spark DataFrames: Simple and Fast Analytics on Structured Data at Spark Summi...
Spark DataFrames: Simple and Fast Analytics on Structured Data at Spark Summi...
 
4Developers 2018: Pyt(h)on vs słoń: aktualny stan przetwarzania dużych danych...
4Developers 2018: Pyt(h)on vs słoń: aktualny stan przetwarzania dużych danych...4Developers 2018: Pyt(h)on vs słoń: aktualny stan przetwarzania dużych danych...
4Developers 2018: Pyt(h)on vs słoń: aktualny stan przetwarzania dużych danych...
 
Apache Spark Performance Troubleshooting at Scale, Challenges, Tools, and Met...
Apache Spark Performance Troubleshooting at Scale, Challenges, Tools, and Met...Apache Spark Performance Troubleshooting at Scale, Challenges, Tools, and Met...
Apache Spark Performance Troubleshooting at Scale, Challenges, Tools, and Met...
 
Scylla Summit 2016: Analytics Show Time - Spark and Presto Powered by Scylla
Scylla Summit 2016: Analytics Show Time - Spark and Presto Powered by ScyllaScylla Summit 2016: Analytics Show Time - Spark and Presto Powered by Scylla
Scylla Summit 2016: Analytics Show Time - Spark and Presto Powered by Scylla
 

Mehr von Databricks

Democratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized PlatformDemocratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized Platform
Databricks
 
Stage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI IntegrationStage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI Integration
Databricks
 
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorchSimplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Databricks
 
Raven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction QueriesRaven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction Queries
Databricks
 
Processing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache SparkProcessing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache Spark
Databricks
 

Mehr von Databricks (20)

DW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptxDW Migration Webinar-March 2022.pptx
DW Migration Webinar-March 2022.pptx
 
Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1Data Lakehouse Symposium | Day 1 | Part 1
Data Lakehouse Symposium | Day 1 | Part 1
 
Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2Data Lakehouse Symposium | Day 1 | Part 2
Data Lakehouse Symposium | Day 1 | Part 2
 
Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2Data Lakehouse Symposium | Day 2
Data Lakehouse Symposium | Day 2
 
Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4Data Lakehouse Symposium | Day 4
Data Lakehouse Symposium | Day 4
 
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
 
Democratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized PlatformDemocratizing Data Quality Through a Centralized Platform
Democratizing Data Quality Through a Centralized Platform
 
Learn to Use Databricks for Data Science
Learn to Use Databricks for Data ScienceLearn to Use Databricks for Data Science
Learn to Use Databricks for Data Science
 
Why APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML MonitoringWhy APM Is Not the Same As ML Monitoring
Why APM Is Not the Same As ML Monitoring
 
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch FixThe Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
 
Stage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI IntegrationStage Level Scheduling Improving Big Data and AI Integration
Stage Level Scheduling Improving Big Data and AI Integration
 
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorchSimplify Data Conversion from Spark to TensorFlow and PyTorch
Simplify Data Conversion from Spark to TensorFlow and PyTorch
 
Scaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on KubernetesScaling your Data Pipelines with Apache Spark on Kubernetes
Scaling your Data Pipelines with Apache Spark on Kubernetes
 
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark PipelinesScaling and Unifying SciKit Learn and Apache Spark Pipelines
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
 
Sawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature AggregationsSawtooth Windows for Feature Aggregations
Sawtooth Windows for Feature Aggregations
 
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen SinkRedis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
 
Re-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and SparkRe-imagine Data Monitoring with whylogs and Spark
Re-imagine Data Monitoring with whylogs and Spark
 
Raven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction QueriesRaven: End-to-end Optimization of ML Prediction Queries
Raven: End-to-end Optimization of ML Prediction Queries
 
Processing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache SparkProcessing Large Datasets for ADAS Applications using Apache Spark
Processing Large Datasets for ADAS Applications using Apache Spark
 
Massive Data Processing in Adobe Using Delta Lake
Massive Data Processing in Adobe Using Delta LakeMassive Data Processing in Adobe Using Delta Lake
Massive Data Processing in Adobe Using Delta Lake
 

Kürzlich hochgeladen

Top profile Call Girls In Hapur [ 7014168258 ] Call Me For Genuine Models We ...
Top profile Call Girls In Hapur [ 7014168258 ] Call Me For Genuine Models We ...Top profile Call Girls In Hapur [ 7014168258 ] Call Me For Genuine Models We ...
Top profile Call Girls In Hapur [ 7014168258 ] Call Me For Genuine Models We ...
nirzagarg
 
Lecture_2_Deep_Learning_Overview-newone1
Lecture_2_Deep_Learning_Overview-newone1Lecture_2_Deep_Learning_Overview-newone1
Lecture_2_Deep_Learning_Overview-newone1
ranjankumarbehera14
 
Jual Cytotec Asli Obat Aborsi No. 1 Paling Manjur
Jual Cytotec Asli Obat Aborsi No. 1 Paling ManjurJual Cytotec Asli Obat Aborsi No. 1 Paling Manjur
Jual Cytotec Asli Obat Aborsi No. 1 Paling Manjur
ptikerjasaptiker
 
一比一原版(UCD毕业证书)加州大学戴维斯分校毕业证成绩单原件一模一样
一比一原版(UCD毕业证书)加州大学戴维斯分校毕业证成绩单原件一模一样一比一原版(UCD毕业证书)加州大学戴维斯分校毕业证成绩单原件一模一样
一比一原版(UCD毕业证书)加州大学戴维斯分校毕业证成绩单原件一模一样
wsppdmt
 
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi ArabiaIn Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
ahmedjiabur940
 
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
q6pzkpark
 
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
ZurliaSoop
 
Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...
Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...
Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...
gajnagarg
 
怎样办理纽约州立大学宾汉姆顿分校毕业证(SUNY-Bin毕业证书)成绩单学校原版复制
怎样办理纽约州立大学宾汉姆顿分校毕业证(SUNY-Bin毕业证书)成绩单学校原版复制怎样办理纽约州立大学宾汉姆顿分校毕业证(SUNY-Bin毕业证书)成绩单学校原版复制
怎样办理纽约州立大学宾汉姆顿分校毕业证(SUNY-Bin毕业证书)成绩单学校原版复制
vexqp
 
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
nirzagarg
 
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get CytotecAbortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Riyadh +966572737505 get cytotec
 
怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制
怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制
怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制
vexqp
 
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Klinik kandungan
 
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
wsppdmt
 

Kürzlich hochgeladen (20)

Top profile Call Girls In Hapur [ 7014168258 ] Call Me For Genuine Models We ...
Top profile Call Girls In Hapur [ 7014168258 ] Call Me For Genuine Models We ...Top profile Call Girls In Hapur [ 7014168258 ] Call Me For Genuine Models We ...
Top profile Call Girls In Hapur [ 7014168258 ] Call Me For Genuine Models We ...
 
Discover Why Less is More in B2B Research
Discover Why Less is More in B2B ResearchDiscover Why Less is More in B2B Research
Discover Why Less is More in B2B Research
 
DATA SUMMIT 24 Building Real-Time Pipelines With FLaNK
DATA SUMMIT 24  Building Real-Time Pipelines With FLaNKDATA SUMMIT 24  Building Real-Time Pipelines With FLaNK
DATA SUMMIT 24 Building Real-Time Pipelines With FLaNK
 
Lecture_2_Deep_Learning_Overview-newone1
Lecture_2_Deep_Learning_Overview-newone1Lecture_2_Deep_Learning_Overview-newone1
Lecture_2_Deep_Learning_Overview-newone1
 
Ranking and Scoring Exercises for Research
Ranking and Scoring Exercises for ResearchRanking and Scoring Exercises for Research
Ranking and Scoring Exercises for Research
 
Jual Cytotec Asli Obat Aborsi No. 1 Paling Manjur
Jual Cytotec Asli Obat Aborsi No. 1 Paling ManjurJual Cytotec Asli Obat Aborsi No. 1 Paling Manjur
Jual Cytotec Asli Obat Aborsi No. 1 Paling Manjur
 
一比一原版(UCD毕业证书)加州大学戴维斯分校毕业证成绩单原件一模一样
一比一原版(UCD毕业证书)加州大学戴维斯分校毕业证成绩单原件一模一样一比一原版(UCD毕业证书)加州大学戴维斯分校毕业证成绩单原件一模一样
一比一原版(UCD毕业证书)加州大学戴维斯分校毕业证成绩单原件一模一样
 
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi ArabiaIn Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
In Riyadh ((+919101817206)) Cytotec kit @ Abortion Pills Saudi Arabia
 
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
 
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
 
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
 
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
 
Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...
Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...
Top profile Call Girls In Chandrapur [ 7014168258 ] Call Me For Genuine Model...
 
怎样办理纽约州立大学宾汉姆顿分校毕业证(SUNY-Bin毕业证书)成绩单学校原版复制
怎样办理纽约州立大学宾汉姆顿分校毕业证(SUNY-Bin毕业证书)成绩单学校原版复制怎样办理纽约州立大学宾汉姆顿分校毕业证(SUNY-Bin毕业证书)成绩单学校原版复制
怎样办理纽约州立大学宾汉姆顿分校毕业证(SUNY-Bin毕业证书)成绩单学校原版复制
 
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
 
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get CytotecAbortion pills in Doha Qatar (+966572737505 ! Get Cytotec
Abortion pills in Doha Qatar (+966572737505 ! Get Cytotec
 
怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制
怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制
怎样办理圣路易斯大学毕业证(SLU毕业证书)成绩单学校原版复制
 
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
Jual obat aborsi Bandung ( 085657271886 ) Cytote pil telat bulan penggugur ka...
 
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
如何办理英国诺森比亚大学毕业证(NU毕业证书)成绩单原件一模一样
 
Vadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book now
Vadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book nowVadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book now
Vadodara 💋 Call Girl 7737669865 Call Girls in Vadodara Escort service book now
 

Spark Machine Learning: Adding Your Own Algorithms and Tools with Holden Karau and Seth Hendrickson

  • 1. Extending Spark ML Estimators and Transformers kroszk@ Built with public APIs* *Scala only - see developer for details.
  • 2. Holden: ● My name is Holden Karau ● Prefered pronouns are she/her ● I’m a Principal Software Engineer at IBM’s Spark Technology Center ● Apache Spark committer (as of January!) :) ● previously Alpine, Databricks, Google, Foursquare & Amazon ● co-author of Learning Spark & Fast Data processing with Spark ○ co-author of a new book focused on Spark performance coming this year* ● @holdenkarau ● Slide share http://www.slideshare.net/hkarau ● Linkedin https://www.linkedin.com/in/holdenkarau ● Github https://github.com/holdenk ● Spark Videos http://bit.ly/holdenSparkVideos
  • 3.
  • 4. Seth: ● Data Scientist at Cloudera ● Previously machine learning engineer at IBM’s Spark Technology Center ● Two years contributing to Spark MLlib ● Twitter: @shendrickson16 ● Linkedin https://www.linkedin.com/in/sethah ● Github https://github.com/sethah ● SlideShare http://www.slideshare.net/SethHendrickson
  • 5. Spark Technology Center 5 IBM Spark Technology Center Founded in 2015. Location: Physical: 505 Howard St., San Francisco CA Web: http://spark.tc Twitter: @apachespark_tc Mission: Contribute intellectual and technical capital to the Apache Spark community. Make the core technology enterprise- and cloud-ready. Build data science skills to drive intelligence into business applications — http://bigdatauniversity.com Key statistics: About 50 developers, co-located with 25 IBM designers. Major contributions to Apache Spark http://jiras.spark.tc Apache SystemML is now an Apache Incubator project. Founding member of UC Berkeley AMPLab and RISE Lab Member of R Consortium and Scala Center Spark Technology Center
  • 6. Who I think you wonderful humans are? ● Nice enough people ● Don’t mind pictures of cats ● Might know some Apache Spark ● Possibly know some Scala ● Think machine learning is kind of cool ● Don’t overly mind a grab-bag of topics Lori Erickson
  • 7. What are we going to talk about? ● What Spark ML pipelines look like ● What Estimators and Transformers are ● How to implement both of them ● What tools can help us ● Publishing your fancy new Spark model so other’s (like me) can use it! ● Holden will of course try and sell you many copies of her new book if you have an expense account.
  • 8. Loading data Spark SQL (DataSets) sparkSession.read returns a DataFrameReader We can specify general properties & data specific options ● option(“key”, “value”) ○ spark-csv ones we will use are header & inferSchema ● format(“formatName”) ○ built in formats include parquet, jdbc, etc. ● load(“path”) Jess Johnson
  • 9. Loading some simple CSV data val df = spark.read .option("inferSchema", "true") .option("delimiter", ";") .format("csv") .load("hdfs:///user/data/admissions.csv") Jess Johnson
  • 10. Spark ML Pipelines Pipeline Stage ?data Pipeline Stage Pipeline Stage Pipeline Stage ... Pipeline
  • 11. Spark ML Pipelines Pipeline Stage ?data Pipeline Stage Pipeline Stage Pipeline Stage ... Pipeline data ? Also a pipeline stage!
  • 12. Two main types of pipeline stages Pipeline Stage ?data Transformer Estimatordata data data transformer
  • 13. Pipelines are estimators Pipeline data model Also an estimator! Transformer Transformer Estimator
  • 14. PipelineModels are transformers PipelineModel data data Also a transformer! Transformer Transformer Transformer
  • 15. How are transformers made? Estimator data class Estimator extends PipelineStage { def fit(dataset: Dataset[_]): Transformer = { // magic happens here } } Transformer
  • 16. How is new data made? Transformer ( data ) class Transformer extends PipelineStage { def transform(df: Dataset[_]): DataFrame } new data.transform
  • 17. Feature transformations +-----+-----+----+--------+ |admit| gre| gpa|prestige| +-----+-----+----+--------+ | no|380.0|3.61| 3.0| | yes|660.0|3.67| 3.0| | yes|800.0| 4.0| 1.0| | yes|640.0|3.19| 4.0| | no|520.0|2.93| 4.0| +-----+-----+----+--------+ val assembler = new VectorAssembler() .setInputCols(Array("gre", "gpa", "prestige")) val df2 = assembler.transform(df) VectorAssembler +-----+-----+----+--------+----------------+ |admit| gre| gpa|prestige| features| +-----+-----+----+--------+----------------+ | no|380.0|3.61| 3.0|[380.0,3.61,3.0]| | yes|660.0|3.67| 3.0|[660.0,3.67,3.0]| | yes|800.0| 4.0| 1.0| [800.0,4.0,1.0]| | yes|640.0|3.19| 4.0|[640.0,3.19,4.0]| | no|520.0|2.93| 4.0|[520.0,2.93,4.0]| +-----+-----+----+--------+----------------+
  • 18. Train a classifier on the transformed data StringIndexer StringIndexerModel val si = new StringIndexer().setInputCol("admit").setOutputCol("label") val siModel = si.fit(df2) val df3 = siModel.transform(df2) +-----+-----+----+--------+----------------+ |admit| gre| gpa|prestige| features| +-----+-----+----+--------+----------------+ | no|380.0|3.61| 3.0|[380.0,3.61,3.0]| | yes|660.0|3.67| 3.0|[660.0,3.67,3.0]| | yes|800.0| 4.0| 1.0| [800.0,4.0,1.0]| | yes|640.0|3.19| 4.0|[640.0,3.19,4.0]| | no|520.0|2.93| 4.0|[520.0,2.93,4.0]| +-----+-----+----+--------+----------------+ +-----+-----+----+--------+----------------+-----+ |admit| gre| gpa|prestige| features|label| +-----+-----+----+--------+----------------+-----+ | no|380.0|3.61| 3.0|[380.0,3.61,3.0]| 0.0| | yes|660.0|3.67| 3.0|[660.0,3.67,3.0]| 1.0| | yes|800.0| 4.0| 1.0| [800.0,4.0,1.0]| 1.0| | yes|640.0|3.19| 4.0|[640.0,3.19,4.0]| 1.0| | no|520.0|2.93| 4.0|[520.0,2.93,4.0]| 0.0| +-----+-----+----+--------+----------------+-----+
  • 19. Train a classifier on the transformed data +----------------+-----+ | features|label| +----------------+-----+ |[380.0,3.61,3.0]| 0.0| |[660.0,3.67,3.0]| 1.0| | [800.0,4.0,1.0]| 1.0| |[640.0,3.19,4.0]| 1.0| |[520.0,2.93,4.0]| 0.0| +----------------+-----+ DecisionTreeClassifier DecisionTree ClassificationModel +----------------+-----+----------+ | features|label|prediction| +----------------+-----+----------+ |[380.0,3.61,3.0]| 0.0| 0.0| |[660.0,3.67,3.0]| 1.0| 0.0| | [800.0,4.0,1.0]| 1.0| 1.0| |[640.0,3.19,4.0]| 1.0| 1.0| |[520.0,2.93,4.0]| 0.0| 0.0| +----------------+-----+----------+ val dt = new DecisionTreeClassifier() val dtModel = dt.fit(df3) val df4 = dtModel.transform(df3)
  • 20. Or just throw it all in a pipeline ● Keeping track of intermediate data and calling fit/transform on every stage is way too much work ● This problem is worse when more stages are used ● Use a pipeline instead! val assembler = new VectorAssembler() assembler.setInputCols(Array("gre", "gpa", "prestige")) val sb = new StringIndexer() sb.setInputCol("admit").setOutputCol("label") val dt = new DecisionTreeClassifier() val pipeline = new Pipeline() pipeline.setStages(Array(assembler, sb, dt)) val pipelineModel = pipeline.fit(df)
  • 21. Yay! You have an ML pipeline! Photo by Jessica Fiess-Hill
  • 22. Pipeline API has many models: ● org.apache.spark.ml.classification ○ BinaryLogisticRegressionClassification, DecissionTreeClassification, GBTClassifier, etc. ● org.apache.spark.ml.regression ○ DecissionTreeRegression, GBTRegressor, IsotonicRegression, LinearRegression, etc. ● org.apache.spark.ml.recommendation ○ ALS ● You can also check out spark-packages for some more ● But possible not your special AwesomeFooBazinatorML PROcarterse Follow
  • 23. & data prep stages... ● org.apache.spark.ml.feature ○ ~30 elements from VectorAssembler to Tokenizer, to PCA, etc. ● Often simpler to understand while getting started with building our own stages PROcarterse Follow
  • 24. So now begins our adventure to add stages
  • 25. So what does a pipeline stage look like? Must provide: ● transformSchema (used to validate input schema is reasonable) & copy Often have: ● Special params for configuration (so we can do meta-algorithms) Wendy Piersall
  • 26. Building a simple transformer: class HardCodedWordCountStage(override val uid: String) extends Transformer { def this() = this(Identifiable.randomUID("hardcodedwordcount")) def copy(extra: ParamMap): HardCodedWordCountStage = { defaultCopy(extra) } ... } Not to be confused with the Transformers franchise from Hasbro and Tomy.
  • 27. Verify the input schema is reasonable: override def transformSchema(schema: StructType): StructType = { // Check that the input type is a string val idx = schema.fieldIndex("happy_pandas") val field = schema.fields(idx) if (field.dataType != StringType) { throw new Exception(s"Input type ${field.dataType} did not match input type StringType") } // Add the return field schema.add(StructField("happy_panda_counts", IntegerType, false)) }
  • 28. How is transformSchema used? ● When you call fit on a pipeline it calls transformSchema on the pipeline stages in order ● This is used to verify that things should work ● Ideally allows pipelines to fail fast when misconfigured, instead of at the final stage of a 48-hour process ● Doesn’t always work that way :p Tricia Hall
  • 29. Do the “work” (e.g. predict labels or w/e): def transform(df: Dataset[_]): DataFrame = { val wordcount = udf { in: String => in.split(" ").size } df.select(col("*"), wordcount(df.col("happy_pandas")).as("happy_panda_counts")) } vic15
  • 30. What about configuring our stage? class ConfigurableWordCount(override val uid: String) extends Transformer { final val inputCol= new Param[String](this, "inputCol", "The input column") final val outputCol = new Param[String](this, "outputCol", "The output column") def setInputCol(value: String): this.type = set(inputCol, value) def setOutputCol(value: String): this.type = set(outputCol, value) Jason Wesley Upton
  • 31. So why do we configure it that way? ● Allow meta algorithms to work on it ● If you look inside of spark you’ll see “sharedParams” for common params (like input column) ● We can’t access those unless we pretend to be inside of org.apache.spark - so we have to make our own Tricia Hall
  • 32. So how to make an estimator? ● Very similar, instead of directly providing transform provide a `fit` which returns a “model” which implements the estimator interface as shown above ● Also take a look at the algorithms in Spark itself (helpful traits you can mixin to take care of many common things). ● Let’s look at a simple one now! sneakerdog
  • 33. A simple string indexer estimator class SimpleIndexer(override val uid: String) extends Estimator[SimpleIndexerModel] with SimpleIndexerParams { …. override def fit(dataset: Dataset[_]): SimpleIndexerModel = { import dataset.sparkSession.implicits._ val words = dataset.select(dataset($(inputCol)).as[String]).distinct .collect() new SimpleIndexerModel(uid, words) } }
  • 34. Quick aside: What’ts that “$(inputCol)”? ● How you get access to a configuration parameter ● Inside stage only (external use getInputCol just like Java™ :p)
  • 35. And our friend the transformer is back: class SimpleIndexerModel( override val uid: String, words: Array[String]) extends Model[SimpleIndexerModel] with SimpleIndexerParams { ... private val labelToIndex: Map[String, Double] = words.zipWithIndex. map{case (x, y) => (x, y.toDouble)}.toMap override def transform(dataset: Dataset[_]): DataFrame = { val indexer = udf { label: String => labelToIndex(label) } dataset.select(col("*"), indexer(dataset($(inputCol)).cast(StringType)).as($(outputCol))) Still not to be confused with the Transformers franchise from Hasbro and Tomy.
  • 36. Ok so how do you make the train function? ● Read some papers on the algorithm(s) you care about ● Most likely some iterative approach (pro-tip: RDDs > Datasets for iterative) ○ Seth has some interesting work around pluggable optimizers ● Closed form solution? Go have a party!
  • 37. What else can you add to your models? ● Put in an ML pipeline ● Do hyper-parameter tuning And if you have some coffee left over: ● Persistence* ○ MLWriter & MLReader give you the basics ○ You’ll have to do a lot of work yourself :( ● Serving* *With enough coffee. Not guaranteed.
  • 38. Ok so I put my new fancy thing on GitHub ● Yay thank you! ● Please publish to maven central ● Also consider listing on spark-packages + user@ list ○ Let me know ( holden@pigscanfly.ca ) :) ● Think of the Python users (and I guess the R users) too?
  • 39. Custom Estimators/Transformers in the Wild Classification/Regression xgboost Deep Learning! MXNet Feature Transformation FeatureHasher
  • 40. More resources: ● High Performance Spark Example Repo has some sample models ○ Of course buy several copies of the book - it is the gift of the season :p ● The models inside of Spark itself (use some internal APIs but a good starting point) ● Nick Pentreath’s FeatureHasher ● O’Reilly radar blog post https://www.oreilly.com/learning/extend-structured-streaming-for-spark-ml Captain Pancakes
  • 41. Learning Spark Fast Data Processing with Spark (Out of Date) Fast Data Processing with Spark (2nd edition) Advanced Analytics with Spark Spark in Action Coming soon: High Performance Spark Learning PySpark
  • 42. The next book….. Available in “Early Release”*: ● Buy from O’Reilly - http://bit.ly/highPerfSpark ● Extending ML is covered in Chapter 9 Get notified when updated & finished: ● http://www.highperformancespark.com ● https://twitter.com/highperfspark ● Should be finished between May 22nd ~ June 18th :D * Early Release means extra mistakes, but also a chance to help us make a more awesome book.
  • 43. And some upcoming talks: ● June ○ Berlin Buzzwords ○ Scala Swarm (Porto, Portugal)
  • 44. k thnx bye :) If you care about Spark testing and don’t hate surveys: http://bit.ly/holdenTestingSpark Will tweet results “eventually” @holdenkarau Any PySpark Users: Have some simple UDFs you wish ran faster you are willing to share?: http://bit.ly/pySparkUDF Pssst: Have feedback on the presentation? Give me a shout (holden@pigscanfly.ca) if you feel comfortable doing so :)
  • 46. Cross-validation because saving a test set is effort ● Automagically* fit your model params ● Because thinking is effort ● org.apache.spark.ml.tuning has the tools Jonathan Kotta
  • 47. Cross-validation because saving a test set is effort & a reason to integrate // ParamGridBuilder constructs an Array of parameter combinations. val paramGrid: Array[ParamMap] = new ParamGridBuilder() .addGrid(nb.smoothing, Array(0.1, 0.5, 1.0, 2.0)) .build() val cv = new CrossValidator() .setEstimator(pipeline) .setEstimatorParamMaps(paramGrid) val cvModel = cv.fit(df) val bestModel = cvModel.bestModel Jonathan Kotta
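    Note: in real code CrossValidator also wants an evaluator (and a fold count) so it knows which model is “best” - something like:
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

    val cv = new CrossValidator()
      .setEstimator(pipeline)
      .setEstimatorParamMaps(paramGrid)
      .setEvaluator(new BinaryClassificationEvaluator())  // areaUnderROC by default
      .setNumFolds(3)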
  • 48. So what does a pipeline stage look like? It is either an: ● Estimator - has a method called “fit” which returns a Transformer (e.g. NaiveBayes, etc.) ● Transformer - no need to train, can directly transform with “transform” (e.g. HashingTF, VectorAssembler, etc.) Wendy Piersall
  • 49. We’ve left out a lot of “transformSchema”... ● It is necessary (but I’m lazy) ● But there are helper classes that can implement some of the boilerplate we’ve been skipping ● Classifier & Estimator base classes are your friends ● They provide transformSchema
  • 50. Let’s make a Classifier* :) // Example only - not for production use. class SimpleNaiveBayes(val uid: String) extends Classifier[Vector, SimpleNaiveBayes, SimpleNaiveBayesModel] { Input type Trained Model
  • 51. Let’s make a Classifier* :) override def train(ds: Dataset[_]): SimpleNaiveBayesModel = { import ds.sparkSession.implicits._ ds.cache() …. … …. }
  • 52. If you reallllly want to see inside the ...s (1/5) // Get the number of features by peeking at the first row val numFeatures: Integer = ds.select(col($(featuresCol))).head .get(0).asInstanceOf[Vector].size // Determine the number of records for each class val groupedByLabel = ds.select(col($(labelCol)).as[Double]).groupByKey(x => x) val classCounts = groupedByLabel.agg(count("*").as[Long]) .sort(col("value")).collect().toMap // Select the labels and features so we can more easily map over them. // Note: we do this as a DataFrame using the untyped API because the Vector // UDT is no longer public. val df = ds.select(col($(labelCol)).cast(DoubleType), col($(featuresCol)))
  • 53. If you reallllly want to see inside the ...s (2/5) // Note: you can use getNumClasses & extractLabeledPoints to get an RDD instead // Using the RDD approach is common when integrating with legacy machine learning code // or iterative algorithms which can create large query plans. // Here we use `Datasets` since neither of those apply. // Compute the number of documents val numDocs = ds.count // Get the number of classes. // Note this estimator assumes they start at 0 and go to numClasses val numClasses = getNumClasses(ds)
  • 54. If you reallllly want to see inside the ...s (3/5) // Figure out the non-zero frequency of each feature for each label and // output label index pairs using a case class to make it easier to work with. val labelCounts: Dataset[LabeledToken] = df.flatMap { case Row(label: Double, features: Vector) => features.toArray.zip(Stream from 0) .filter{vIdx => vIdx._1 == 1.0} .map{case (v, idx) => LabeledToken(label, idx)} } // Use the typed Dataset aggregation API to count the number of non-zero // features for each label-feature index. val aggregatedCounts: Array[((Double, Integer), Long)] = labelCounts .groupByKey(x => (x.label, x.index)) .agg(count("*").as[Long]).collect() val theta = Array.fill(numClasses)(new Array[Double](numFeatures))
  • 55. If you reallllly want to see inside the ...s (4/5) // Compute the denominator for the general priors val piLogDenom = math.log(numDocs + numClasses) // Compute the priors for each class val pi = classCounts.map{case(_, cc) => math.log(cc.toDouble) - piLogDenom }.toArray // For each label/feature update the probabilities aggregatedCounts.foreach{case ((label, featureIndex), count) => // log of number of documents for this label + 2.0 (smoothing) val thetaLogDenom = math.log( classCounts.get(label).map(_.toDouble).getOrElse(0.0) + 2.0) theta(label.toInt)(featureIndex) = math.log(count + 1.0) - thetaLogDenom } // Unpersist now that we are done computing everything ds.unpersist()
  • 56. If you reallllly want to see inside the ...s (5/5) // Construct a model new SimpleNaiveBayesModel(uid, numClasses, numFeatures, Vectors.dense(pi), new DenseMatrix(numClasses, theta(0).length, theta.flatten, true)) } override def copy(extra: ParamMap) = { defaultCopy(extra) } }
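    For completeness, a hedged sketch of what the model side might look like - ClassificationModel mostly just needs numClasses, predictRaw & copy (the scoring below is a simplified multinomial-style version; a proper Bernoulli model also accounts for absent features):
    import org.apache.spark.ml.classification.ClassificationModel
    import org.apache.spark.ml.linalg.{DenseMatrix, Vector, Vectors}
    import org.apache.spark.ml.param.ParamMap

    class SimpleNaiveBayesModel(
        override val uid: String,
        override val numClasses: Int,
        override val numFeatures: Int,
        val pi: Vector,
        val theta: DenseMatrix)
      extends ClassificationModel[Vector, SimpleNaiveBayesModel] {

      // Copy param values onto a fresh instance (defaultCopy needs a uid-only constructor).
      override def copy(extra: ParamMap): SimpleNaiveBayesModel = {
        copyValues(new SimpleNaiveBayesModel(uid, numClasses, numFeatures, pi, theta), extra)
      }

      override def predictRaw(features: Vector): Vector = {
        // Raw score per class: log prior + sum of (feature value * log P(feature | class)).
        val scores = Array.tabulate(numClasses) { c =>
          var score = pi(c)
          features.foreachActive { (idx, value) =>
            score += value * theta(c, idx)
          }
          score
        }
        Vectors.dense(scores)
      }
    }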
  • 57. What is Spark? ● General purpose distributed system ○ With a really nice API including Python :) ● Apache project (one of the most active) ● Much faster than Hadoop Map/Reduce ● Good when data is too big for a single machine ● Built on top of two abstractions for distributed data: RDDs & Datasets
  • 58. DataFrames & Datasets Totally the future ● Distributed collection ● Recomputed on node failure ● Distributes data & work across the cluster ● Lazily evaluated (transformations & actions) ● Has runtime schema information ● Allows for relational queries & supports SQL ● Declarative - many optimizations applied automagically ● Input for Spark Machine Learning Helen Olney
  • 59. What is the performance like? Andrew Skudder
  • 60. Spark ML pipelines Tokenizer HashingTF String Indexer Naive Bayes Tokenizer HashingTF String Indexer Naive Bayes fit(df) Estimator Transformer ● Consist of different stages (estimators or transformers) ● Are themselves estimators ● We are going to build a stage together!
  • 61. Minimal data prep: ● At a minimum most algorithms in Spark work on feature vectors of doubles (and if labeled - doubles too) Imports: import org.apache.spark.ml._ import org.apache.spark.ml.feature._ import org.apache.spark.ml.classification._ import org.apache.spark.ml.linalg.{Vector => SparkVector} Huang Yun Chung
  • 62. Minimal prep continued // Combines a list of double input features into a vector val assembler = new VectorAssembler() assembler.setInputCols(Array("age", "education-num")) // String indexer converts a set of strings into doubles val sb = new StringIndexer() sb.setInputCol("category").setOutputCol("category-index") // Can be used to combine pipeline components together val pipeline = new Pipeline() pipeline.setStages(Array(assembler, sb)) Huang Yun Chung
  • 63. Minimal prep continued
    val assembler = new VectorAssembler()
    assembler.setInputCols(Array("gre", "gpa", "prestige"))
    val si = new StringIndexer()
    si.setInputCol("admit").setOutputCol("label")
    val pipeline = new Pipeline()
    pipeline.setStages(Array(assembler, si))
    Huang Yun Chung
    Input data:
    +-----+-----+----+--------+
    |admit|  gre| gpa|prestige|
    +-----+-----+----+--------+
    |   no|380.0|3.61|     3.0|
    |  yes|660.0|3.67|     3.0|
    |  yes|800.0| 4.0|     1.0|
    |  yes|640.0|3.19|     4.0|
    |   no|520.0|2.93|     4.0|
    +-----+-----+----+--------+
    After the VectorAssembler:
    +-----+-----+----+--------+----------------+
    |admit|  gre| gpa|prestige|        features|
    +-----+-----+----+--------+----------------+
    |   no|380.0|3.61|     3.0|[380.0,3.61,3.0]|
    |  yes|660.0|3.67|     3.0|[660.0,3.67,3.0]|
    |  yes|800.0| 4.0|     1.0| [800.0,4.0,1.0]|
    |  yes|640.0|3.19|     4.0|[640.0,3.19,4.0]|
    |   no|520.0|2.93|     4.0|[520.0,2.93,4.0]|
    +-----+-----+----+--------+----------------+
    After the StringIndexer adds the label:
    +-----+-----+----+--------+----------------+-----+
    |admit|  gre| gpa|prestige|        features|label|
    +-----+-----+----+--------+----------------+-----+
    |   no|380.0|3.61|     3.0|[380.0,3.61,3.0]|  0.0|
    |  yes|660.0|3.67|     3.0|[660.0,3.67,3.0]|  1.0|
    |  yes|800.0| 4.0|     1.0| [800.0,4.0,1.0]|  1.0|
    |  yes|640.0|3.19|     4.0|[640.0,3.19,4.0]|  1.0|
    |   no|520.0|2.93|     4.0|[520.0,2.93,4.0]|  0.0|
    +-----+-----+----+--------+----------------+-----+
  • 64. So a bit more about that pipeline ● Each of our previous components has its own “fit” & “transform” ● Constructing the pipeline this way makes it easier to work with (only need to call one fit & one transform) ● Can re-use the fitted model on future data val model = pipeline.fit(df) val prepared = model.transform(df) Andrey
  • 65. Let's train a model on our prepared data: // Specify model val dt = new DecisionTreeClassifier() dt.setFeaturesCol("features") dt.setPredictionCol("prediction") // Fit it val dtModel = dt.fit(prepared)
  • 66. Or wait, let's just add it to the pipeline: // Specify model val dt = new DecisionTreeClassifier() dt.setFeaturesCol("features") dt.setPredictionCol("prediction") // Add to the pipeline pipeline.setStages(Array(assembler, si, dt)) val pipelineModel = pipeline.fit(df)
  • 67. And predict the results on the same data: pipelineModel.transform(df).select("prediction", "label").take(20) +----------+-----+ |prediction|label| +----------+-----+ | 0.0| 0.0| | 0.0| 1.0| | 1.0| 1.0| | 1.0| 1.0| | 0.0| 0.0| +----------+-----+
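    And if you want a single number out of that, an evaluator can score the predictions (accuracy on the training data is just a sanity check - a held-out set would be better):
    import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

    val evaluator = new MulticlassClassificationEvaluator()
      .setLabelCol("label")
      .setPredictionCol("prediction")
      .setMetricName("accuracy")
    val accuracy = evaluator.evaluate(pipelineModel.transform(df))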