This document summarizes a talk given by Chris Fregly on Spark SQL and DataFrames. The talk covered the Catalyst optimizer and query plans, the Data Sources API, creating and contributing custom data sources, partitions, pruning, and pushdowns, native and third-party data sources in Spark SQL, and Spark SQL performance tuning. It also promoted an upcoming meetup series on advanced Apache Spark topics.
1. IBM | spark.tc
Scotland Data Science Meetup
Spark SQL + DataFrames + Catalyst + Data Sources API
Chris Fregly, Principal Data Solutions Engineer
IBM Spark Technology Center
Oct 13, 2015
Power of data. Simplicity of design. Speed of innovation.
3. IBM | spark.tc
Who am I?
Streaming Data Engineer
Netflix Open Source Committer

Data Solutions Engineer
Apache Contributor

Principal Data Solutions Engineer
IBM Spark Technology Center

Meetup Organizer
Advanced Apache Spark Meetup

Book Author
Advanced Spark (2016)
4. IBM | spark.tc
meetup.com/Advanced-Apache-Spark-Meetup/
Total Spark Experts: 1200+ in only 3 months!
#5 most active Spark Meetup in the world!

Goals
Dig deep into the Spark & extended-Spark codebase

Study integrations such as Cassandra, ElasticSearch,
Tachyon, S3, BlinkDB, Mesos, YARN, Kafka, R, etc.

Surface and share the patterns and idioms of these
well-designed, distributed, big data components
5. IBM | spark.tc
Recent Events
Cassandra Summit 2015
Real-time Advanced Analytics w/ Spark & Cassandra

Strata NYC 2015
Practical Data Science w/ Spark: Recommender Systems

All Slides Available on Slideshare
http://slideshare.net/cfregly
6. IBM | spark.tc
Upcoming Advanced Apache Spark Meetups
Project Tungsten Data Structs/Algos for CPU/Memory Optimization
Nov 12th, 2015
Text-based Advanced Analytics and Machine Learning
Jan 14th, 2016
ElasticSearch-Spark Connector w/ Costin Leau (Elastic.co) & Me
Feb 16th, 2016
Spark Internals Deep Dive
Mar 24th, 2016
Spark SQL Catalyst Optimizer Deep Dive
Apr 21st, 2016
7. IBM | spark.tc
Freg-a-palooza Upcoming World Tour
• London Spark Meetup (Oct 12th)
• Scotland Data Science Meetup (Oct 13th)
• Dublin Spark Meetup (Oct 15th)
• Barcelona Spark Meetup (Oct 20th)
• Madrid Spark/Big Data Meetup (Oct 22nd)
• Paris Spark Meetup (Oct 26th)
• Amsterdam Spark Summit (Oct 27th – Oct 29th)
• Delft Dutch Data Science Meetup (Oct 29th)
• Brussels Spark Meetup (Oct 30th)
• Zurich Big Data Developers Meetup (Nov 2nd)
High probability I'll end up in jail or married!
8. IBM | spark.tc
Slides and Videos
Slides
Links posted in Meetup directly

Videos
Most talks are live streamed and/or video recorded
Links posted in Meetup directly

All Slides Available on Slideshare
http://slideshare.net/cfregly
9. IBM | spark.tc
Last Meetup (Spark Wins 100 TB Daytona GraySort)
On-disk only, in-memory caching disabled!
sortbenchmark.org/ApacheSpark2014.pdf
10. Spark SQL + DataFrames
Catalyst + Data Sources API
11. IBM | spark.tc
Topics of this Talk
• DataFrames
• Catalyst Optimizer and Query Plans
• Data Sources API
• Creating and Contributing a Custom Data Source
• Partitions, Pruning, Pushdowns
• Native + Third-Party Data Source Impls
• Spark SQL Performance Tuning
12. IBM | spark.tc
DataFrames
Inspired by R and Pandas DataFrames
Cross-language support: SQL, Python, Scala, Java, R
Levels the performance playing field across Python, Scala, Java, and R
Generates JVM bytecode vs serializing/pickling objects to Python
A DataFrame is a container for a logical plan
Transformations are lazy and represented as a tree
Catalyst Optimizer creates the physical plan
DataFrame.rdd returns the underlying RDD if needed
Custom UDFs via registerFunction()
New, experimental UDAF support
Use DataFrames instead of RDDs!
13. IBM | spark.tc
Catalyst Optimizer
Converts the logical plan to a physical plan
Manipulates & optimizes the DataFrame transformation tree
Subquery elimination – use aliases to collapse subqueries
Constant folding – replace expressions with constants
Simplify filters – remove unnecessary filters
Predicate/filter pushdowns – avoid unnecessary data loads
Projection collapsing – avoid unnecessary projections
Hooks for custom rules
Rules = Scala case classes
val newPlan = MyFilterRule(analyzedPlan)
Implements o.a.s.sql.catalyst.rules.Rule
Apply to any plan stage
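The "Rules = Scala case classes" point can be made concrete with a toy tree rewrite. This is a self-contained sketch, not Spark's actual `Rule`/`TreeNode` classes: a hypothetical two-node expression tree and a constant-folding pass over it, applied bottom-up the way Catalyst rules pattern-match over plan trees.

```scala
// Toy expression tree mirroring the shape of Catalyst's rule model.
// These are NOT Spark's classes -- just an illustration of a rule as
// a pattern match over a tree of case classes.
sealed trait Expr
case class Literal(value: Int) extends Expr
case class Add(left: Expr, right: Expr) extends Expr

// A "rule" transforms a plan into an equivalent, cheaper plan.
object ConstantFolding {
  def apply(e: Expr): Expr = e match {
    case Add(l, r) =>
      (apply(l), apply(r)) match {
        case (Literal(a), Literal(b)) => Literal(a + b) // fold constants
        case (fl, fr)                 => Add(fl, fr)
      }
    case other => other
  }
}

val plan = Add(Literal(1), Add(Literal(2), Literal(3)))
println(ConstantFolding(plan)) // Literal(6)
```

The real Catalyst rules follow the same shape: a case class (or object) whose `apply` rewrites one tree into an equivalent tree, composed with other rules until a fixed point is reached.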
14. IBM | spark.tc
Plan Debugging
gendersCsvDF.select($"id", $"gender").filter("gender != 'F'").filter("gender != 'M'").explain(true)
Requires explain(true)
DataFrame.queryExecution.logical
DataFrame.queryExecution.analyzed
DataFrame.queryExecution.optimizedPlan
DataFrame.queryExecution.executedPlan
15. IBM | spark.tc
Plan Visualization & Join/Aggregation Metrics
Effectiveness of Filter
Cost-based Optimization is Applied
Peak Memory for Joins and Aggs
Optimized CPU-cache-aware Binary Format
Minimizes GC & Improves Join Perf (Project Tungsten)
New in Spark 1.5!
16. IBM | spark.tc
Data Sources API
Relations (o.a.s.sql.sources.interfaces.scala)
BaseRelation (abstract class): Provides schema of data
TableScan (impl): Read all data from source, construct rows
PrunedFilteredScan (impl): Read with column pruning & predicate pushdowns
InsertableRelation (impl): Insert or overwrite data based on SaveMode enum
RelationProvider (trait/interface): Handles user options, creates BaseRelation
Execution (o.a.s.sql.execution.commands.scala)
RunnableCommand (trait/interface)
ExplainCommand (impl: case class)
CacheTableCommand (impl: case class)
Filters (o.a.s.sql.sources.filters.scala)
Filter (abstract class for all filter pushdowns for this data source)
EqualTo (impl)
GreaterThan (impl)
StringStartsWith (impl)
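To make the division of labor concrete, here is a toy model that mirrors the *shape* of these interfaces. The trait and filter names echo the API above, but these are simplified, hypothetical types, not the real o.a.s.sql.sources traits: a PrunedFilteredScan-style relation receives only the required columns and the pushable filters, and applies both at scan time.

```scala
// Toy sketch of the Data Sources API shape (NOT Spark's actual traits).
sealed trait Filter
case class EqualTo(attribute: String, value: String) extends Filter
case class GreaterThan(attribute: String, value: Int) extends Filter

type Row = Map[String, String]

trait BaseRelation { def schema: Seq[String] }
trait PrunedFilteredScan {
  // Spark hands the source only the needed columns and pushed-down filters.
  def buildScan(requiredColumns: Seq[String], filters: Seq[Filter]): Seq[Row]
}

class ToyRelation(data: Seq[Row]) extends BaseRelation with PrunedFilteredScan {
  def schema: Seq[String] = Seq("id", "gender")
  def buildScan(requiredColumns: Seq[String], filters: Seq[Filter]): Seq[Row] = {
    val kept = data.filter(row => filters.forall {
      case EqualTo(attr, v)     => row.get(attr).contains(v)          // filter pushdown
      case GreaterThan(attr, v) => row.get(attr).exists(_.toInt > v)
    })
    kept.map(_.filter { case (k, _) => requiredColumns.contains(k) }) // column pruning
  }
}

val rel = new ToyRelation(Seq(
  Map("id" -> "1", "gender" -> "F"),
  Map("id" -> "2", "gender" -> "M")))
println(rel.buildScan(Seq("id"), Seq(EqualTo("gender", "M")))) // List(Map(id -> 2))
```

In the real API the relation also reports back which filters it could not handle, so Spark re-applies those after the scan.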
17. IBM | spark.tc
Creating a Custom Data Source
Study Existing Native and Third-Party Data Source Impls

Native: JDBC (o.a.s.sql.execution.datasources.jdbc)
class JDBCRelation extends BaseRelation
  with PrunedFilteredScan
  with InsertableRelation

Third-Party: Cassandra (o.a.s.sql.cassandra)
class CassandraSourceRelation extends BaseRelation
  with PrunedFilteredScan
  with InsertableRelation
18. IBM | spark.tc
Contributing a Custom Data Source
spark-packages.org
Managed by Databricks
Contains links to externally-managed GitHub projects
Ratings and comments
Spark version requirements of each package
Examples
https://github.com/databricks/spark-csv
https://github.com/databricks/spark-avro
https://github.com/databricks/spark-redshift
20. IBM | spark.tc
Demo Dataset (from previous Spark After Dark talks)
RATINGS
========
UserID,ProfileID,Rating (1-10)

GENDERS
========
UserID,Gender (M,F,U)

<-- Totally Anonymous -->
21. IBM | spark.tc
Partitions
Partition based on data usage patterns
/genders.parquet/gender=M/…
                /gender=F/…  <-- Use case: access users by gender
                /gender=U/…
Partition Discovery
On read, infer partitions from the organization of the data (e.g. gender=F)
Dynamic Partitions
Upon insert, dynamically create partitions
Specify the field to use for each partition (e.g. gender)
SQL: INSERT INTO TABLE genders PARTITION (gender) SELECT …
DF: gendersDF.write.format("parquet").partitionBy("gender").save(…)
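A minimal sketch of what partition discovery does with a path like the one above: pull partition columns out of `key=value` directory segments. This is a simplification; Spark's real implementation also handles value types, nested partition columns, and escaping.

```scala
// Toy partition discovery: infer partition columns from directory names
// such as "gender=F" (simplified relative to Spark's implementation).
def inferPartition(path: String): Map[String, String] =
  path.split("/").collect {
    case segment if segment.contains("=") =>
      val Array(key, value) = segment.split("=", 2)
      key -> value
  }.toMap

println(inferPartition("/genders.parquet/gender=F/part-00000.parquet"))
// Map(gender -> F)
```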
22. IBM | spark.tc
Pruning
Partition Pruning
Filter out entire partitions of rows on partitioned data
SELECT id, gender FROM genders WHERE gender = 'U'
Column Pruning
Filter out entire columns for all rows if not required
Extremely useful for columnar storage formats
Parquet, ORC
SELECT id, gender FROM genders
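Partition pruning can be sketched in a few lines: with the gender-partitioned layout from the previous slide, an equality predicate on the partition column eliminates whole directories before a single row is read. A toy illustration, not Spark's implementation:

```scala
// Toy partition pruning: skip entire partition directories whose
// partition value cannot satisfy an equality predicate.
val partitionDirs = Seq("gender=M", "gender=F", "gender=U")

def prunePartitions(dirs: Seq[String], column: String, value: String): Seq[String] =
  dirs.filter(_ == s"$column=$value")

println(prunePartitions(partitionDirs, "gender", "U")) // List(gender=U)
```

Column pruning is the orthogonal move: within the surviving partitions, a columnar format like Parquet reads only the requested columns off disk.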
23. IBM | spark.tc
Pushdowns
Predicate (aka Filter) Pushdowns
Predicate returns {true, false} for a given function/condition
Filters rows as deep into the data source as possible
Data Source must implement PrunedFilteredScan
31. IBM | spark.tc
CSV Data Source (Databricks)
Github
https://github.com/databricks/spark-csv

Maven
com.databricks:spark-csv_2.10:1.2.0

Code
val gendersCsvDF = sqlContext.read
  .format("com.databricks.spark.csv")
  .load("file:/root/pipeline/datasets/dating/gender.csv.bz2")
  .toDF("id", "gender")
toDF() defines column names
32. IBM | spark.tc
Avro Data Source (Databricks)
Github
https://github.com/databricks/spark-avro

Maven
com.databricks:spark-avro_2.10:2.0.1

Code
val df = sqlContext.read
  .format("com.databricks.spark.avro")
  .load("file:/root/pipeline/datasets/dating/gender.avro")
33. IBM | spark.tc
ElasticSearch Data Source (Elastic.co)
Github
https://github.com/elastic/elasticsearch-hadoop
Maven
org.elasticsearch:elasticsearch-spark_2.10:2.1.0
Code
val esConfig = Map("pushdown" -> "true", "es.nodes" -> "<hostname>",
  "es.port" -> "<port>")
df.write.format("org.elasticsearch.spark.sql").mode(SaveMode.Overwrite)
  .options(esConfig).save("<index>/<document>")
34. IBM | spark.tc
Cassandra Data Source (DataStax)
Github
https://github.com/datastax/spark-cassandra-connector
Maven
com.datastax.spark:spark-cassandra-connector_2.10:1.5.0-M1
Code
ratingsDF.write
  .format("org.apache.spark.sql.cassandra")
  .mode(SaveMode.Append)
  .options(Map("keyspace" -> "<keyspace>",
               "table" -> "<table>")).save(…)
35. IBM | spark.tc
Cassandra Pushdown Rules
Determines which filter predicates can be pushed down to Cassandra:
1. Only push down non-partition-key column predicates with =, >, <, >=, <= predicates.
2. Only push down primary key column predicates with = or IN predicates.
3. If there are regular columns in the pushdown predicates, they should have
   at least one EQ expression on an indexed column and no IN predicates.
4. All partition column predicates must be included in the predicates to be pushed down;
   only the last part of the partition key can be an IN predicate. For each partition column,
   only one predicate is allowed.
5. For clustering column predicates, only the last predicate can be a non-EQ predicate,
   including an IN predicate, and preceding column predicates must be EQ predicates.
   If there is only one clustering column predicate, it can be any non-IN predicate.
6. There are no pushdown predicates if there is any OR condition or NOT IN condition.
7. We're not allowed to push down multiple predicates for the same column if any of them
   is an equality or IN predicate.
spark-cassandra-connector/…/o.a.s.sql.cassandra.PredicatePushDown.scala
36. IBM | spark.tc
Special Thanks to DataStax!
Russell Spitzer
@RussSpitzer
(He created the following few slides.)
(These guys built a lot of the connector.)
46. IBM | spark.tc
Spark-Cassandra Optimizations and Next Steps
Bypass the CQL front door
Bulk read/write directly to SSTables
Rumored to be in existence
DataStax Enterprise only?
Closed Source Alert!
47. IBM | spark.tc
Redshift Data Source (Databricks)
Github
https://github.com/databricks/spark-redshift
Maven
com.databricks:spark-redshift:0.5.0
Code
val df: DataFrame = sqlContext.read
  .format("com.databricks.spark.redshift")
  .option("url", "jdbc:redshift://<hostname>:<port>/<database>…")
  .option("query", "select x, count(*) from my_table group by x")
  .option("tempdir", "s3n://tmpdir")
  .load(...)
Copies to S3 for fast, parallel reads vs single Redshift Master bottleneck
48. IBM | spark.tc
Cloudant Data Source (IBM)
Github
http://spark-packages.org/package/cloudant/spark-cloudant
Maven
(coordinates listed on the spark-packages.org page above)
Code
ratingsDF.write.format("com.cloudant.spark")
  .mode(SaveMode.Append)
  .options(Map("cloudant.host" -> "<account>.cloudant.com",
               "cloudant.username" -> "<username>",
               "cloudant.password" -> "<password>"))
  .save("<filename>")
49. IBM | spark.tc
DB2 and BigSQL Data Sources (IBM)
Coming Soon!

https://github.com/SparkTC/spark-db2
https://github.com/SparkTC/spark-bigsql
50. IBM | spark.tc
REST Data Source (Databricks)
Coming Soon!
https://github.com/databricks/spark-rest?
Michael Armbrust
Spark SQL Lead @ Databricks
52. IBM | spark.tc
Spark SQL Performance Tuning (o.a.s.sql.SQLConf)
spark.sql.inMemoryColumnarStorage.compressed=true
Automatically selects a column codec based on the data
spark.sql.inMemoryColumnarStorage.batchSize
Increase as much as possible without OOM – improves compression and GC
spark.sql.inMemoryPartitionPruning=true
Enable partition pruning for in-memory partitions
spark.sql.tungsten.enabled=true
Code gen for CPU and memory optimizations (Tungsten aka Unsafe Mode)
spark.sql.shuffle.partitions
Increase from the default 200 for large joins and aggregations
spark.sql.autoBroadcastJoinThreshold
Increase to tune this cost-based, physical plan optimization
spark.sql.hive.metastorePartitionPruning
Predicate pushdown into the metastore to prune partitions early
spark.sql.planner.sortMergeJoin
Prefer sort-merge (vs. hash join) for large joins
spark.sql.sources.partitionDiscovery.enabled
& spark.sql.sources.parallelPartitionDiscovery.threshold
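One way to apply the settings above is a spark-defaults.conf fragment. The keys are the Spark 1.5-era names from this slide; the numeric values here are illustrative starting points for this workload, not universal recommendations, and should be tuned per job.

```properties
# Illustrative spark-defaults.conf fragment (Spark 1.5-era keys).
# Values are assumptions/starting points -- tune per workload.
spark.sql.inMemoryColumnarStorage.compressed  true
spark.sql.inMemoryColumnarStorage.batchSize   10000
spark.sql.inMemoryPartitionPruning            true
spark.sql.tungsten.enabled                    true
spark.sql.shuffle.partitions                  400
```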
53. IBM | spark.tc
Related Links
https://github.com/datastax/spark-cassandra-connector
http://blog.madhukaraphatak.com/anatomy-of-spark-dataframe-api/
https://github.com/phatak-dev/anatomy_of_spark_dataframe_api
https://databricks.com/blog/
https://www.youtube.com/watch?v=uxuLRiNoDio
http://www.slideshare.net/RussellSpitzer
54. IBM | spark.tc
Freg-a-palooza Upcoming World Tour
• London Spark Meetup (Oct 12th)
• Scotland Data Science Meetup (Oct 13th)
• Dublin Spark Meetup (Oct 15th)
• Barcelona Spark Meetup (Oct 20th)
• Madrid Spark/Big Data Meetup (Oct 22nd)
• Paris Spark Meetup (Oct 26th)
• Amsterdam Spark Summit (Oct 27th – Oct 29th)
• Delft Dutch Data Science Meetup (Oct 29th)
• Brussels Spark Meetup (Oct 30th)
• Zurich Big Data Developers Meetup (Nov 2nd)
High probability I'll end up in jail or married!
55. http://spark.tc/datapalooza
IBM Spark Tech Center is Hiring!
☺ Only Fun, Collaborative People! ☺
IBM | spark.tc
Sign up for our newsletter at
Thank You!
Power of data. Simplicity of design. Speed of innovation.
Coming to Your City!!!!
56. Power of data. Simplicity of design. Speed of innovation.
IBM Spark