Apache Spark—Apache HBase Connector: Feature Rich and Efficient Access to HBase through Spark SQL with Weiqing Yang

Both Spark and HBase are widely used, but using them together with high performance and simplicity is challenging. The Spark HBase Connector (SHC) provides feature-rich and efficient access to HBase through Spark SQL. It bridges the gap between the simple HBase key-value store and complex relational SQL queries, enabling users to perform complex data analytics on top of HBase using Spark. SHC implements the standard Spark data source APIs and leverages the Spark Catalyst engine for query optimization. To achieve high performance, SHC constructs its RDD from scratch instead of using the standard HadoopRDD. With the customized RDD, all critical techniques can be applied and fully implemented, such as partition pruning, column pruning, predicate pushdown, and data locality. The design keeps maintenance easy while achieving a good tradeoff between performance and simplicity.

In addition to fully supporting all Avro schemas natively, SHC also integrates natively with Phoenix data types. With SHC, Spark can execute batch jobs to read and write data from and into Phoenix tables, and Phoenix can read and write data from and into HBase tables created by SHC. For example, users can run a complex SQL query inside Spark on top of an HBase table created by Phoenix, join it against a DataFrame that reads data from a Hive table, or integrate with Spark Streaming to build a more complicated system.

In this talk, apart from explaining why SHC is of great use, we will also demo how SHC works, how to use it in secure and non-secure clusters, and how it works with multiple secure HBase clusters. This talk will also benefit people who use Spark with data sources other than HBase, as it offers ideas for supporting high-performance data source access at the Spark DataFrame level.

Slide 1: Apache Spark – Apache HBase Connector
Feature-Rich and Efficient Access to HBase through Spark SQL
Weiqing Yang, Mingjie Tang
October 2017
© Hortonworks Inc. 2011 – 2017. All Rights Reserved
Slide 2: About the Authors
→ Weiqing Yang
  • Contributes to Apache Spark, Apache Hadoop, Apache HBase, Apache Ambari
  • Software Engineer at Hortonworks
→ Mingjie Tang
  • Spark SQL, Spark MLlib, Spark Streaming, data mining, machine learning
  • Software Engineer at Hortonworks
→ … and all other SHC contributors
Slide 3: Agenda
– Motivation
– Overview
– Architecture & Implementation
– Usage & Demo
Slide 4: Motivation
→ Limited Spark support in HBase upstream
  – RDD level only, while Spark is moving to DataFrame/Dataset
→ Existing connectors at the DataFrame level
  – Complicated design
    • Embedding the optimization plan inside the Catalyst engine
    • Stability impact with coprocessors
    • Serialized RDD lineage stored in HBase
  – Heavy maintenance overhead
Slide 5: Overview
Slide 6: Apache Spark – Apache HBase Connector (SHC)
→ Combines Spark and HBase
  – Spark Catalyst engine for query planning and optimization
  – HBase as a fast-access KV store
  – Implements a standard external data source with built-in filters (sketched below); easy to maintain
→ Full-fledged DataFrame support
  – Spark SQL
  – Language-integrated queries
→ High performance
  – Partition pruning, data locality, column pruning, predicate pushdown
  – Uses the Spark unhandledFilters API
  – Caches Spark-HBase connections
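To make the "standard external data source" and unhandledFilters points concrete, here is a minimal sketch against the Spark data source v1 API; the relation class and the set of pushed-down predicates are illustrative assumptions, not SHC's actual implementation:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, EqualTo, Filter, GreaterThan, LessThan, PrunedFilteredScan}
import org.apache.spark.sql.types.StructType

// Hypothetical relation illustrating the data source v1 hooks SHC builds on.
class ExampleHBaseRelation(val sqlContext: SQLContext, val schema: StructType)
    extends BaseRelation with PrunedFilteredScan {

  // Predicates this source can push down to HBase (scan ranges, built-in filters).
  private def pushable(f: Filter): Boolean = f match {
    case _: EqualTo | _: GreaterThan | _: LessThan => true
    case _ => false
  }

  // Spark re-evaluates only the filters returned here, so fully pushed-down
  // predicates are not evaluated twice.
  override def unhandledFilters(filters: Array[Filter]): Array[Filter] =
    filters.filterNot(pushable)

  // Column pruning (requiredColumns) and predicate pushdown (filters)
  // both arrive through this single entry point.
  override def buildScan(requiredColumns: Array[String],
                         filters: Array[Filter]): RDD[Row] = {
    // ... translate filters into HBase Scans/Gets and build a custom RDD ...
    sqlContext.sparkContext.emptyRDD[Row]
  }
}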
Slide 7: Data Coder & Data Schema
→ Supports different data coders
  – PrimitiveType: native support for Java primitive types
  – Avro: native support for Avro encoding/decoding
  – Phoenix: Phoenix encoding/decoding
  – Plug-in data coders
  – Can run on top of existing HBase tables
→ Supports composite keys (a completed example follows this slide)
  def cat = s"""{
    |"table":{"namespace":"default", "name":"shcExampleTable", "tableCoder":"Phoenix"},
    |"rowkey":"key1:key2",
    |"columns":{
    |"col00":{"cf":"rowkey", "col":"key1", "type":"string"},
    |"col01":{"cf":"rowkey", "col":"key2", "type":"int"},
    … ...
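The catalog on the slide is cut off. A completed version might look like the following; only the table, rowkey, and key-column lines come from the slide, while the trailing column family and column definitions are assumptions for illustration:

// Hypothetical completion of the composite-key catalog above. "rowkey" is a
// virtual column family; key1 and key2 are concatenated into the row key.
def cat = s"""{
  |"table":{"namespace":"default", "name":"shcExampleTable", "tableCoder":"Phoenix"},
  |"rowkey":"key1:key2",
  |"columns":{
  |"col00":{"cf":"rowkey", "col":"key1", "type":"string"},
  |"col01":{"cf":"rowkey", "col":"key2", "type":"int"},
  |"col1":{"cf":"cf1", "col":"col1", "type":"boolean"},
  |"col2":{"cf":"cf2", "col":"col2", "type":"double"}
  |}
|}""".stripMargin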
Slide 8: Architecture & Implementation
Slide 9: Architecture
[Picture 1. SHC architecture: a Spark driver and executors on one side, HBase region servers on the other, with executors and region servers co-located per host]
Slide 10: Architecture
[Picture 1. SHC architecture: the driver compiles a query into partitions, each carrying filters and required columns, using region server start/end points; executors run tasks that issue Scans and BulkGets against the region servers]
Example: sqlContext.sql("select count(col1) from table1 where key < 'row050'")
Slide 11: Implementation
[Picture 1. SHC architecture]
→ Partition pruning: a task is only performed on the region servers holding the requested data (see the sketch below)
→ Filters -> multiple scan ranges ∩ (region start point, end point)
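A minimal sketch of the range-intersection idea, with hypothetical names and plain lexicographic string comparison standing in for HBase's byte-wise row-key ordering:

// Hypothetical row-key range; None means unbounded on that side.
case class RowKeyRange(start: Option[String], end: Option[String])

// Intersect a filter-derived range with a region's [start, end) boundary;
// an empty intersection means the region is pruned (no task scheduled).
def intersect(filter: RowKeyRange, region: RowKeyRange): Option[RowKeyRange] = {
  val lo = (filter.start.toSeq ++ region.start.toSeq).sorted.lastOption
  val hi = (filter.end.toSeq ++ region.end.toSeq).sorted.headOption
  (lo, hi) match {
    case (Some(l), Some(h)) if l >= h => None // no overlap: prune this region
    case _ => Some(RowKeyRange(lo, hi))
  }
}

// key < 'row050' gives RowKeyRange(None, Some("row050")); a region covering
// ["row060", "row090") has no overlap, so no task runs there.
intersect(RowKeyRange(None, Some("row050")), RowKeyRange(Some("row060"), Some("row090")))
// => None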
Slide 12: Implementation
[Picture 1. SHC architecture]
→ Data locality: move computation to the data (see the sketch below)
→ Each RDD partition has a preferred location:
  getPreferredLocations(partition) { return RS.hostName }
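A minimal sketch of how such a custom RDD can report locality; the partition class and constructor arguments are hypothetical (SHC's real RDD also carries scan ranges and schema information):

import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row

// Hypothetical partition that remembers its region server's hostname.
case class HBaseScanPartition(index: Int, regionServerHost: String) extends Partition

class HBaseScanRDD(sc: SparkContext, regionHosts: Seq[String]) extends RDD[Row](sc, Nil) {

  // One partition per region, in this simplified sketch.
  override protected def getPartitions: Array[Partition] =
    regionHosts.zipWithIndex.map { case (host, i) => HBaseScanPartition(i, host) }.toArray

  // The scheduler tries to place each task on this host, so the scan
  // reads from the local region server.
  override protected def getPreferredLocations(split: Partition): Seq[String] =
    Seq(split.asInstanceOf[HBaseScanPartition].regionServerHost)

  override def compute(split: Partition, context: TaskContext): Iterator[Row] = {
    // ... open Scans/Gets against the local region and decode into Rows ...
    Iterator.empty
  }
}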
Slide 13: Implementation
[Picture 1. SHC architecture]
→ Column pruning: only the required columns travel from the driver through the partitions to the region servers
→ Predicate pushdown: HBase built-in filters
Slide 14: Implementation
[Picture 1. SHC architecture]
→ Scans and BulkGets: grouped by region server (see the sketch below)
→ WHERE column > x AND column < y maps to a scan; WHERE column = x maps to a get
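A sketch of the two access paths using the plain HBase client API; the table name, column family, and qualifier are made-up examples, and withStartRow/withStopRow assume HBase 1.4 or later:

import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.{Connection, Get, Result, Scan}
import org.apache.hadoop.hbase.util.Bytes
import scala.collection.JavaConverters._

// WHERE key >= lo AND key < hi -> one Scan over the range [lo, hi),
// restricted to the required column (column pruning).
def rangeScan(conn: Connection, lo: String, hi: String): Iterator[Result] = {
  val table = conn.getTable(TableName.valueOf("table1"))
  val scan = new Scan()
    .withStartRow(Bytes.toBytes(lo))
    .withStopRow(Bytes.toBytes(hi))
    .addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("col1"))
  table.getScanner(scan).iterator().asScala
}

// WHERE key = x for a batch of keys -> point lookups sent as one bulk Get.
def bulkGet(conn: Connection, keys: Seq[String]): Seq[Result] = {
  val table = conn.getTable(TableName.valueOf("table1"))
  val gets = keys.map(k => new Get(Bytes.toBytes(k))).asJava
  table.get(gets).toSeq
}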
Slide 15: Usage & Demo
Slide 16: How to Use SHC?
→ GitHub
  – https://github.com/hortonworks-spark/shc
→ SHC examples
  – https://github.com/hortonworks-spark/shc/tree/master/examples
→ Apache HBase JIRA
  – https://issues.apache.org/jira/browse/HBASE-14789
Slide 17: Demo
→ Interactive jobs through the Spark shell
→ Batch jobs
Slide 18: Acknowledgement
→ The HBase and Spark communities
→ All SHC contributors, Zhan Zhang
Slide 19: Reference
→ Hortonworks public repo
  – http://repo.hortonworks.com/content/repositories/releases/com/hortonworks/
→ Apache Spark
  – http://spark.apache.org/
→ Apache HBase
  – https://hbase.apache.org/
Slide 20: Thanks
Q & A
Email: wyang@hortonworks.com
Slide 21: BACKUP
Slide 22: Kerberos Cluster
→ Kerberos ticket
  – kinit -kt foo.keytab foouser, or a principal/keytab pair
→ Long-running services
  – --principal, --keytab
→ Multiple secure HBase clusters
  – Spark itself only supports a single secure HBase cluster
  – Use the SHC credential manager
  – Refer to the LRJobAccessing2Clusters example on GitHub
Slide 23: Usage
Define the catalog for the schema mapping (a sketch follows below):
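The catalog itself appeared only as an image on this slide and is missing from the transcript. A sketch consistent with the examples on the next slides (row key col0; columns col1 and col4 used in the queries) might look like this; the column family and types are assumptions:

// Hypothetical catalog: "col0" maps to the row key; col1/col4 map to qualifiers
// in an assumed column family "cf1"; the types are guesses for illustration.
def catalog = s"""{
  |"table":{"namespace":"default", "name":"table1"},
  |"rowkey":"key",
  |"columns":{
  |"col0":{"cf":"rowkey", "col":"key", "type":"string"},
  |"col1":{"cf":"cf1", "col":"col1", "type":"boolean"},
  |"col4":{"cf":"cf1", "col":"col4", "type":"int"}
  |}
|}""".stripMargin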
Slide 24: Usage
→ Prepare the data and populate the HBase table
val data = (0 to 255).map { i => HBaseRecord(i, "extra") }
sc.parallelize(data).toDF.write
  .options(Map(HBaseTableCatalog.tableCatalog -> catalog,
               HBaseTableCatalog.newTable -> "5"))
  .format("org.apache.spark.sql.execution.datasources.hbase")
  .save()
Slide 25: Usage
→ Load the DataFrame
def withCatalog(cat: String): DataFrame = {
  sqlContext
    .read
    .options(Map(HBaseTableCatalog.tableCatalog -> cat))
    .format("org.apache.spark.sql.execution.datasources.hbase")
    .load()
}
val df = withCatalog(catalog)
Slide 26: Usage
→ Query
Language-integrated query:
val s = df.filter((($"col0" <= "row050" && $"col0" > "row040") ||
    $"col0" === "row005") &&
  ($"col4" === 1 || $"col4" === 42))
  .select("col0", "col1", "col4")
SQL:
df.registerTempTable("table")
sqlContext.sql("select count(col1) from table").show
Slide 27: Usage
→ Work with different data sources
// Part 1: read data from a Hive table
val df1 = sql("SELECT * FROM shcHiveTable")
// Part 2: read data from an HBase table
val df2 = withCatalog(cat)
// Part 3: join the two DataFrames
val s1 = df1.filter($"key" <= "40").select("key", "col1")
val s2 = df2.filter($"key" <= "20" && $"key" >= "1").select("key", "col2")
val result = s1.join(s2, Seq("key"))
result.show()
