Druid is an analytics-focused, distributed, scale-out data store. Existing Druid clusters have scaled to petabytes of data and trillions of events, ingesting millions of events every second. Up until version 0.10, Druid could only be queried in a JSON-based language that many users found unfamiliar.
Enter Apache Calcite. It includes an industry-standard SQL parser, validator, and JDBC driver, as well as a cost-based relational optimizer. Calcite bills itself as “the foundation for your next high-performance database” and is used by Hive, Drill, and a variety of other projects. Druid uses Calcite to power Druid SQL, a standards-based query API that vaults Druid out of the NoSQL world and into the SQL world.
Gian Merlino offers an overview of Druid SQL and explains how Druid and Calcite are integrated and why you should stop worrying and learn to love relational algebra in your own projects.
5. What is Druid?
● “high performance”: low query latency, high ingest rates
● “column-oriented”: best possible scan rates
● “distributed”: deployed in clusters, typically 10s–100s of nodes
● “data store”: the cluster stores a copy of your data
7. The Problem
● OLAP slice-and-dice for big data
● Interactive exploration
● Look under the hood of reports and dashboards
● And we want our data fresh, too
9. Challenges
● Scale: big data is tough to process quickly
● Complexity: too much fine grain to precompute
● High dimensionality: 10s or 100s of dimensions
● Concurrency: many users and tenants
● Freshness: load from streams
12. Powered by Druid
From Yahoo: “The performance is great ... some of the tables that we have internally in Druid have billions and billions of events in them, and we’re scanning them in under a second.”
Source: https://www.infoworld.com/article/2949168/hadoop/yahoo-struts-its-hadoop-stuff.html
13. Druid Key Features
● Low latency ingestion from Kafka
● Bulk load from Hadoop
● Can pre-aggregate data during ingestion
● “Schema light”
● Ad-hoc queries
● Exact and approximate algorithms
● Can keep a lot of history (years are ok)
16. What is NoSQL?
“There's no strong definition of the concept out there, no
trademarks, no standard group, not even a manifesto.”
Source: https://martinfowler.com/bliki/NosqlDefinition.html
17. What is NoSQL?
Early examples:
Voldemort, Cassandra, Dynomite,
HBase, Hypertable, CouchDB, MongoDB
Source: https://martinfowler.com/bliki/NosqlDefinition.html
18. What is NoSQL?
What are they?
● Document stores
● Key/value stores
● Graph databases
● Timeseries databases
19. What is NoSQL?
● Not using the relational model (nor the SQL language)
● Open source
● Designed to run on large clusters
● Based on the needs of 21st century web properties
● No schema, allowing fields to be added to any record without controls
Source: https://martinfowler.com/bliki/NosqlDefinition.html
20. Categorizing Druid
● Not using the relational model (nor the SQL language)
● Open source
● Designed to run on large clusters
● Based on the needs of 21st century web properties
● No schema, allowing fields to be added to any record without controls
Source: https://martinfowler.com/bliki/NosqlDefinition.html
27. The Relational Model
● The relational model is based around relations
● SQL calls them tables and those tables have columns
● SQL queries describe relational operations
○ Scan
○ Project
○ Filter
○ Aggregate
○ Union
○ Join
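As a toy illustration (plain Python over in-memory rows; none of this is Druid or Calcite code, and all names are invented), the operators above compose like a pipeline:

```python
# Rows of a small "sales" relation, held in memory.
sales = [
    {"product_id": 212, "user_id": 1, "revenue": 180.00},
    {"product_id": 998, "user_id": 2, "revenue": 24.95},
    {"product_id": 212, "user_id": 2, "revenue": 90.00},
]

# Scan: read every row of the relation.
rows = iter(sales)

# Filter: keep only rows matching a predicate.
rows = (r for r in rows if r["product_id"] == 212)

# Project: keep only the columns we need.
rows = ({"revenue": r["revenue"]} for r in rows)

# Aggregate: collapse the remaining rows into a single value.
total = sum(r["revenue"] for r in rows)
print(total)  # 270.0
```
A SQL query is simply a declarative description of a tree of such operators; the database decides how to execute them.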
28. The Relational Model

Table: “sales”
timestamp  | product_id | user_id | revenue
2030-01-01 | 212        | 1       | 180.00
2030-01-01 | 998        | 2       | 24.95

Table: “products”
id  | name
212 | Office chair
998 | Coffee mug, 2-pack

Table: “users”
id | country | city     | user_gender | user_age
1  | US      | New York | F           | 34
2  | FR      | Paris    | M           | 28
29. Druid and the Relational Model

Datasource: “sales”
timestamp  | product            | country | city     | gender | age | revenue
2030-01-01 | Office chair       | US      | New York | F      | 34  | 180.00
2030-01-01 | Coffee mug, 2-pack | FR      | Paris    | M      | 28  | 24.95
30. Druid and the Relational Model

Datasource: “sales”
timestamp  | product_id | country | city     | gender | age | revenue
2030-01-01 | 212        | US      | New York | F      | 34  | 180.00
2030-01-01 | 998        | FR      | Paris    | M      | 28  | 24.95

Lookup: “products”
id  | name
212 | Office chair
998 | Coffee mug, 2-pack
31. Druid and the Relational Model
● Datasources are like tables
○ Druid “lookups” cover a common join use case
○ Big, flat tables are common in SQL databases anyway when analytical performance is critical
● Benefits of offering SQL
○ Developers and analysts already know it
○ Integration with third-party apps
36. SQL query
SELECT dim1, COUNT(*)
FROM druid.foo
WHERE dim1 IN ('abc', 'def', 'ghi')
GROUP BY dim1
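For contrast, before Druid SQL this query had to be written in the native JSON language mentioned in the abstract. A hedged sketch of the equivalent native groupBy query (field names follow Druid's documented native query format, but this is abbreviated and the interval is just an illustrative placeholder):

```python
# Sketch of the equivalent Druid native groupBy query, expressed as a
# Python dict that would be sent to the broker as JSON.
native_query = {
    "queryType": "groupBy",
    "dataSource": "foo",
    "granularity": "all",
    "intervals": ["1000-01-01/3000-01-01"],  # illustrative "all time" interval
    "dimensions": ["dim1"],
    "filter": {
        "type": "in",
        "dimension": "dim1",
        "values": ["abc", "def", "ghi"],
    },
    "aggregations": [{"type": "count", "name": "EXPR$1"}],
}
```
Druid SQL's job is to produce a query of this shape automatically from the SQL text.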
37. SQL parse tree
SELECT dim1, COUNT(*)
FROM druid.foo
WHERE dim1 IN ('abc', 'def', 'ghi')
GROUP BY dim1
[Parse tree figure: the “SELECT”, “WHERE”, and “GROUP BY” keywords, the identifiers (dim1, druid.foo), the IN operator, and the literals ('abc', 'def', 'ghi') are each labeled as nodes of the tree.]
38. Relational operators
SELECT dim1, COUNT(*)
FROM druid.foo
WHERE dim1 IN ('abc', 'def', 'ghi')
GROUP BY dim1
LogicalAggregate(group=[{0}], EXPR$1=[COUNT()])
  LogicalProject(dim1=[$2])
    LogicalFilter(condition=[OR(=($2, 'abc'), =($2, 'def'), =($2, 'ghi'))])
      LogicalTableScan(table=[[druid, foo]])
39. Query planner
● Planner rules
○ Match certain relational operator patterns
○ Can transform one set of operators into another
○ New set must have same behavior, but may have a different cost
● HepPlanner (heuristic)
○ Applies all matching rules
● VolcanoPlanner (cost based)
○ Applies rules while searching for low cost plans
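A toy sketch of the heuristic (HepPlanner-style) approach, with operators as nested tuples and a single invented rule; this is not Calcite's API, just the idea of rules pattern-matching and rewriting an operator tree:

```python
def push_filter_into_scan(node):
    """Rule: Filter(Scan) -> FilteredScan. Same rows, lower cost."""
    if node[0] == "Filter" and node[2][0] == "Scan":
        predicate, scan = node[1], node[2]
        return ("FilteredScan", predicate, scan[1])
    return None  # rule does not match this node

def heuristic_plan(node, rules):
    """HepPlanner-style loop: plan children first, then apply every
    matching rule to this node until none fire."""
    node = tuple(heuristic_plan(c, rules) if isinstance(c, tuple) else c
                 for c in node)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            rewritten = rule(node)
            if rewritten is not None:
                node, changed = rewritten, True
    return node

plan = ("Aggregate", "count", ("Filter", "dim1 IN (...)", ("Scan", "foo")))
optimized = heuristic_plan(plan, [push_filter_into_scan])
print(optimized)
# ('Aggregate', 'count', ('FilteredScan', 'dim1 IN (...)', 'foo'))
```
A cost-based planner like VolcanoPlanner instead keeps multiple equivalent trees and searches among them for the cheapest, rather than greedily applying every match.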
40. Using Calcite
Calcite can be embedded, or it can be used directly by end users.
Druid SQL embeds Calcite.
53. SQL to Native translation
● Calcite implements:
○ SQL parser
○ Basic set of rules for reordering and combining operators
○ Rule-based optimizer frameworks
● Druid implements:
○ A Calcite catalog built from Druid datasources
○ Cost functions that guide reordering and combining operators
○ Rules that push operators one-by-one into a PartialDruidQuery
○ Conversion of a PartialDruidQuery to a DruidQuery
54. SQL to Native translation
Translation adds minimal performance overhead.
It can even be faster, since less data is transferred to the client!
55. Challenges: Writing good queries
SQL makes it surprisingly easy to write inefficient queries.
Databases optimize as best they can, but the “EXPLAIN” tool is still essential.
56. Challenges: Schema-lightness
● Druid is schema-light (columns and their types are flexible)
● SQL model has tables and columns with specific types
● Druid native queries use type coercions at query time (e.g., the user specifies: treat column XYZ as “string”)
● Druid SQL populates its catalog with the latest metadata
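A toy sketch of that query-time coercion idea (the function name and shapes here are invented for illustration; this is not Druid's API):

```python
def read_column(row, name, as_type):
    """Treat whatever is stored under `name` as the requested type."""
    value = row.get(name)  # schema-light: the column may be absent
    if value is None:
        return "" if as_type is str else as_type(0)
    return as_type(value)

row = {"XYZ": 42}                        # ingested as a number...
print(read_column(row, "XYZ", str))      # '42'  ...but queried as a string
print(read_column(row, "missing", str))  # ''    absent column, empty default
```
Druid SQL hides this flexibility behind a catalog: each column is advertised with a concrete SQL type based on the latest segment metadata.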
57. Challenges: Lookups
Think back to lookups.

Lookup: “products”
id  | name
212 | Office chair
998 | Coffee mug, 2-pack

Datasource: “sales”
timestamp  | product_id | country | city     | gender | age | revenue
2030-01-01 | 212        | US      | New York | F      | 34  | 180.00
2030-01-01 | 998        | FR      | Paris    | M      | 28  | 24.95
58. Challenges: Lookups
SQL experts may think of this as a JOIN.
SELECT
products.name,
SUM(sales.revenue)
FROM sales JOIN products ON sales.product_id = products.id
GROUP BY products.name
59. Challenges: Lookups
Druid SQL does not support JOINs, but provides a “LOOKUP” function instead.
SELECT
  LOOKUP(product_id, 'products') AS product_name,
  SUM(sales.revenue)
FROM sales
GROUP BY product_name
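Conceptually, LOOKUP is a per-row map lookup applied during grouping rather than a relational join. A toy Python sketch of what happens (invented names, not Druid code):

```python
from collections import defaultdict

# The lookup: a simple key -> value map kept on every node.
products = {212: "Office chair", 998: "Coffee mug, 2-pack"}

sales = [
    {"product_id": 212, "revenue": 180.00},
    {"product_id": 998, "revenue": 24.95},
    {"product_id": 212, "revenue": 90.00},
]

revenue_by_name = defaultdict(float)
for row in sales:
    # LOOKUP(product_id, 'products'): replace the key with its mapped name.
    name = products.get(row["product_id"], "unknown")
    revenue_by_name[name] += row["revenue"]

print(dict(revenue_by_name))
# {'Office chair': 270.0, 'Coffee mug, 2-pack': 24.95}
```
Because the map is available on every data node, the "join" needs no shuffle, which is why it scales so well for this use case.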
60. Future work
● Druid features not supported in Druid SQL
○ Multi-value dimensions
○ Spatial filters
○ Theta sketches (approx. set intersection, differences)
● JOIN related
○ Allow users to write lookups as a SQL JOIN
○ Allow JOINs between two Druid datasources
● Others: SQL window functions, SQL UNION, GROUPING SETS