KSQL is a stream processing SQL engine which allows stream processing on top of Apache Kafka. KSQL is based on Kafka Streams and provides capabilities for consuming messages from Kafka, analysing these messages in near real-time with a SQL-like language and producing results back to a Kafka topic. Not a single line of Java code has to be written, and you can reuse your SQL know-how. This lowers the bar for starting with stream processing significantly.
KSQL offers powerful stream processing capabilities such as joins, aggregations, time windows and support for event time. In this talk I will present how KSQL integrates with the Kafka ecosystem and demonstrate how easy it is to implement a solution using KSQL for the most part. This will be done in a live demo on a fictitious IoT sample.
KSQL - Stream Processing simplified!
KSQL
Stream Processing made easy!
Guido Schmutz
DOAG Big Data 2018 – 20.9.2018
@gschmutz guidoschmutz.wordpress.com
2. Guido Schmutz
Working at Trivadis for more than 21 years
Oracle ACE Director for Fusion Middleware and SOA
Consultant, Trainer, Software Architect for Java, Oracle, SOA and Big Data / Fast Data
Head of Trivadis Architecture Board
Technology Manager @ Trivadis
More than 30 years of software development experience
Contact: guido.schmutz@trivadis.com
Blog: http://guidoschmutz.wordpress.com
Slideshare: http://www.slideshare.net/gschmutz
Twitter: gschmutz
5. Apache Kafka – A Streaming Platform
High-Level Architecture
Distributed Log at the Core
Scale-Out Architecture
Logs do not (necessarily) forget
6. Hold Data for Long-Term – Data Retention
(diagram: Producer 1 writing to Broker 1, Broker 2 and Broker 3)
1. Never
2. Time based (TTL)
log.retention.{ms | minutes | hours}
3. Size based
log.retention.bytes
4. Log compaction based
(older entries with the same key are removed):
kafka-topics.sh --zookeeper zk:2181 \
  --create --topic customers \
  --replication-factor 1 \
  --partitions 1 \
  --config cleanup.policy=compact
14. Choosing the Right API

Consumer, Producer API
• Java, C#, C++, Scala, Python, Node.js, Go, PHP …
• subscribe()
• poll()
• send()
• flush()
• Anything Kafka

Kafka Streams
• Fluent Java API
• mapValues()
• filter()
• flush()
• Stream Analytics

KSQL
• SQL dialect
• SELECT … FROM …
• JOIN ... WHERE
• GROUP BY
• Stream Analytics

Kafka Connect
• Declarative
• Configuration
• REST API
• Out-of-the-box connectors
• Stream Integration

Flexibility ←→ Simplicity
Source: adapted from Confluent
15. Demo (II) – Connect to MQTT through Kafka Connect

(diagram: "mqtt to kafka" connectors move Position messages from the truck/nn/position MQTT topics into the truck_position Kafka topic, and Driving Info messages from the truck/nn/drving-info MQTT topics into the truck_driving_info Kafka topic)

{"timestamp":1537343400827,"truckId":87,"driverId":13,"routeId":987179512,"eventType":"Normal","correlationId":"-3208700263746910537"}

{"timestamp":1537342514539,"truckId":87,"latitude":38.65,"longitude":-90.21}
17. KSQL: a Streaming SQL Engine for Apache Kafka
• Enables stream processing with zero coding required
• The simplest way to process streams of data in real-time
• Powered by Kafka and Kafka Streams: scalable, distributed, mature
• All you need is Kafka – no complex deployments
• available as Developer preview!
• STREAM and TABLE as first-class citizens
• STREAM = data in motion
• TABLE = collected state of a stream
• join STREAM and TABLE
19. KSQL Architecture & Components
KSQL Server
• runs the engine that executes KSQL queries
• includes processing, reading, and writing data to and from the target Kafka cluster
• KSQL servers form KSQL clusters and can run in containers, virtual machines, and
bare-metal machines
• You can add and remove servers to/from the same KSQL cluster during live
operations to elastically scale KSQL’s processing capacity as desired
• You can deploy different KSQL clusters to achieve workload isolation
KSQL CLI
• You can interactively write KSQL queries by using the KSQL command line interface
(CLI).
• KSQL CLI acts as a client to the KSQL server
• For production scenarios you may also configure KSQL servers to run in a non-interactive "headless" configuration, thereby preventing KSQL CLI access
20. Demo (IV) - Start Kafka KSQL
$ docker-compose exec ksql-cli ksql-cli local --bootstrap-server broker-1:9092
(KSQL ASCII-art banner: Streaming SQL Engine for Kafka)
Copyright 2017 Confluent Inc.
CLI v0.1, Server v0.1 located at http://localhost:9098
Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
ksql>
21. Terminology
Stream
• an unbounded sequence of structured data
(“facts”)
• Facts in a stream are immutable: new facts
can be inserted to a stream, but existing
facts can never be updated or deleted
• Streams can be created from a Kafka topic
or derived from an existing stream
• A stream’s underlying data is durably stored
(persisted) within a Kafka topic on the Kafka
brokers
Table
• materialized View of events with only the
latest value for a key
• a view of a stream, or another table, and
represents a collection of evolving facts
• the equivalent of a traditional database table
but enriched by streaming semantics such
as windowing
• Facts in a table are mutable: new facts can
be inserted to the table, and existing facts
can be updated or deleted
• Tables can be created from a Kafka topic or
derived from existing streams and tables
22. CREATE STREAM
Create a new stream, backed by a Kafka topic, with the specified columns and
properties
Supported column data types:
• BOOLEAN, INTEGER, BIGINT, DOUBLE, VARCHAR or STRING
• ARRAY<ArrayType>
• MAP<VARCHAR, ValueType>
• STRUCT<FieldName FieldType, ...>
Supports the following serialization formats: CSV, JSON, AVRO
KSQL adds the implicit columns ROWTIME and ROWKEY to every stream
CREATE STREAM stream_name ( { column_name data_type } [, ...] )
WITH ( property_name = expression [, ...] );
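As a sketch of the syntax above, a stream over a JSON topic might be declared like this, also showing the complex column types; the topic and column names are illustrative assumptions, not part of the talk's demo:

```sql
-- Illustrative only: topic and column names are assumed
CREATE STREAM vehicle_status_s
  (ts BIGINT,
   vehicleId VARCHAR,
   tags ARRAY<VARCHAR>,
   position STRUCT<latitude DOUBLE, longitude DOUBLE>)
  WITH (kafka_topic='vehicle_status',
        value_format='JSON');
```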
23. CREATE TABLE
Create a new table with the specified columns and properties
Supports same data types as CREATE STREAM
KSQL adds the implicit columns ROWTIME and ROWKEY to every table as well
KSQL has currently the following requirements for creating a table from a Kafka topic
• message key must also be present as a field/column in the Kafka message value
• message key must be in VARCHAR aka STRING format
CREATE TABLE table_name ( { column_name data_type } [, ...] ) WITH (
property_name = expression [, ...] );
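Given the requirements above, a table declaration might look as follows; the customers topic, its key and its columns are assumptions for illustration:

```sql
-- Illustrative only: the id column must also be the message key,
-- and the key must be in STRING format
CREATE TABLE customer_t
  (id VARCHAR,
   name VARCHAR,
   country VARCHAR)
  WITH (kafka_topic='customers',
        value_format='JSON',
        key='id');
```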
24. Demo (III) – Create a STREAM on truck_driving_info

(diagram: the truck_driving_info topic is registered as a KSQL Stream; Position messages flow from truck/nn/position into truck_position, Driving Info messages from truck/nn/drving-info into truck_driving_info)

{"timestamp":1537343400827,"truckId":87,"driverId":13,"routeId":987179512,"eventType":"Normal","correlationId":"-3208700263746910537"}

{"timestamp":1537342514539,"truckId":87,"latitude":38.65,"longitude":-90.21}
25. Demo (III) - Create a STREAM on truck_driving_info
ksql> CREATE STREAM truck_driving_info_s
(ts VARCHAR,
truckId VARCHAR,
driverId BIGINT,
routeId BIGINT,
eventType VARCHAR,
correlationId VARCHAR)
WITH (kafka_topic='truck_driving_info',
value_format='JSON');
Message
----------------
Stream created
26. Demo (III) - Create a STREAM on truck_driving_info
ksql> describe truck_driving_info_s;
Field | Type
---------------------------------
ROWTIME | BIGINT
ROWKEY | VARCHAR(STRING)
TS | VARCHAR(STRING)
TRUCKID | VARCHAR(STRING)
DRIVERID | BIGINT
ROUTEID | BIGINT
EVENTTYPE | VARCHAR(STRING)
CORRELATIONID | VARCHAR(STRING)
27. SELECT
Selects rows from a KSQL stream or table
Result of this statement will not be persisted in a Kafka topic and will only be printed out
in the console
from_item is one of the following: stream_name, table_name
SELECT select_expr [, ...]
FROM from_item
[ LEFT JOIN join_table ON join_criteria ]
[ WINDOW window_expression ]
[ WHERE condition ]
[ GROUP BY grouping_expression ]
[ HAVING having_expression ]
[ LIMIT count ];
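For example, a non-persistent query against the demo's driving-info stream might look like this (a sketch; the column names follow the earlier CREATE STREAM):

```sql
-- Continuous query printed to the console; LIMIT terminates it after 5 rows
SELECT truckId, eventType
FROM truck_driving_info_s
WHERE eventType <> 'Normal'
LIMIT 5;
```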
28. Demo (III) – Use SELECT to browse from Stream

(diagram: the KSQL CLI reads from the Stream registered on the truck_driving_info topic)

{"timestamp":1537343400827,"truckId":87,"driverId":13,"routeId":987179512,"eventType":"Normal","correlationId":"-3208700263746910537"}

{"timestamp":1537342514539,"truckId":87,"latitude":38.65,"longitude":-90.21}
30. CREATE STREAM … AS SELECT …
Create a new KSQL stream along with the corresponding Kafka topic and stream the result of the SELECT query as a changelog into the topic
WINDOW clause can only be used if the from_item is a stream
CREATE STREAM stream_name
[WITH ( property_name = expression [, ...] )]
AS SELECT select_expr [, ...]
FROM from_stream [ LEFT | FULL | INNER ]
JOIN [join_table | join_stream]
[ WITHIN [(before TIMEUNIT, after TIMEUNIT) | N TIMEUNIT] ] ON join_criteria
[ WHERE condition ]
[PARTITION BY column_name];
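A possible sketch of such a persistent query for the demo, deriving a dangerous-driving stream; the stream and topic names are assumptions based on the demo diagrams:

```sql
-- Writes every non-normal event into a new stream and backing topic
CREATE STREAM dangerous_driving_s
  WITH (kafka_topic='dangerous_driving',
        value_format='JSON')
  AS SELECT *
  FROM truck_driving_info_s
  WHERE eventType <> 'Normal';
```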
31. INSERT INTO … AS SELECT …
Stream the result of the SELECT query into an existing stream and its underlying topic
The schema and partitioning column produced by the query must match the stream's schema and key
If the schema and partitioning column are incompatible with the stream, then the statement will return an error
stream_name and from_item must both refer to a stream; tables are not supported!
CREATE STREAM stream_name ...;
INSERT INTO stream_name
SELECT select_expr [., ...]
FROM from_stream
[ WHERE condition ]
[ PARTITION BY column_name ];
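A minimal sketch, assuming a hypothetical second source stream truck_driving_info_eu_s with exactly the same schema as the target:

```sql
-- truck_driving_info_eu_s is an assumed stream for illustration
INSERT INTO dangerous_driving_s
SELECT *
FROM truck_driving_info_eu_s
WHERE eventType <> 'Normal';
```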
32. Demo (IV) – CREATE AS … SELECT …

(diagram: a detect_dangerous_driving query reads the truck_driving_info Stream and writes the matching events to a new dangerous_driving Stream and topic)

{"timestamp":1537343400827,"truckId":87,"driverId":13,"routeId":987179512,"eventType":"Normal","correlationId":"-3208700263746910537"}

{"timestamp":1537342514539,"truckId":87,"latitude":38.65,"longitude":-90.21}
34. Functions
Scalar Functions
• ABS, ROUND, CEIL, FLOOR
• ARRAYCONTAINS
• CONCAT, SUBSTRING, TRIM
• EXTRACTJSONFIELD
• GEO_DISTANCE
• LCASE, UCASE
• MASK, MASK_KEEP_LEFT,
MASK_KEEP_RIGHT, MASK_LEFT,
MASK_RIGHT
• RANDOM
• STRINGTOTIMESTAMP,
TIMESTAMPTOSTRING
Aggregate Functions
• COUNT
• MAX
• MIN
• SUM
• TOPK
• TOPKDISTINCT
User-Defined Functions (UDF) and User-
Defined Aggregate Functions (UDAF)
• Currently only supported using Java
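A sketch combining a few of the functions above against the demo stream (column names follow the earlier CREATE STREAM; exact output depends on the data):

```sql
-- Scalar functions applied per row
SELECT TIMESTAMPTOSTRING(ROWTIME, 'yyyy-MM-dd HH:mm:ss'),
       UCASE(eventType),
       MASK(correlationId)
FROM truck_driving_info_s;

-- Aggregate function with grouping
SELECT eventType, COUNT(*)
FROM truck_driving_info_s
GROUP BY eventType;
```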
35. Windowing
Introduction to Stream Processing
Since streams are unbounded, you need
some meaningful time frames to do
computations (i.e. aggregations)
Computations over events done using
windows of data
Windows give the power to keep a
working memory and look back at recent
data efficiently
Windows are tracked per unique key
(figure: a window of data over an unbounded stream of data)
36. Windowing
Introduction to Stream Processing

Sliding Window (aka Hopping Window) - uses eviction and trigger policies that are based on time: window length and sliding interval length

Fixed Window (aka Tumbling Window) - eviction policy always based on the window being full and trigger policy based on either the count of items in the window or time

Session Window – composed of sequences of temporally related events terminated by a gap of inactivity greater than some timeout

(figure: the three window types over time)
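In KSQL the three window types map to the WINDOW clause roughly as follows (sizes are illustrative):

```sql
-- Fixed (tumbling): non-overlapping 60-second windows
SELECT eventType, COUNT(*) FROM dangerous_driving_s
WINDOW TUMBLING (SIZE 60 SECONDS) GROUP BY eventType;

-- Sliding (hopping): 60-second windows advancing every 30 seconds
SELECT eventType, COUNT(*) FROM dangerous_driving_s
WINDOW HOPPING (SIZE 60 SECONDS, ADVANCE BY 30 SECONDS) GROUP BY eventType;

-- Session: windows closed by a 5-minute gap of inactivity
SELECT eventType, COUNT(*) FROM dangerous_driving_s
WINDOW SESSION (300 SECONDS) GROUP BY eventType;
```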
37. Demo (IV) – Aggregate and Window

(diagram: a count_by_event_type query aggregates the dangerous_driving Stream into a count_by_event_type Table)

{"timestamp":1537343400827,"truckId":87,"driverId":13,"routeId":987179512,"eventType":"Normal","correlationId":"-3208700263746910537"}

{"timestamp":1537342514539,"truckId":87,"latitude":38.65,"longitude":-90.21}
38. Demo (IV) – SELECT COUNT … GROUP BY
ksql> CREATE TABLE dangerous_driving_count AS
SELECT eventType, count(*) nof
FROM dangerous_driving_s
WINDOW TUMBLING (SIZE 30 SECONDS)
GROUP BY eventType;
Message
----------------------------
Table created and running
ksql> SELECT TIMESTAMPTOSTRING(ROWTIME, 'yyyy-MM-dd HH:mm:ss.SSS'),
eventType, nof
FROM dangerous_driving_count;
2018-09-19 20:10:59.587 | Overspeed | 1
2018-09-19 20:11:15.713 | Unsafe following distance | 1
2018-09-19 20:11:39.662 | Unsafe tail distance | 1
2018-09-19 20:12:03.870 | Unsafe following distance | 1
2018-09-19 20:12:04.502 | Overspeed | 1
2018-09-19 20:12:05.856 | Lane Departure | 1
39. Joining
Introduction to Stream Processing
Challenges of joining streams
1. Data streams need to be aligned as they
come because they have different timestamps
2. since streams are never-ending, the joins
must be limited; otherwise join will never end
3. join needs to produce results continuously as
there is no end to the data
Stream to Static (Table) Join
Stream to Stream Join (one window join)
Stream to Stream Join (two window join)

(figure: stream-to-static and stream-to-stream join variants over time)
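Challenge 2 above is what the WITHIN clause addresses: it bounds how far apart in time two events may be and still join. A sketch, with column names assumed:

```sql
-- Stream-to-stream join restricted to events at most 1 minute apart
SELECT d.truckId, d.eventType, p.latitude, p.longitude
FROM dangerous_driving_s d
INNER JOIN truck_position_s p
  WITHIN 1 MINUTE
  ON d.truckId = p.truckId;
```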
40. Demo (V) – Join Table to enrich with Driver data

(diagram: a jdbc-source connector loads the trucking_driver database table into the truck_driver topic, which backs a KSQL Table; the join_dangerous_driving_driver query joins the dangerous_driving Stream with that Table into a dangerous_driving_driver Stream)

27, Walter, Ward, Y, 24-JUL-85, 2017-10-02 15:19:00

{"id":27,"firstName":"Walter","lastName":"Ward","available":"Y","birthdate":"24-JUL-85","last_update":1506923052012}

{"timestamp":1537343400827,"truckId":87,"driverId":13,"routeId":987179512,"eventType":"Normal","correlationId":"-3208700263746910537"}

{"timestamp":1537342514539,"truckId":87,"latitude":38.65,"longitude":-90.21}
42. Demo (V) - Create Table with Driver State
ksql> CREATE TABLE driver_t
(id BIGINT,
first_name VARCHAR,
last_name VARCHAR,
available VARCHAR)
WITH (kafka_topic='truck_driver',
value_format='JSON',
key='id');
Message
----------------
Table created
43. Demo (V) - Join Stream with Driver Table
ksql> CREATE STREAM dangerous_driving_and_driver_s
WITH (kafka_topic='dangerous_driving_and_driver_s',
value_format='JSON')
AS SELECT driverId, first_name, last_name, truckId, routeId, eventtype
FROM dangerous_driving_s
LEFT JOIN driver_t
ON dangerous_driving_s.driverId = driver_t.id;
Message
----------------------------
Stream created and running
ksql> select * from dangerous_driving_and_driver_s;
1511173352906 | 21 | 21 | Lila | Page | 58 | 1594289134 | Unsafe tail distance
1511173353669 | 12 | 12 | Laurence | Lindsey | 93 | 1384345811 | Lane Departure
1511173435385 | 11 | 11 | Micky | Isaacson | 22 | 1198242881 | Unsafe tail distance
44. Demo (VI) – Stream-to-Stream Join

(diagram: the join_dangerous_and_position query joins the dangerous_driving Stream with the truck_position Stream into a dangerous_driving_position Stream; the driver Table and the join_dangerous_driving_driver Stream from Demo (V) remain in place)

27, Walter, Ward, Y, 24-JUL-85, 2017-10-02 15:19:00

{"id":27,"firstName":"Walter","lastName":"Ward","available":"Y","birthdate":"24-JUL-85","last_update":1506923052012}

{"timestamp":1537343400827,"truckId":87,"driverId":13,"routeId":987179512,"eventType":"Normal","correlationId":"-3208700263746910537"}

{"timestamp":1537342514539,"truckId":87,"latitude":38.65,"longitude":-90.21}
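The slide only shows the data flow; a possible statement for this join step, with names taken from the diagram and columns assumed, might be:

```sql
-- Sketch: correlate each dangerous-driving event with a position
-- reported by the same truck within a 10-second tolerance
CREATE STREAM dangerous_driving_position_s
  AS SELECT d.driverId, d.truckId, d.eventType,
            p.latitude, p.longitude
  FROM dangerous_driving_s d
  INNER JOIN truck_position_s p
    WITHIN (10 SECONDS, 10 SECONDS)   -- before/after tolerance
    ON d.truckId = p.truckId;
```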
47. Summary
KSQL is another way to work with data in Kafka => you can (re)use some of your SQL knowledge
Similar semantics to SQL, but for queries on continuous, streaming data
Well-suited for structured data (that's the "S" in KSQL)
KSQL depends on "Kafka core":
• KSQL consumes from the Kafka broker
• KSQL produces to the Kafka broker
KSQL runs as a Java application and can be deployed to various resource managers
Use Kafka Connect or any other stream data integration tool to bring your data into Kafka first
48. Technology on its own won't help you.
You need to know how to use it properly.