Introduce self
KIP-28
Stream processing in Kafka
But first…
This talk is going to be about the intersection of two trendy topics: distributed stream processing and microservices.
Most people would see these two things as mostly unrelated.
Microservices are all about chopping up big applications to enable small agile teams.
Most people, in so far as they think about stream processing at all, would think of it as a kind of low-latency version of MapReduce.
I want to give a different vision for stream processing and show the relationship to microservices.
My experience in this area came from LinkedIn.
I was there from 2007-2014.
Few dozen engineers => several thousand
Niche to global web property
Several backends
How to scale software engineering?
Went with microservices architecture
Also built Apache Kafka and a stream processing framework and operated it as a service for the team there.
So how did the microservices journey go?
Pro: Did scale eng productivity
Con: Very hard to reason about latency and availability as # services grew
(Slide image: Reactive Summit - 2.png)
Either big monolithic apps with huge amounts of work per request, or lots of little microservices…still all that work is synchronous.
Non-blocking I/O gives you concurrency but not asynchronicity
Blocking I/O doesn’t work at all. Limit on blocking calls is very small—like two.
Non-blocking I/O helps, but doesn’t change the fact that your availability and latency depends on the availability and latency of the entire graph.
Leslie Lamport: “A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable”
Testing all these failure modes is a huge pain.
Services aren’t free!
Make async things truly async…i.e. let them happen later, take them out of the service call graph entirely.
What do I mean by that?
Obviously if you are displaying a UI to a user and that UI needs some data, and you need to call a service to fetch that data, then you can’t make that fetch truly asynchronous because you can’t put data in a UI until you have the data. You can definitely make it non-blocking—other work can happen while you are waiting—but you can’t not wait.
So are there things that can be asynchronous?
Hell yes.
Let’s look at an example I like because most people understand the domain: Retail.
This could be ecommerce or a big box retailer…doesn’t really matter since we’re going to keep this high level.
You can think of the computation a retailer does as processing a sequence of sales and new product shipments, managing inventory, adjusting prices, handling logistics for fulfillment or stocking warehouses, dealing with fraud and analytics, etc.
Which of the operations this business does are synchronous and which are asynchronous?
Well clearly the sale is synchronous. You give me money and I give you your product (or a promise of delivery of your product). That is the definition of a synchronous action. But pretty much everything else on this slide is asynchronous.
So how is that stuff implemented? Well I think one of three things happen:
It gets made accidentally synchronous—either in a monolithic app or in a microservice
It gets run as a batch job once a day...super async
You use a messaging system
Queues: Good in theory, bad in practice
Theory:
- You need an intermediate store
- “Reliable Broadcast”
Practice:
- World’s worst data store: unreliable, inflexible
Unscalable
Just adding complexity and solving no problem
Every university in the world has a group in the CS department working on advancing the state of the art of databases. None has a group working on messaging.
I don’t even think the companies that build messaging systems have people thinking about this.
Typical solution: Enterprise messaging systems
Not really a solution for microservices
No scalability, can’t be operated as an elastic, always-on service
Streaming platform is the successor to messaging
Stream processing is how you build asynchronous services.
That is going to be the key to solving my pipeline sprawl problem.
Instead of having N^2 different pipelines, one for each pair of systems I am going to have a central place that hosts all these event streams—the streaming platform.
This is a central way that all these systems and applications can plug in to get the streams they need.
So I can capture streams from databases, and feed them into DWH, Hadoop, monitoring and analytics systems.
The key advantage is that there is a single integration point for each thing that wants data.
Now obviously to make this work I’m going to need to ensure I have met the reliability, scalability, and latency guarantees for each of these systems.
Database data, log data
Lots of systems—databases, specialized systems like search, caches
Business units
N^2 connections
Tons of glue code to stitch it all together
This is what that architecture looks like relying on streaming.
Two key uses:
Acts as a data pipeline between data systems and apps
Acts as a backbone for streams of data for stream processing
I’ve talked about events and the case for asynchronous services. And I’ve mentioned stream processing a few times but haven’t really said what I mean by it or what it is good for.
So I’ll explain what stream processing is and then I’ll talk about how you do stream processing with Kafka.
Lots of ways to categorize computer programs: maybe functional vs object oriented, or distributed vs centralized.
One of the most central ways to categorize is how the program gets its inputs and how those are translated into outputs
After all this is what computer programs do, right, they translate inputs into outputs.
3 major categories, the first two everyone knows: request/response and batch
The third many people have never heard of, and those who have often misunderstand it.
HTTP/REST
All databases
Run all the time
Each request totally independent—No real ordering
Can fail individual requests if you want
Very simple!
About the future!
“Ed, the MapReduce job never finishes when you watch it like that”
Job kicks off at a certain time
Cron!
Processes all the input, produces all the output
Data is usually static
Hadoop!
DWH, JCL
Archaic but powerful. Can do analytics! Complex algorithms!
Also can be really efficient!
Inherently high latency
Generalizes request/response and batch.
Program takes some inputs and produces some outputs
Could be all inputs
Could be one at a time
Runs continuously forever!
Doesn’t mean you drop everything on the floor if anything slows down
Streaming algorithms—online space
Can compute median
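To make the “online space” point concrete, here is a sketch (my own toy code, not from the talk) of a classic streaming algorithm: maintaining a running median with two heaps, so each new event updates the answer without re-scanning the whole history.

```python
import heapq

class RunningMedian:
    """Online median: lower half in a max-heap, upper half in a min-heap."""
    def __init__(self):
        self.lo = []  # max-heap, stored as negated values
        self.hi = []  # min-heap

    def add(self, x):
        heapq.heappush(self.lo, -x)
        # keep every element of lo <= every element of hi
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2

m = RunningMedian()
for x in [5, 1, 9, 3, 7]:
    m.add(x)
print(m.median())  # 5
```

The state carried between events is two heaps, not the full input—the essence of a streaming algorithm.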
Companies == streams of events
What a retail store does
Streams
Processes they execute can often be thought of as stream processing.
So what is Kafka?
The second half of this talk will dive into what Kafka is.
It’s a streaming platform.
Lets you publish and subscribe to streams of data, stores them reliably, and lets you process them in real time.
The second half of this talk will dive into Apache Kafka and talk about how it acts as a streaming platform and lets you build event-driven stream processing microservices.
Events = Record = Message
Timestamp, an optional key and a value
Key is used for partitioning. Timestamp is used for retention and processing.
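A minimal sketch of that record model and of key-based partitioning (names and the hash choice are mine—Kafka’s Java client actually uses murmur2, not MD5): the important property is that the same key always maps to the same partition, which is what gives per-key ordering.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    value: str
    key: Optional[str] = None
    timestamp: Optional[float] = None  # used for retention and windowing

def partition_for(record: Record, num_partitions: int) -> int:
    """Same key -> same partition, so all events for a key stay ordered."""
    if record.key is None:
        return 0  # real clients round-robin or sticky-assign keyless records
    digest = hashlib.md5(record.key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

a = partition_for(Record("login", key="user-42"), 8)
b = partition_for(Record("logout", key="user-42"), 8)
assert a == b  # per-key ordering preserved
```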
Not an Apache (web server) log
Different: Commit log
Stolen from distributed database internals
Key abstraction for systems, real-time processing, data integration
Formalization of a stream
Reader controls progress—unifies batch and real-time
Relate to pub/sub
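A hedged, in-memory sketch of the commit-log abstraction (my own toy code, not Kafka’s API): the log only appends, and each reader tracks its own offset, which is why one log can serve a real-time subscriber and a from-the-beginning batch reader at the same time.

```python
class Log:
    """Append-only sequence of records, addressed by offset."""
    def __init__(self):
        self.records = []

    def append(self, record) -> int:
        self.records.append(record)
        return len(self.records) - 1  # offset of the new record

class Consumer:
    """Each consumer controls its own position in the log."""
    def __init__(self, log, offset=0):
        self.log, self.offset = log, offset

    def poll(self):
        batch = self.log.records[self.offset:]
        self.offset = len(self.log.records)
        return batch

log = Log()
for e in ["a", "b", "c"]:
    log.append(e)

realtime = Consumer(log)
assert realtime.poll() == ["a", "b", "c"]
log.append("d")
assert realtime.poll() == ["d"]     # picks up only what is new
batch = Consumer(log, offset=0)     # a re-reader starts from the beginning
assert batch.poll() == ["a", "b", "c", "d"]
```

Because progress lives in the reader, not the broker, the same stream supports both batch and real-time consumption.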
Change to Logs Unify Batch and stream processing
We talked about how a table can be represented as a stream of updates and this is a common use of a log.
Many change data capture solutions work this way: they capture a log of changes from a database and replicate it to destination databases.
And in fact, Kafka has special support for this type of log.
World is processes/threads (each a total order) but no order between them
Four APIs to read and write streams of events
First two are easy, the producer and consumer allow applications to read and write to Kafka.
The connect API allows building connectors that integrate Kafka with existing systems or applications.
The streams api allows stream processing on top of Kafka.
We’ll go through each of these briefly.
Core: Data pipeline
Venture bet: Stream processing
So in effect a stream processing app is basically just some code that consumes input and produces output.
So why not just use the producer and consumer APIs?
Well, it turns out there are some hard parts to doing real-time stream processing.
How do I partition up the processing and make it possible to dynamically scale my application up or down?
How do I handle failures in my processing without losing messages?
How do I do processing that spans multiple records? For example, I might want to join an input stream of events representing customer activity to a database of side information about my customers, which is also evolving. Or I might want to count the number of customer events that occur in a given window of time.
Finally if I update my code, how do I go back and rerun my program with the new logic? What does this process of code evolution look like?
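One of those hard parts, windowed counting, can be sketched in a few lines (a toy sketch under my own names, not the Kafka Streams API): because records are bucketed by the timestamp carried in the event rather than by arrival time, late or out-of-order records still land in the right window.

```python
from collections import defaultdict

WINDOW = 60  # window size in seconds

def window_start(ts: float) -> int:
    return int(ts // WINDOW) * WINDOW

counts = defaultdict(int)  # (window_start, key) -> count

def process(key: str, ts: float):
    counts[(window_start(ts), key)] += 1

# events arrive out of order; event time, not arrival time, decides the window
for key, ts in [("click", 5), ("click", 61), ("click", 59), ("view", 62)]:
    process(key, ts)

print(counts[(0, "click")])   # 2  (timestamps 5 and 59)
print(counts[(60, "click")])  # 1
```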
“Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?” – Brian Kernighan
K in K&R
Planned from the beginning.
Early prototypes in 2010, Samza evolved out of that.
Goal is to let you get back to this picture, but let you build really sophisticated apps that are transparently distributed, fault-tolerant, and do non-trivial things with data.
TODO: Like Streams library or scala collections or reactive thingies BUT stateful, fault-tolerant, distributed
This is a simple java main method
First, do some configuration to tell it which Kafka cluster to talk to
Next tell it how to serialize this data.
Then express my transformations.
So if we zoom in on those transformations
This is a word count that computes a running count for each word.
Since this is a streaming count the count updates as new values with new words appear. So the output is properly interpreted as a “count so far”.
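The code on the slide is Java, but the streaming semantics can be sketched without any Kafka dependency (my own toy code): each incoming line updates a running table of counts, and every update is emitted downstream as a new “count so far”.

```python
from collections import defaultdict

counts = defaultdict(int)
updates = []  # downstream changelog of (word, count-so-far)

def process(line: str):
    for word in line.lower().split():
        counts[word] += 1
        updates.append((word, counts[word]))  # emit the updated count

process("the quick fox")
process("the lazy dog")
print(counts["the"])  # 2
print(updates[-3:])   # [('the', 2), ('lazy', 1), ('dog', 1)]
```

The output is a stream of updates to a table, not a final answer—exactly the “count so far” interpretation above.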
Could be wrong: don’t know shit about reactive
Kafka is reactive in the sense of the “Reactive Manifesto”
Similarity: declarative, observer pattern
Fundamental difference: sync vs async services
Huge difference in practical problem domain
Doesn’t implement the Reactive Streams API; in fact it wouldn’t even make sense, since consumers pull at their own pace, there is no such thing as back pressure
Sync services can just fail and send back an error
Async services need to eventually process everything
Reprocessing
State
Simple library—takes input, lets the app do transformation, publishes back results.
Gives you a convenient, declarative DSL for doing transformations on data.
Gives you powerful windowing capabilities based on the timestamp in the event so it handles out-of-order events well.
Lets you reprocess data.
Works with low latency
Allows powerful stateful processing for joins and aggregations.
TODO: Summarize
Change to “Logs make reprocessing easy”
Time is hard
Need a model of time
Request/Response ignores the issue, you just set an aggressive timeout
Batch solves the issue usually by just freezing all data for the day
Stream processing needs to actually address the issue
Curing the MapReduce hangover
- Storm cluster in Mesos in AWS (docker?)
- Decouple deployment etc
- Libraries are really simple
Config, packaging, deployment
Kafka Streams:
Manage the set of live processors and route data to them
Uses Kafka’s group management facility
External framework
Start and restart processes
Package processes
Deploy code
We talked about this retail example where we have an input stream of sales that are occurring and an input stream of shipments of new products that are arriving.
Well, computed off this stream of sales and shipments is a table—the inventory on hand right now in each location.
And this combination of sales with the inventory on hand is what is going to drive the process of reordering or raising the price of products that are selling out.
The ability to combine tables of stored state, with streams of events is really core to stream processing in real-life examples. And it is one of the more powerful features of Kafka Streams.
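The retail example above can be sketched directly (toy code, my own names and a made-up reorder threshold): the shipment and sale streams fold into an inventory table, and each sale consults that table to drive reordering.

```python
from collections import defaultdict

inventory = defaultdict(int)   # (store, product) -> units on hand
REORDER_POINT = 5              # hypothetical threshold
reorders = []

def on_shipment(store, product, qty):
    inventory[(store, product)] += qty

def on_sale(store, product, qty=1):
    inventory[(store, product)] -= qty
    if inventory[(store, product)] <= REORDER_POINT:
        reorders.append((store, product))  # trigger reorder or price change

on_shipment("nyc", "widget", 10)
for _ in range(5):
    on_sale("nyc", "widget")
print(inventory[("nyc", "widget")])  # 5
print(reorders)                      # [('nyc', 'widget')]
```

The table (inventory) is state derived from the streams, and the streams are in turn enriched by the table—the stream/table duality in miniature.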
In fact, we talked about change logs for replicating updates to a mutable data store like a relational database. And Kafka Streams works really well with this type of stream.
Instead of just taking the change stream out of a source database, and putting it in a destination, it allows you to transform that stream on the fly.
In effect this lets you create a kind of materialized view computed off the input.
And using the connect api you can replicate that stream into any type of destination store.
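A minimal sketch of that materialized-view idea (my own toy data, not an actual API): consume a change stream of (key, new_row) updates, apply a transformation to each, and keep the result as a continuously maintained view.

```python
# change stream from a hypothetical "users" table: (key, new_row) pairs
changes = [
    ("u1", {"name": "Ada", "country": "uk"}),
    ("u2", {"name": "Bob", "country": "us"}),
    ("u1", {"name": "Ada", "country": "us"}),  # u1 updated in place
]

view = {}  # materialized view: user id -> upper-cased country

for key, row in changes:
    view[key] = row["country"].upper()  # the transform, applied on the fly

print(view)  # {'u1': 'US', 'u2': 'US'}
```

Later updates for the same key overwrite the earlier value, so the view always reflects the latest state of the source table.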
In fact the connect and streams apis work really well together.
If you think about ETL, meaning extract/transform/load, like you would have for a data warehouse,
Connect is doing the E and the L, the extracting and loading, except it is doing them in real-time as a continuous stream.
And Streams is doing the T, the transformation...and of course it is also streaming.
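That division of labor can be sketched as composed generators (toy code, my own structure): extract and load on the outside, a transform in the middle, everything flowing record by record instead of in a nightly batch.

```python
def extract(source):      # "E": pull records from a source (Connect's role)
    yield from source

def transform(records):   # "T": per-record logic (Streams' role)
    for r in records:
        yield r.upper()

def load(records, sink):  # "L": deliver to a destination (Connect again)
    for r in records:
        sink.append(r)

source = ["order placed", "order shipped"]
sink = []
load(transform(extract(source)), sink)
print(sink)  # ['ORDER PLACED', 'ORDER SHIPPED']
```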