This document outlines 10 principles for effective event-driven microservices using Kafka as the backbone:
1. Don't use Kafka for request/response interactions such as shopping carts; use it to stream business events.
2. Pick topics that have business significance like orders and payments.
3. Decouple publishers and subscribers and only use request/response where needed.
4. Use the Kafka log to regenerate state instead of journaling events.
5. Apply the single writer principle by changing data at the source service.
6. Leverage keeping datasets like reference data within the Kafka broker.
7. Prefer stream processing over maintaining historical views.
8. Sometimes historical views are needed, so replicate them as read-only copies.
9. Use schemas.
10. Consider "Stream Management" services.
Inverse Conway Maneuver
The 'Inverse Conway Maneuver' recommends evolving your team and organizational structure to promote your desired architecture.
(Diagram: org structure on one side shaping software architecture on the other)
26. As software engineers we are inevitably affected by the tools we surround ourselves with. Languages, frameworks, even processes all act to shape the software we build.
59. How do we actually do this?
10 (opinionated) principles for Streaming Services
60. 1. Don’t use Kafka for shopping carts!
(OK, you can, but use sparingly)
(Diagram: a shopping-cart UI service; broker, durability and broadcast add little to request/response)
61. Do use Kafka for event-driven architectures.
Think “business events”: an order was created, a payment was received, a trade was booked, etc. Pub/Sub.
(Diagram: a UI service plus Orders, Returns, Fulfilment, Payment and Stock services exchanging events)
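The pub/sub idea above can be sketched with a toy in-memory broker (this is an illustrative model, not the real Kafka API; topic and service names are assumptions):

```python
from collections import defaultdict

class Broker:
    """Toy pub/sub broker: topics are append-only lists, subscribers are callbacks."""
    def __init__(self):
        self.topics = defaultdict(list)
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        self.topics[topic].append(event)          # durable, append-only log
        for handler in self.subscribers[topic]:   # broadcast to every subscriber
            handler(event)

broker = Broker()
shipped = []
broker.subscribe("orders", lambda e: shipped.append(e["id"]))  # e.g. a Fulfilment service
broker.subscribe("orders", lambda e: None)                     # e.g. a Payment service
broker.publish("orders", {"type": "OrderCreated", "id": 1234}) # a business event
```

The publisher never names its subscribers; any number of services can react to the same business event.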
62. 2. Pick Topics with Business Significance
Orders, Payments, Returns, Invoices
63. Give your messages meaningful IDs and version them
e.g. OrdersService1-Order-1234-
• Include the service name
• Should relate to the real world
• Should be versioned (if mutable)
Note: the key used for sharding in Kafka may not be this key.
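One possible convention for such IDs, sketched as a helper (the exact format and field order here are assumptions, not prescribed by the slide):

```python
def message_id(service: str, entity: str, entity_id: int, version: int) -> str:
    """Build an ID that includes the service name, relates to a real-world
    entity, and carries a version (needed if the entity is mutable)."""
    return f"{service}-{entity}-{entity_id}-v{version}"

# Hypothetical example: the second version of order 1234 from OrdersService1.
mid = message_id("OrdersService1", "Order", 1234, 2)
# Note: the key used for partitioning/sharding in Kafka may differ from this ID.
```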
64. 3. Decouple publishers from subscribers
(Diagram: the UI, Orders, Returns, Fulfilment, Payment and Stock services, publishing and subscribing independently)
65. Add Request/Response only where needed
(Diagram: the same services, with REST used only for the interactions that need request/response)
66. 4. Use the log to
regenerate state
Avoid journaling incoming events
67. Event-sourced side effects
• Use offsets to tie these back to the stream
• Store in: Kafka, a Kafka Streams state store, or another DB
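Regenerating state from the log, with offsets recorded so side effects can be tied back to the stream, can be modelled like this (a minimal sketch: the event types and state shape are illustrative assumptions):

```python
def regenerate(log):
    """Rebuild order state by replaying the event log from offset 0,
    rather than journaling incoming events into a private store."""
    state = {}
    last_offset = -1
    for offset, event in enumerate(log):
        if event["type"] == "OrderCreated":
            state[event["id"]] = {"status": "created"}
        elif event["type"] == "PaymentReceived":
            state[event["id"]]["status"] = "paid"
        last_offset = offset  # record the offset to tie side effects back to the stream
    return state, last_offset

log = [
    {"type": "OrderCreated", "id": 1234},
    {"type": "PaymentReceived", "id": 1234},
]
state, offset = regenerate(log)  # state: {1234: {'status': 'paid'}}, offset: 1
```

Because the log is the source of truth, a crashed service can discard its local state and replay from offset 0 (or a checkpointed offset) to recover.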
68. 5. Apply the Single Writer Principle
• Change at source (by calling that service)
• Let the change propagate back
• Keep local copies read-only.
(Diagram: (1) change Orders at source via the Orders service; (2) let the change propagate through to the UI, Returns, Fulfilment, Payment and Stock services)
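The two-step flow on the slide can be sketched as follows (service names and the log shape are illustrative; a real system would propagate via Kafka topics):

```python
class OrdersService:
    """The single writer: the only place where orders are mutated."""
    def __init__(self):
        self.orders = {}
        self.log = []  # changes propagate to other services through this log

    def update(self, order_id, status):
        self.orders[order_id] = status
        self.log.append((order_id, status))

class StockService:
    """Holds a local copy of orders, updated only from the propagated log."""
    def __init__(self):
        self.local_orders = {}  # treated as read-only; never written directly

    def consume(self, log):
        for order_id, status in log:
            self.local_orders[order_id] = status

orders = OrdersService()
orders.update(1234, "created")  # (1) change Orders at source
stock = StockService()
stock.consume(orders.log)       # (2) let the change propagate through
```

Keeping the replicas read-only means every divergence traces back to one writer, which is what makes the propagated copies trustworthy.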
70. Leverage keeping only the latest version (table view)
(Diagram: topics holding versions 1..n of each key, compacted down to only the latest version)
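The "latest version only" table view is what Kafka's log compaction provides per key; a minimal model of that behaviour (keys and values here are made up):

```python
def compact(log):
    """Model of a compacted topic: later records for a key overwrite
    earlier ones, turning the stream into a table view."""
    table = {}
    for key, value in log:
        table[key] = value  # only the latest version per key survives
    return table

log = [
    ("order-1", "v1"),
    ("order-2", "v1"),
    ("order-1", "v2"),
    ("order-1", "v3"),
]
table = compact(log)  # {'order-1': 'v3', 'order-2': 'v1'}
```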
71. Join & Process on the fly
(Diagram: Kafka Streams joining the Orders stream with a compacted Customers stream, both held in Kafka)
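The stream-to-table join can be sketched as below (a toy model of what a Kafka Streams KStream-KTable join does; field names are assumptions):

```python
def join_stream_to_table(orders_stream, customers_table):
    """Enrich each order event on the fly with the latest customer record,
    instead of maintaining a pre-joined historic view."""
    for order in orders_stream:
        customer = customers_table.get(order["customer_id"], {})
        yield {**order, "customer_name": customer.get("name")}

customers = {"c1": {"name": "Alice"}}          # compacted stream, read as a table
orders = [{"id": 1234, "customer_id": "c1"}]   # event stream
enriched = list(join_stream_to_table(orders, customers))
```

Because the join happens at read time against the latest table view, there is no materialised historic copy to drift out of date.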
72. 7. Prefer stream processing over maintaining historic views
(Diagram: each service keeping its own historic copy of Orders; the “copies” diverge over time)
81. 10. Consider “Stream Management” Services
(Diagram: a Stream Management service sitting alongside Kafka)
• Retaining data => admin tasks
• Similar to the role of a DBA
• Data migration
• Repartitioning
• Latest/versioned views
• Environment management
• CQRS
84. So…
1. Don’t use Kafka for shopping carts!
2. Pick Topics with Business Significance
3. Decouple publishers from subscribers
4. Use the log to regenerate state
5. Apply the Single Writer Principle
6. Leverage keeping datasets inside the broker
7. Prefer stream processing over maintaining historic views
8. Sometimes you need historic views => replicate read-only
9. Use Schemas
10. Consider “Stream Management” Services
89. We need a data-centric toolset to do this
(Diagram: all your data held as immutable streams, feeding event-driven services and stream-management services, with legacy CDC, view replication and polyglot persistence; tables and streams together making up streaming services)
Today we’re going to talk about two fields which sit somewhat apart. Stream Processing & Business Systems
Services encourage us to share responsibilities. To rely on others.
To collaborate and evolve a business’s view of the world over time.
A world that is inherently decentralised.
Inherently spread across many interconnected systems.
Spreading many disparate subsets of our business’s state.
Yet our approach rarely reflects this.
We tend to think in centralised ways, we accumulate data. We protect it. We control it. We hoard it.
But it does not need to be this way.
My name is Ben Stopford. I work as an engineer on Apache Kafka. In a previous life I did the most centralised thing you could possibly do. I built a single, central database for a large financial institution. This talk is about exploring the exact opposite approach to this problem. It’s about thinking of the world in terms of the evolution and provisioning of state across many systems. It’s about embracing decentralisation. It’s about thinking in streams.