1) Event-driven microservices involve microservices communicating primarily through events published to an event backbone. This loosely couples microservices and allows for eventual data consistency.
2) Apache Kafka is an open-source streaming platform that can be used to build an event backbone, allowing microservices to reliably publish and subscribe to events. It supports streaming, storage, and processing of event data.
3) Common patterns for event-driven microservices include database per service for independent data ownership, sagas for coordinated multi-step processes, event sourcing to capture all state changes, and CQRS to separate reads from writes.
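Of these patterns, event sourcing is the easiest to show in a few lines: instead of storing current state, store every state change as an event and rebuild state by replaying them. A minimal sketch (the `OrderCreated`/`OrderCompleted` event names and fields are illustrative, not from any particular system):

```python
from dataclasses import dataclass

# Hypothetical events for an order lifecycle (illustrative names).
@dataclass
class OrderCreated:
    order_id: str
    total: float

@dataclass
class OrderCompleted:
    order_id: str

def apply(state: dict, event) -> dict:
    """Fold one event into the current state."""
    if isinstance(event, OrderCreated):
        return {"order_id": event.order_id, "total": event.total, "status": "created"}
    if isinstance(event, OrderCompleted):
        return {**state, "status": "completed"}
    return state

def replay(events) -> dict:
    """Event sourcing: current state is derived by replaying the full history."""
    state: dict = {}
    for e in events:
        state = apply(state, e)
    return state

history = [OrderCreated("o-1", 99.0), OrderCompleted("o-1")]
print(replay(history))  # {'order_id': 'o-1', 'total': 99.0, 'status': 'completed'}
```

Because the history is the source of truth, a separate read model (the CQRS side) can be rebuilt from the same events at any time.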
Why event-driven?
Proper loose coupling requires asynchronous communication
Enables responsive applications
In the real world, things often take time to complete
Kafka is a great choice for the event backbone
Publish/subscribe
Stream history
Partitioning – workload distribution and ordering
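The partitioning point can be sketched in a few lines: Kafka routes each record to a partition by hashing its key (murmur2 in the real client; a toy byte-sum hash below, purely for illustration), so all events with the same key land on the same partition and stay ordered, while different keys spread work across partitions.

```python
def partition_for(key: bytes, num_partitions: int) -> int:
    # Toy stand-in for Kafka's default key hashing: deterministic, so every
    # event with the same key maps to the same partition (per-key ordering).
    return sum(key) % num_partitions

# Same key -> same partition; different keys distribute the workload.
p1 = partition_for(b"order-42", 6)
p2 = partition_for(b"order-42", 6)
p3 = partition_for(b"order-7", 6)
print(p1 == p2)  # True
```

Note that ordering is only guaranteed within a partition, which is why the record key should be chosen to match the entity whose events must stay in order.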
Stream processing takes a sequence of data (the stream) and applies a sequence of processing to each element in the stream.
Optimised for this continuous, event-at-a-time processing
Techniques such as pipelining or transaction batching keep throughput high
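The event-at-a-time idea can be sketched with plain Python generators: each stage consumes one element, processes it, and yields it onward, so the pipeline handles a continuous stream without buffering it all in memory. The sensor-reading stages below are invented for illustration, not part of any stream-processing framework.

```python
def parse(lines):
    """Stage 1: turn each raw line into a (sensor, reading) event."""
    for line in lines:
        sensor, value = line.split(",")
        yield sensor, float(value)

def above(threshold, events):
    """Stage 2: pass through only readings above a threshold."""
    for sensor, value in events:
        if value > threshold:
            yield sensor, value

stream = ["a,1.0", "b,5.5", "a,7.2"]
alerts = list(above(4.0, parse(stream)))
print(alerts)  # [('b', 5.5), ('a', 7.2)]
```

Real stream processors (Kafka Streams, Flink, and so on) add partition-aware distribution and fault tolerance on top of this same stage-by-stage shape.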
Outcome is a design built from loosely coupled microservices linked through an event-driven architecture using one or more event backbones
Event storming workshop: 6-8 people including domain experts and stakeholders – sticky notes, stand up, collaborate
Domain event – past tense, e.g. “order completed”
Actor – users
Command – action/decision
Data - needed for the commands
Phases
Domain events, placed on a timeline
Commands
Data
Aggregates – group related events, commands, and data – each one a potential microservice boundary
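The grouping step at the end of the workshop can be sketched as a simple fold over the sticky notes: tag each note with the aggregate it relates to, then collect notes per aggregate. The note contents and aggregate names below are invented purely for illustration.

```python
# Hypothetical event-storming output: sticky notes tagged by aggregate.
notes = [
    {"kind": "command", "text": "place order", "aggregate": "Order"},
    {"kind": "event", "text": "order placed", "aggregate": "Order"},
    {"kind": "data", "text": "shipping address", "aggregate": "Shipment"},
    {"kind": "event", "text": "shipment dispatched", "aggregate": "Shipment"},
]

def group_by_aggregate(notes):
    """Group related events/commands/data: each group is a candidate microservice."""
    groups = {}
    for note in notes:
        groups.setdefault(note["aggregate"], []).append(note["text"])
    return groups

print(group_by_aggregate(notes))
```

Each resulting group suggests a service that owns its own data and publishes its domain events to the backbone.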
Could use Kafka’s stream-table duality, which layers a table view on top of a key-value-based stream
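Conceptually, the stream-table duality means a table is just the latest value per key folded out of a keyed event stream, which is what Kafka's log compaction and the Kafka Streams KTable abstraction provide. A minimal sketch of that fold:

```python
def materialize(changelog):
    """Stream -> table: keep the latest value seen for each key."""
    table = {}
    for key, value in changelog:
        table[key] = value  # a later event for a key overwrites the older one
    return table

events = [("order-1", "created"), ("order-2", "created"), ("order-1", "completed")]
print(materialize(events))  # {'order-1': 'completed', 'order-2': 'created'}
```

Replaying the stream from the start always reproduces the same table, which is why the stream can serve as the system of record.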