Event sourcing captures all changes to application state as a sequence of events. These events are persisted in an event log and can be replayed to recover application state. Eventuate is a toolkit for building applications composed of event-driven and event-sourced services that communicate via causally ordered event streams.
2. What is Event sourcing
Why Event sourcing
Command Query Responsibility Segregation (CQRS).
What is Eventuate
Coding with Eventuate
Demo
Agenda
3. Events happen in the past. For example, "the seat was reserved,"
"the cash was dispensed." Notice how we describe these events using
the past tense.
Events are immutable facts that are only ever appended to an event
log.
Event sourcing captures all changes to application state as a
sequence of events.
These events are persisted in an event log and can be replayed to
recover application state.
What is Event sourcing
4. A/C     Name    Balance
   016789  Bob     10
   012345  Johan   20
Event
Debit
Credit
Debit
How should you manage your bank account balance?
1. As a column in a sheet/table, or
2. As the sum of all credits and debits.
Why Event sourcing
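The bank-account example above can be sketched in a few lines of Java. This is a minimal illustration, not Eventuate's API: the `AccountEvent` record and `replay` method are invented names. The point is that the balance is never stored as a mutable column; it is recomputed by replaying the append-only event log.

```java
import java.util.List;

// Minimal sketch (names are illustrative): the balance is derived by
// replaying the full sequence of credit/debit events, never updated in place.
public class AccountReplay {

    // An event is an immutable fact: a credit or debit of some amount.
    record AccountEvent(String type, long amount) {}

    // Replaying the event log recovers the current balance:
    // balance = sum of all credits minus sum of all debits.
    static long replay(List<AccountEvent> log) {
        long balance = 0;
        for (AccountEvent e : log) {
            balance += e.type().equals("Credit") ? e.amount() : -e.amount();
        }
        return balance;
    }

    public static void main(String[] args) {
        // Bob's account: events are only ever appended to the log.
        List<AccountEvent> log = List.of(
            new AccountEvent("Credit", 50),
            new AccountEvent("Debit", 30),
            new AccountEvent("Credit", 20),
            new AccountEvent("Debit", 30));
        System.out.println(replay(log)); // 50 - 30 + 20 - 30 = 10
    }
}
```

Deleting or editing a past event is never needed: a correction is itself a new event appended to the log.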
5. C (Command) – Insert, Update, Delete
and
Q (Query) – Select
Command Query Responsibility
Segregation (CQRS)
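A hedged sketch of the split above: the command side (insert/update/delete) appends events, while the query side (select) reads a separate view projected from those events. The class and method names here are invented for illustration and are not from Eventuate or any particular CQRS framework.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative CQRS sketch: commands and queries go through separate models.
public class CqrsSketch {

    record Deposited(String account, long amount) {}

    // Command side: validates input and appends events; it never serves reads.
    static class CommandHandler {
        final List<Deposited> eventLog = new ArrayList<>();
        void deposit(String account, long amount) {
            if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
            eventLog.add(new Deposited(account, amount));
        }
    }

    // Query side: a read-optimized view, rebuilt by projecting the events.
    static class BalanceView {
        final Map<String, Long> balances = new HashMap<>();
        void project(List<Deposited> events) {
            balances.clear();
            for (Deposited e : events)
                balances.merge(e.account(), e.amount(), Long::sum);
        }
        long balanceOf(String account) {
            return balances.getOrDefault(account, 0L);
        }
    }

    public static void main(String[] args) {
        CommandHandler commands = new CommandHandler();
        commands.deposit("012345", 20);
        commands.deposit("016789", 10);

        BalanceView queries = new BalanceView();
        queries.project(commands.eventLog);
        System.out.println(queries.balanceOf("012345")); // 20
    }
}
```

Because the two sides share only the event log, each can be scaled and optimized independently, which is the main motivation for CQRS.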
6. Eventuate is a toolkit for building applications composed of
event-driven and event-sourced services that communicate via causally
ordered event streams. Services can either be co-located on a single
node or distributed up to global scale.
Eventuate has a Java and Scala API, is written in Scala, and is built
on top of Akka, a toolkit for building highly concurrent, distributed,
and resilient message-driven applications on the JVM.
Eventuate provides plugins for a Cassandra storage backend and a
LevelDB storage backend.
What is Eventuate
1. The batch layer runs functions over the master dataset to precompute intermediate data called batch views.
2. The speed layer compensates for the high latency of the batch layer by providing low-latency updates using data that has yet to be precomputed into a batch view.
3. Queries are then satisfied by processing data from the serving-layer views and the speed-layer views, and merging the results.
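The query-time merge in step 3 can be sketched as follows. This is an invented minimal example (the method and key names are not from any library): per-key counts from the precomputed batch view are combined with counts from the speed layer covering data that arrived after the last batch run.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a Lambda Architecture query: merge the batch view
// with the speed layer's low-latency view to get a complete answer.
public class LambdaQuery {

    // Merge per-key counts: batch view + speed view for each key.
    static Map<String, Long> merge(Map<String, Long> batchView,
                                   Map<String, Long> speedView) {
        Map<String, Long> result = new HashMap<>(batchView);
        speedView.forEach((k, v) -> result.merge(k, v, Long::sum));
        return result;
    }

    public static void main(String[] args) {
        Map<String, Long> batch = Map.of("pageA", 100L, "pageB", 40L); // precomputed
        Map<String, Long> speed = Map.of("pageA", 3L, "pageC", 1L);    // recent data only
        System.out.println(merge(batch, speed).get("pageA")); // 103
    }
}
```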
1. The batch layer runs functions over the master dataset to precompute intermediate data called batch views.
2. The batch layer has three components:
1. Master dataset: an immutable, append-only dataset.
2. Precomputing function: generally a MapReduce function that operates on the master dataset and produces a batch view. Precomputing functions are used for high-latency queries, such as historical queries.
3. Batch view: the outcome of the precomputing function.
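The three components above can be sketched in miniature. This is a hedged illustration (the `PageView` record and `precomputeBatchView` name are invented, and a real batch layer would run as a distributed MapReduce job): the precomputing function scans the entire immutable master dataset and emits a batch view.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative batch layer: master dataset -> precomputing function -> batch view.
public class BatchLayer {

    // One raw record in the master dataset (append-only, never updated).
    record PageView(String url, long timestamp) {}

    // The precomputing function: a simple map-reduce (group + count) that
    // always reads the whole master dataset.
    static Map<String, Long> precomputeBatchView(List<PageView> masterDataset) {
        return masterDataset.stream()
                .collect(Collectors.groupingBy(PageView::url, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<PageView> master = List.of(
            new PageView("/home", 1L),
            new PageView("/home", 2L),
            new PageView("/about", 3L));
        System.out.println(precomputeBatchView(master).get("/home")); // 2
    }
}
```

Because the master dataset stores raw records rather than a pre-aggregated view, the same dataset can later answer questions (per-hour counts, unique visitors) that weren't anticipated when it was collected.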
1. The master dataset is the only part of the Lambda Architecture that absolutely must be safeguarded from corruption.
2. Data is raw: when designing your Big Data system, you want to be able to answer as many questions as possible. To do so, store raw data in the master dataset: if you store only normalized data, you lose many facts about the data. That said, the level of rawness required still depends on the use case.
3. Data is immutable: you cannot update or delete data, only append new data to the dataset.
There are some vital advantages to this:
a) Human-fault tolerance: if bad data is added to the dataset by mistake and discovered later, you simply remove the bad data and recompute over the master dataset.
b) Simplicity: an immutable dataset is simple because it does not require the indexes that mutable data needs.
4. Data is eternally true: the key consequence of immutability is that each piece of data is true in perpetuity. That is, a piece of data, once true, must always be true. Immutability wouldn't make sense without this property.
1. Performance:
a) RA (recomputation algorithms): require computational effort to process the entire master dataset.
b) IA (incremental algorithms): require fewer computational resources but may generate much larger batch views.
2. Human-fault tolerance:
a) RA: extremely tolerant of human errors, because the batch views are continually rebuilt.
b) IA: don't facilitate repairing errors in the batch views; repairs are ad hoc and may require estimates.
3. Generality:
a) RA: the complexity of the algorithm is addressed during precomputation, resulting in simple batch views and low-latency, on-the-fly processing.
b) IA: require special tailoring; may shift complexity to on-the-fly query processing.
4. Conclusion:
a) RA: essential to supporting a robust data-processing system.
b) IA: can increase the efficiency of your system, but only as a supplement to recomputation algorithms.
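The two styles can be contrasted on a running total. This is a toy sketch with invented names: the recomputation algorithm rescans the whole master dataset on every batch run, while the incremental algorithm folds only the new data into the previous view.

```java
import java.util.List;

// Recomputation vs incremental algorithms on a simple sum-based batch view.
public class RecomputeVsIncremental {

    // Recomputation: O(total data) work per run, but trivially repairs any
    // past error, because the view is rebuilt from scratch each time.
    static long recompute(List<Long> masterDataset) {
        return masterDataset.stream().mapToLong(Long::longValue).sum();
    }

    // Incremental: O(new data) work, but a bad view value persists until it
    // is repaired by hand (or by falling back to recomputation).
    static long increment(long previousView, List<Long> newData) {
        return previousView + newData.stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        List<Long> oldData = List.of(10L, 20L);
        List<Long> newData = List.of(5L);
        long batchView = recompute(oldData);               // 30
        System.out.println(increment(batchView, newData)); // 35
    }
}
```

Both paths yield the same answer here; the trade-off only appears at scale, where rescanning the master dataset dominates cost but also guarantees that any corrupted view can be rebuilt.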
1. Random reads: the data the realtime views contain must be indexed.
2. Scalability: typically this implies that realtime views can be distributed across many machines. Nowadays, sharding is widely used to meet scalability requirements in databases.
3. Fault tolerance: accomplished by replicating data across machines, so there are backups should a single machine fail.
1. Human-fault tolerance:
a) If there is a bug in a batch job: discard the batch view and recompute it.
b) If there is bad data in the master dataset: discard the buggy data and reprocess the old data. The master dataset is immutable and append-only, so buggy data can easily be discarded.
c) If there is a bug in a query: redeploy the query layer.
2. In the Lambda Architecture we can use a different algorithm in each layer; for example, an exact search algorithm in the batch layer and an approximate search algorithm in the speed layer.
3. Under the Lambda Architecture, results from the batch layer are always eventually consistent: as soon as a fresh batch update completes, results from the batch layer are consistent.