Nadav Wiener
Scala Tech Lead @ Riskified
Scala since 2007
Akka Streams since 2016
RISKIFIED
250 total employees in New York and Tel Aviv
$64M in funding secured to date
1,000,000 global orders reviewed every day
1000 merchants, including several publicly traded companies
Time Windowing
Streaming Data Platforms
vs Libraries
Glazier: Event Time Windowing
Libraries
Spark /
Flink
Kafka
Streams
Akka
Streams
Monix /
fs2
Platforms
Poll
?
This was our
dilemma
Behavioral Data
Proxy
?
no proxy
?
Proxy
?
Gather lowest latencies
● per session
● per 10-second window
We want to:
Browser HTTP Server
latencies
Gather lowest latencies
● per session
● per 10-second window
Browser HTTP Server
latencies
write to journal
Gather lowest latencies
● per session
● per 10-second window
Browser HTTP Server
latencies
write to journal
10 second windows (for each user)
Lowest Latency
Stream
Processing
lowest
latencies
Database
consume
Time Windowing
Platforms (Spark/Flink): 😀
Libraries (Akka Streams): 😕
Platforms vs Libraries?
Platforms are:
✔ Powerful
but:
✘ Big fish to catch
✘ Constraining
Platforms
Libraries
You are here
Spark /
Flink
Kafka
Streams
Akka
Streams
Monix /
fs2
Platforms
Libraries
Platforms
Libraries
You are here
Gather lowest latencies
● per session
● per 10-second window
Browser HTTP Server
latencies
lowest
latencies
Database
write to journal
consume
Stream
Processing
Take #1
case class LatencyEntry(sessionId: String,
latency: Duration)
session id
latency
LatencyEntry
Stream Processing Take #1
latencySource
.groupBy(_.sessionId)
.groupedWithin(Int.MaxValue, 10.seconds)
.map(group => group.minBy(_.latency))
.mergeSubstreams
.to(databaseSink)
Partition into per-session substreams
Stream Processing Take #1
session id
latency
LatencyEntry
latencySource
.groupBy(_.sessionId)
.groupedWithin(Int.MaxValue, 10.seconds)
.map(group => group.minBy(_.latency))
.mergeSubstreams
.to(databaseSink)
Accumulate & emit every 10s
Partition into per-session substreams
Stream Processing Take #1
session id
latency
LatencyEntry
latencySource
.groupBy(_.sessionId)
.groupedWithin(Int.MaxValue, 10.seconds)
.map(group => group.minBy(_.latency))
.mergeSubstreams
.to(databaseSink)
Accumulate & emit every 10s
Lowest latency in accumulated data
Partition into per-session substreams
Stream Processing Take #1
session id
latency
LatencyEntry
latencySource
.groupBy(_.sessionId)
.groupedWithin(Int.MaxValue, 10.seconds)
.map(group => group.minBy(_.latency))
.mergeSubstreams
.to(databaseSink)
Accumulate & emit every 10s
Lowest latency in accumulated data
Partition into per-session substreams
Merge substreams &
send to downstream db
Stream Processing Take #1
session id
latency
LatencyEntry
But this is naive...
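To make the grouping concrete, here is a batch restatement of Take #1's logic over plain collections. `latencyMs` is a millisecond `Long` standing in for the slide's `Duration` so the sketch runs without dependencies; `lowestPerSession` is a hypothetical helper, not part of any library:

```scala
// Batch analogue of groupBy(sessionId) + minBy(latency): for a finished
// batch of entries, keep the lowest-latency entry per session.
case class LatencyEntry(sessionId: String, latencyMs: Long)

def lowestPerSession(entries: Seq[LatencyEntry]): Map[String, LatencyEntry] =
  entries.groupBy(_.sessionId).map { case (id, group) =>
    id -> group.minBy(_.latencyMs) // lowest latency seen for this session
  }

lowestPerSession(Seq(
  LatencyEntry("a", 120), LatencyEntry("a", 80), LatencyEntry("b", 200)
))
// → Map("a" -> LatencyEntry("a", 80), "b" -> LatencyEntry("b", 200))
```

The streaming version does the same computation incrementally, emitting one such minimum per session every 10 seconds.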
write to journal
Browser HTTP Server
latencies
1) Bring up only the HTTP server,
and wait for latencies to accumulate
1
1) Bring up only the HTTP server,
and wait for latencies to accumulate
2) Only then bring up stream processing
write to journal
consume
1 Browser HTTP Server
latencies
2
Database
Stream
Processing
lowest
latencies
1) Bring up only the HTTP server,
and wait for latencies to accumulate
2) Only then bring up stream processing
Instead of this:
10 second windows (for each user)
Lowest Latency
2
Database
write to journal
consume
1 Browser HTTP Server
latencies
lowest
latencies Stream
Processing
10 second windows (for each user)
Lowest Latency
We get this:
2
Database
write to journal
consume
1 Browser HTTP Server
latencies
lowest
latencies Stream
Processing
WE SHOULDN’T BE
LOOKING AT THE
CLOCK
Processing
Time
Event
Time
Database
write to journal
consume
Browser HTTP Server
latencies
lowest
latencies Stream
Processing
Event
Time
● Timestamp as payload
● Plays well with
distributed systems
● Not available in libraries
Processing
Time
● Time derived from clock
● Less suitable for
business logic
● Available in libraries😕
😀
Event
Time
Processing
Time
?
Glazier
Event time windowing library
Glazier
Tour of the API
Under the hood
Glazier |+| Akka Streams
Glazier
case class LatencyEntry(sessionId: String,
latency: Duration,
timestamp: Timestamp)
session id
latency
timestamp
LatencyEntry
latencySource
.timestampWith(_.timestamp)
.keyBy(_.sessionId)
.windowBy(Window.tumbling(10.seconds), maxLateness = 1.second)
.reduce((a, b) => Seq(a, b).minBy(_.latency))
.mergeSubstreams
.to(databaseSink)
Event-time obtained from LatencyEntry.timestamp
session id
latency
timestamp
LatencyEntry
latencySource
.timestampWith(_.timestamp)
.keyBy(_.sessionId)
.windowBy(Window.tumbling(10.seconds), maxLateness = 1.second)
.reduce((a, b) => Seq(a, b).minBy(_.latency))
.mergeSubstreams
.to(databaseSink)
Partitioned by session id
session id
latency
timestamp
LatencyEntry
latencySource
.timestampWith(_.timestamp)
.keyBy(_.sessionId)
.windowBy(Window.tumbling(10.seconds), maxLateness = 1.second)
.reduce((a, b) => Seq(a, b).minBy(_.latency))
.mergeSubstreams
.to(databaseSink)
10 second (event-time) windows
10s 10s
latencySource
.timestampWith(_.timestamp)
.keyBy(_.sessionId)
.windowBy(Window.tumbling(10.seconds), maxLateness = 1.second)
.reduce((a, b) => Seq(a, b).minBy(_.latency))
.mergeSubstreams
.to(databaseSink)
1 second grace period
Events are not guaranteed to arrive in order,
so windows stay around for late events.
10s grace
latencySource
.timestampWith(_.timestamp)
.keyBy(_.sessionId)
.windowBy(Window.tumbling(10.seconds), maxLateness = 1.second)
.reduce((a, b) => Seq(a, b).minBy(_.latency))
.mergeSubstreams
.to(databaseSink)
Emit lowest latency in window, once it closes
session id
latency
timestamp
LatencyEntry
latencySource
.timestampWith(_.timestamp)
.keyBy(_.sessionId)
.windowBy(Window.tumbling(10.seconds), maxLateness = 1.second)
.reduce((a, b) => Seq(a, b).minBy(_.latency))
.mergeSubstreams
.to(databaseSink)
Merge window substreams
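The result the Glazier pipeline above produces can be restated as a batch over plain collections: the lowest latency per (session, 10-second event-time window). Millisecond `Long`s stand in for the slide's `Duration`/`Timestamp` types, and `lowestPerWindow` is an illustrative helper, not library API:

```scala
// Event-time keying: each entry is assigned to the tumbling window its own
// timestamp falls into, regardless of when it arrives.
case class LatencyEntry(sessionId: String, latencyMs: Long, timestampMs: Long)

def lowestPerWindow(entries: Seq[LatencyEntry],
                    spanMs: Long): Map[(String, Long), LatencyEntry] =
  entries
    .groupBy(e => (e.sessionId, e.timestampMs - e.timestampMs % spanMs)) // (session, window start)
    .map { case (key, group) => key -> group.minBy(_.latencyMs) }
```

Because the window key is derived from the event's own timestamp, replaying a backlog yields the same windows as live traffic — the property the processing-time version lacked.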
Windowing Functions
type WindowingFunction = Timestamp => immutable.Seq[Interval]
def tumbling(span: Span): WindowingFunction = ...
span span span span
Windowing Functions
type WindowingFunction = Timestamp => immutable.Seq[Interval]
def tumbling(span: Span): WindowingFunction = { timestamp =>
val elapsed = timestamp % span
val start = timestamp - elapsed
List(Interval(start, start + span))
}
span span span span
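The tumbling function above runs as-is once the types are pinned down; here is a dependency-free restatement with `Timestamp` and `Span` modelled as millisecond `Long`s and `Interval` as a simple case class (assumptions for the sketch; Glazier's actual types differ):

```scala
// A timestamp belongs to exactly one tumbling window: the one starting at
// the nearest lower multiple of `span`.
case class Interval(start: Long, end: Long)
type WindowingFunction = Long => List[Interval]

def tumbling(span: Long): WindowingFunction = { timestamp =>
  val elapsed = timestamp % span // time since the current window opened
  val start = timestamp - elapsed
  List(Interval(start, start + span))
}

tumbling(10000L)(22000L) // → List(Interval(20000, 30000))
```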
Windowing Functions
type WindowingFunction = Timestamp => immutable.Seq[Interval]
def sliding(span: Span, step: Span): WindowingFunction = ...
span
Windowing Functions
type WindowingFunction = Timestamp => immutable.Seq[Interval]
def sliding(span: Span, step: Span): WindowingFunction = ...
span / step
Windowing Functions
type WindowingFunction = Timestamp => immutable.Seq[Interval]
def sliding(span: Span, step: Span): WindowingFunction = ...
step
span / step
Windowing Functions
type WindowingFunction = Timestamp => immutable.Seq[Interval]
def sliding(span: Span, step: Span): WindowingFunction = ...
step step step step step step step span
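The deck elides `sliding`'s body, so the following is one plausible implementation rather than the library's: a timestamp falls into every window whose start is a multiple of `step` within `(ts - span, ts]`; negative starts are skipped here for simplicity.

```scala
case class Interval(start: Long, end: Long)

def sliding(span: Long, step: Long): Long => List[Interval] = { ts =>
  val lastStart = ts - ts % step // latest window start containing ts
  Iterator
    .iterate(lastStart)(_ - step)             // walk window starts backwards
    .takeWhile(s => s + span > ts && s >= 0)  // ...while they still cover ts
    .map(s => Interval(s, s + span))
    .toList
}

sliding(span = 10L, step = 5L)(12L) // → List(Interval(10, 20), Interval(5, 15))
```

With `span == step` this degenerates to a single window per timestamp, i.e. the tumbling case.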
State
Event
Active
Windows
Logical
Clock
Instructions
Active
Windows
Logical
Clock
State
Step
Windowing State
case class Step(presentTime: Timestamp,
windows: Map[Interval, Set[Any]])
Timekeeping
Newer events advance presentTime
Active
Windows
Logical
Clock
case class Step(presentTime: Timestamp,
windows: Map[Interval, Set[Any]])
Timekeeping
Windows represented as key-set by interval
Active
Windows
Logical
Clock
case class Step(presentTime: Timestamp,
windows: Map[Interval, Set[Any]])
Timekeeping
Newer events advance presentTime
10s…20s
0s…10s
Timestamp 19s
1, 2
2, 5
def step[A](event: Event[A]): State[Step, Vector[Instruction[A]]] =
for {
_ <- advanceClock(event.timestamp, maxLateness)
closeInstructions <- closeWindows
openInstructions <- openWindows(event)
handleInstructions <- handleEvent(event)
} yield closeInstructions ++ openInstructions ++ handleInstructions
Timekeeping
10s…20s
0s…10s
Timestamp 19s
1, 2
2, 5
def step[A](event: Event[A]): State[Step, Vector[Instruction[A]]] =
for {
_ <- advanceClock(event.timestamp, maxLateness)
closeInstructions <- closeWindows
openInstructions <- openWindows(event)
handleInstructions <- handleEvent(event)
} yield closeInstructions ++ openInstructions ++ handleInstructions
Timekeeping
LatencyEntry(4, 100ms, 22s) 10s…20s
0s…10s 1, 2
2, 5
Timestamp 22s
def step[A](event: Event[A]): State[Step, Vector[Instruction[A]]] =
for {
_ <- advanceClock(event.timestamp, maxLateness)
closeInstructions <- closeWindows
openInstructions <- openWindows(event)
handleInstructions <- handleEvent(event)
} yield closeInstructions ++ openInstructions ++ handleInstructions
Active Windows
10s…20s
0s…10s
Timestamp 22s
1, 2
2, 5
LatencyEntry(4, 100ms, 22s)
10s…20s
Timestamp 22s
2, 5
def step[A](event: Event[A]): State[Step, Vector[Instruction[A]]] =
for {
_ <- advanceClock(event.timestamp, maxLateness)
closeInstructions <- closeWindows
openInstructions <- openWindows(event)
handleInstructions <- handleEvent(event)
} yield closeInstructions ++ openInstructions ++ handleInstructions
Active Windows
20s…30s 4
LatencyEntry(4, 100ms, 22s)
10s…20s
Timestamp 22s
2, 5
def step[A](event: Event[A]): State[Step, Vector[Instruction[A]]] =
for {
_ <- advanceClock(event.timestamp, maxLateness)
closeInstructions <- closeWindows
openInstructions <- openWindows(event)
handleInstructions <- handleEvent(event)
} yield closeInstructions ++ openInstructions ++ handleInstructions
Instructions
20s…30s 4
LatencyEntry(4, 100ms, 22s)
10s…20s
Timestamp 22s
2, 5
def step[A](event: Event[A]): State[Step, Vector[Instruction[A]]] =
for {
_ <- advanceClock(event.timestamp, maxLateness)
closeInstructions <- closeWindows
openInstructions <- openWindows(event)
handleInstructions <- handleEvent(event)
} yield closeInstructions ++ openInstructions ++ handleInstructions
Instructions
20s…30s 4
LatencyEntry(4, 100ms, 22s)
10s…20s
Timestamp 22s
2, 5
closeInstructions ++ openInstructions ++ handleInstructions == List(
WindowStatusChange(Window(1, Interval(0s, 10s)), Close),
WindowStatusChange(Window(2, Interval(0s, 10s)), Close),
WindowStatusChange(Window(4, Interval(20s, 30s)), Open),
HandleEvent(Window(4, Interval(20s, 30s)),
LatencyEntry(4, 100ms, 22s))
)
Instructions
20s…30s 4
LatencyEntry(4, 100ms, 22s)
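The step-by-step diagrams above can be condensed into a dependency-free sketch. The real Glazier `step` is written against a State monad with richer instruction types; the names `CloseWindow`/`OpenWindow`/`Handle` and the exact close-timing rule here are simplifications for illustration:

```scala
case class Interval(start: Long, end: Long)
case class Event(key: Any, timestamp: Long)

sealed trait Instruction
case class CloseWindow(key: Any, interval: Interval) extends Instruction
case class OpenWindow(key: Any, interval: Interval) extends Instruction
case class Handle(key: Any, interval: Interval, event: Event) extends Instruction

case class Step(presentTime: Long, windows: Map[Interval, Set[Any]])

def step(state: Step, event: Event,
         span: Long, maxLateness: Long): (Step, Vector[Instruction]) = {
  // 1) advance the logical clock (it never moves backwards)
  val now = state.presentTime max event.timestamp
  // 2) close windows whose end plus grace period has passed
  val (expired, alive) = state.windows.partition { case (iv, _) => iv.end + maxLateness <= now }
  val closes: Vector[Instruction] =
    for ((iv, keys) <- expired.toVector; k <- keys.toVector) yield CloseWindow(k, iv)
  // 3) open the event's tumbling window for its key, if not already open
  val start = event.timestamp - event.timestamp % span
  val iv = Interval(start, start + span)
  val opens: Vector[Instruction] =
    if (alive.getOrElse(iv, Set.empty).contains(event.key)) Vector.empty
    else Vector(OpenWindow(event.key, iv))
  val windows = alive.updated(iv, alive.getOrElse(iv, Set.empty[Any]) + event.key)
  // 4) route the event to its window
  (Step(now, windows), closes ++ opens :+ Handle(event.key, iv, event))
}
```

In the slide's scenario — clock at 19s, windows 0s…10s and 10s…20s, then `LatencyEntry(4, 100ms, 22s)` — this sketch closes 0s…10s, opens 20s…30s for key 4, and routes the event there, matching the instruction list above.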
def fromFlow[A](glazier: Glazier[A], flow: Flow[A, …]): SubFlow[…] =
flow
.scan(Glazier.Empty)((state, event) => glazier.runStep(state, event))
.mapConcat(_.instructions)
.groupBy(_.window)
.takeWhile {
case WindowStatusChange(_, WindowStatus.Close) => false
case _ => true
}
.collect { case HandleEvent(_, value) => value }
Glazier |+| Akka Streams
def fromFlow[A](glazier: Glazier[A], flow: Flow[A, …]): SubFlow[…] =
flow
.scan(Glazier.Empty)((state, event) => glazier.runStep(state, event))
.mapConcat(_.instructions)
.groupBy(_.window)
.takeWhile {
case WindowStatusChange(_, WindowStatus.Close) => false
case _ => true
}
.collect { case HandleEvent(_, value) => value }
Akka Streams Support
def fromFlow[A](glazier: Glazier[A], flow: Flow[A, …]): SubFlow[…] =
flow
.scan(Glazier.Empty)((state, event) => glazier.runStep(state, event))
.mapConcat(_.instructions)
.groupBy(_.window)
.takeWhile {
case WindowStatusChange(_, WindowStatus.Close) => false
case _ => true
}
.collect { case HandleEvent(_, value) => value }
Akka Streams Support
def fromFlow[A](glazier: Glazier[A], flow: Flow[A, …]): SubFlow[…] =
flow
.scan(Glazier.Empty)((state, event) => glazier.runStep(state, event))
.mapConcat(_.instructions)
.groupBy(_.window)
.takeWhile {
case WindowStatusChange(_, WindowStatus.Close) => false
case _ => true
}
.collect { case HandleEvent(_, value) => value }
Akka Streams Support
def fromFlow[A](glazier: Glazier[A], flow: Flow[A, …]): SubFlow[…] =
flow
.scan(Glazier.Empty)((state, event) => glazier.runStep(state, event))
.mapConcat(_.instructions)
.groupBy(_.window)
.takeWhile {
case WindowStatusChange(_, WindowStatus.Close) => false
case _ => true
}
.collect { case HandleEvent(_, value) => value }
Akka Streams Support
def fromFlow[A](glazier: Glazier[A], flow: Flow[A, …]): SubFlow[…] =
flow
.scan(Glazier.Empty)((state, event) => glazier.runStep(state, event))
.mapConcat(_.instructions)
.groupBy(_.window)
.takeWhile {
case WindowStatusChange(_, WindowStatus.Close) => false
case _ => true
}
.collect { case HandleEvent(_, value) => value }
Akka Streams Support
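The per-window trimming that `fromFlow` performs can be restated over a plain `List`: each window substream passes elements through until its Close status change arrives, and only event payloads survive. The real version operates on Akka Streams substreams; these `Instruction` shapes are simplified stand-ins for Glazier's:

```scala
sealed trait Instruction
case class WindowStatusChange(closed: Boolean) extends Instruction
case class HandleEvent(value: Int) extends Instruction

def windowValues(substream: List[Instruction]): List[Int] =
  substream
    .takeWhile { case WindowStatusChange(true) => false; case _ => true } // stop at Close
    .collect { case HandleEvent(v) => v }                                 // keep payloads only

windowValues(List(HandleEvent(1), HandleEvent(2), WindowStatusChange(true), HandleEvent(3)))
// → List(1, 2)
```

Mapping `takeWhile`/`collect` onto substreams is what lets Akka Streams tear the substream down as soon as the window closes.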
latencySource
.timestampWith(_.timestamp)
.keyBy(_.sessionId)
.windowBy(Window.tumbling(10.seconds), maxLateness = 1.second)
.reduce((a, b) => Seq(a, b).minBy(_.latency))
.mergeSubstreams
User code in windowed substream
Akka Streams Support
latencySource
.assignTimestamps(_.timestamp)
.keyBy(_.sessionId)
.window(TumblingEventTimeWindows.of(Time.seconds(10)))
.allowedLateness(Time.seconds(1))
.reduceWith { case (r1, r2) => Seq(r1, r2).minBy(_.latency) }
latencySource
.timestampWith(_.timestamp)
.keyBy(_.sessionId)
.windowBy(Window.tumbling(10.seconds), maxLateness = 1.second)
.reduce((a, b) => Seq(a, b).minBy(_.latency))
.mergeSubstreams
vs Flink API
Time Windowing
Spark/Flink: 😀
Akka Streams: 😕
with Glazier: 😀
Questions?
Takeaways
Platforms vs Libraries?
Platforms:
✔ Powerful
✘ Upfront investment
✘ Constraining
Libraries:
✔ Flexible
✘ Missing functionality
Platforms
Libraries
You are here
Platforms
Libraries
Platforms
Libraries
Significant overlap
Thank you for
your time!
Thank you!
Glazier
https://github.com/riskified/glazier
“Streaming Microservices”, Dean Wampler
https://slideslive.com/38908773/kafkabased-microservices-with-akka-streams-and-kafka-streams
“Windowing data in Akka Streams”, Adam Warski
https://softwaremill.com/windowing-data-in-akka-streams/
About time
Speaker notes
  1. Riskified is the world's leading eCommerce fraud prevention company. We use machine learning and behavioural analytics to protect our customers from online fraud.
  2. How many familiar with Akka Streams? How many with Kafka or Flink? Faced with a decision between the two?
  3. “One thing we analyze is page visits to shops: what pages, when, and in particular—latencies”
  4. Let’s run this through
  5. Whale: public domain. Butterfly: taken from Wikipedia, should be commons-*
  6. No copyright info, appears to be in public domain
  7. How many use: Spark? Flink? Kafka Streams? Akka Streams? Monix? Fs2? How many of you are evaluating these as alternatives?
  8. Check out our other sessions here and drop by our booth to learn more about us!
  9. “...By the time events reach our code, timing for 10s windows is inaccurate. Worse: catching up with a backlog will flood the 10s windows”