This document discusses stream computing from an engineer's perspective. It begins by contrasting batch and stream processing, noting that stream processing handles data one record at a time with an emphasis on latency over throughput. The document then explores how to achieve scalability, performance, durability and availability in stream processing systems. It notes the tradeoffs between these goals and discusses challenges like handling failures. Specific open-source stream processing systems like Storm, Flink and Apex are then analyzed in terms of how they work, strengths, weaknesses and failure handling. The document concludes by discussing using distributed databases for state management in stream processing applications.
2. Batch vs. Stream
• Batch
• Processes data in chunks rather than one record at a time
• Throughput over latency (seconds, minutes, hours)
• E.g. MapReduce, Spark, Tez
• Stream
• Processes data one record at a time
• Latency over throughput (microseconds, milliseconds)
• E.g. Storm, Flink, Apex, Kafka Streams, GearPump
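The latency/throughput split above can be sketched in a few lines of plain Java (no framework, purely illustrative): the batch version waits for the whole chunk before answering, while the streaming version maintains a running aggregate and has an answer after every record.

```java
import java.util.List;

public class BatchVsStream {
    // Batch: wait for the whole chunk, then compute in one pass.
    static double batchAverage(List<Double> chunk) {
        double sum = 0;
        for (double v : chunk) sum += v;
        return sum / chunk.size();
    }

    // Stream: update a running aggregate one record at a time, so a
    // current answer is available with low latency after every record.
    static class StreamingAverage {
        private double sum = 0;
        private long count = 0;
        double update(double record) {
            sum += record;
            count++;
            return sum / count; // no waiting for the rest of the batch
        }
    }

    public static void main(String[] args) {
        List<Double> data = List.of(1.0, 2.0, 3.0, 4.0);
        System.out.println(batchAverage(data)); // 2.5, only after all records arrive

        StreamingAverage s = new StreamingAverage();
        for (double v : data) s.update(v); // an answer exists after each record
    }
}
```

Both arrive at the same result; the difference is when a usable answer exists.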
3. Scalability, Performance, Durability, Availability
• How do we handle more data?
• Quickly?
• Without ever losing data or compute?
• And ensure the system keeps working, even if there are failures?
5. What are the tradeoffs?
• If we focus on scalability, it’s harder to guarantee
• Durability – more moving pieces, more coordination, more failures
• Availability – more failures, harder to stay operational
• Performance – bottlenecks and synchronization
• If we focus on availability, it’s harder to guarantee
• Performance – monitoring and synchronization overhead
• Scalability and performance
• Durability – must recover without losing data
• If we focus on durability, it’s harder to guarantee
• Performance
• Scalability
6. Batch compute has it easy.
• Get scale-out and performance by adding hardware and taking longer
• Get durability with a durable data store and recompute
• Get availability by taking longer to recover (this makes life easier!)
• In stream processing, you don’t have time!
7. It’s not about performance and scale.
• Most platforms handle large volumes of data relatively quickly
• It’s about:
• Ease of use – how quickly can I build a complex application (not just word count)?
• Failure-handling – what happens when things break?
• Durability – how do I avoid losing data without sacrificing performance?
• Availability – how can I keep my system operational with a minimum of labor and without sacrificing performance?
16. Where do the weaknesses come from?
• Nimbus was a single point of failure (fixed as of the 1.0.0 release)
• An upstream bolt/spout failure triggers recomputation of the entire tuple tree
• Parallel independent streams require separate, redundant topologies
• Bolts/spouts share a JVM – hard to debug
• Failed tuples cannot be replayed faster than 1s (lower limit on acking)
• No dynamic topologies
• Cannot add or remove applications without service interruption
• Poor resource sharing in large clusters
18. Enter the Competition – Apache Flink
• Declarative functional API (like Spark)
• But a true streaming platform (sort of), with support for complex event processing (CEP)
• Optimized query execution
• Weaknesses:
• Depends on network micro-batching under the hood!
• Not battle-tested
• Failures still affect the entire topology
22. So what’s different from Storm?
• Flink handles planning and optimization for you
• Abstracts lower-level internals
• Clear semantics around windowing (which Storm has lacked)
• Failure handling is lightweight and fast!
• Exactly-once processing (given appropriate connectors at source and sink)
• Can run Storm topologies
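The "clear windowing semantics" point is worth making concrete. The sketch below is plain Java, not the Flink API: it shows the core idea behind a tumbling (fixed-size, non-overlapping) event-time window — each event is assigned to exactly one window by its timestamp, and aggregates are kept per window.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative tumbling-window count: events are {timestampMillis, value}
// pairs; each event lands in exactly one window of fixed width.
public class TumblingWindowCount {
    static Map<Long, Long> countPerWindow(List<long[]> events, long windowMillis) {
        Map<Long, Long> counts = new TreeMap<>();
        for (long[] e : events) {
            // Window assignment: round the event time down to a window start.
            long windowStart = (e[0] / windowMillis) * windowMillis;
            counts.merge(windowStart, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<long[]> events = List.of(
            new long[]{0, 1}, new long[]{500, 1}, // both in window [0, 1000)
            new long[]{1200, 1}                   // window [1000, 2000)
        );
        System.out.println(countPerWindow(events, 1000)); // {0=2, 1000=1}
    }
}
```

A real engine adds the hard parts this sketch omits — out-of-order events, watermarks, and window triggers — which is exactly where explicit semantics matter.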
23. What can’t it do?
• Dynamically update the topology
• Dynamically scale
• Recover from errors without stopping the entire DAG
• Allow fine-grained control of how data moves through the system – locality, data partitioning, routing
• You can do these individually, but not all at once
• The high-level API is a curse!
• Run in production (maybe?)
26. Which are unique?
• Apache Beam (Google’s baby - unifies all the platforms)
• Apache Apex (Robust architecture, scalable, fast, durable)
• IBM InfoSphere Streams (proprietary, expensive, the best)
27. Let’s look at Apex
• Unique provenance
• Built for the business at Yahoo – not a research project
• Built for reliability and strict processing semantics, not performance
• Apex just works
• Strengths
• Dynamism
• Scalability
• Failure-handling
• Weaknesses
• No high-level API
• More complex architecture
33. So it’s the best? Sort of!
• Most robust failure-handling
• Allows fine-tuning of data flows and DAG setup
• Excellent exploratory UI
• But
• Learning curve
• No high-level API
• No machine learning support
• Built for business, not for simplicity
34. Streaming is great – what about state?
• What if I need to persist data?
• Across operators?
• Retrieve it quickly?
• Do complex analytics?
• And build models?
35. Why state?
• Historical features (e.g. spend amount over 30 days)
• Statistical aggregates
• Machine learning model training
• Why cross-operator? Because data is partitioned by key, aggregating over multiple fields requires state shared across operators.
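The "spend over 30 days" feature above can be sketched as keyed state in plain Java (class and field names are illustrative, not any platform's API): per customer, keep the transactions inside the window, evict expired ones, and return the running total as the feature value.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of keyed state for a historical feature: total spend per customer
// over a sliding 30-day window, updated one transaction at a time.
public class SpendWindow {
    private static final long WINDOW_MS = 30L * 24 * 60 * 60 * 1000;
    // customerId -> (timestampMillis, amountInCents) events inside the window
    private final Map<String, Deque<long[]>> state = new HashMap<>();
    private final Map<String, Double> totals = new HashMap<>();

    // Process one transaction; return the updated 30-day spend feature.
    double update(String customerId, long ts, double amount) {
        Deque<long[]> events = state.computeIfAbsent(customerId, k -> new ArrayDeque<>());
        events.addLast(new long[]{ts, (long) (amount * 100)}); // cents avoid drift
        double total = totals.getOrDefault(customerId, 0.0) + amount;
        // Evict transactions that fell out of the 30-day window.
        while (!events.isEmpty() && events.peekFirst()[0] < ts - WINDOW_MS) {
            total -= events.pollFirst()[1] / 100.0;
        }
        totals.put(customerId, total);
        return total;
    }

    public static void main(String[] args) {
        SpendWindow w = new SpendWindow();
        System.out.println(w.update("c1", 0L, 10.0));    // 10.0
        System.out.println(w.update("c1", 1000L, 5.0));  // 15.0
    }
}
```

This is exactly the state that becomes awkward once data is partitioned across operators — which is the motivation for an external store in the next slide.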
36. Distributed In-Memory Databases
• Can support low-latency streaming use cases
• Durability becomes complicated because memory is volatile
• Memory is expensive and limited
• Examples: Memcached, Redis, MemSQL, Ignite, Hazelcast, distributed hash tables
38. Lab!
• Build and deploy a simple architecture on a streaming platform
• Ingest data
• Engineer features
• Build a model
• Score against the model
• Storm + H2O
• Model build and model score are two different steps
• H2O allows you to export your model as a POJO that can be added as Java code in a Storm bolt
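The pattern looks roughly like the sketch below. All names here are hypothetical stand-ins, not the real H2O or Storm APIs: `TitanicModel` plays the role of the class H2O generates on export, and `execute` mimics what a Storm bolt would do with each incoming feature tuple.

```java
// Sketch of the "model POJO inside a bolt" pattern (hypothetical names).
public class ScoringBoltSketch {
    // Stand-in for an exported model POJO: here, a fixed decision rule.
    static class TitanicModel {
        double predict(double[] features) {
            // e.g. features = {pclass, sexIsFemale, age};
            // predict survival for 1st class or female passengers
            return (features[0] == 1.0 || features[1] == 1.0) ? 1.0 : 0.0;
        }
    }

    private final TitanicModel model = new TitanicModel();

    // In a real bolt this would be execute(Tuple input, OutputCollector out):
    // unpack the tuple's fields into a feature vector, score, emit the score.
    double execute(double[] featureTuple) {
        return model.predict(featureTuple);
    }

    public static void main(String[] args) {
        ScoringBoltSketch bolt = new ScoringBoltSketch();
        System.out.println(bolt.execute(new double[]{1.0, 0.0, 38.0})); // 1.0
        System.out.println(bolt.execute(new double[]{3.0, 0.0, 22.0})); // 0.0
    }
}
```

The key property is that scoring is just a local method call per tuple — no network hop to a model server — which is what keeps latency low.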
39. Goals
• Demonstrate parallel feature computation
• Demonstrate model creation and export using H2O
• Given a labeled dataset (e.g. Titanic), generate a set of scores by running the model within the Storm topology
• Validate the generated results against a validation dataset (in Storm or offline)
40. Plan of attack
• Step 0:
• Storm topology, executing a model (could be linear regression you coded yourself), locally on a single node
• Step 1:
• Storm topology, executing an H2O model locally on a single node
• Step 2:
• Storm topology, executing an H2O model, on multiple nodes (real or virtual)
• Step 3 (Extra credit):
• Install Redis as a state store and use a Redis client to access Redis from Storm
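For Step 0, the "linear regression you coded yourself" can be as small as the sketch below. The weights are made up for illustration; in the lab you would fit them offline and paste them in before wiring the class into a single-node topology.

```java
// A minimal Step-0 model: linear regression scoring coded by hand.
public class LinearModel {
    private final double[] weights;
    private final double bias;

    LinearModel(double[] weights, double bias) {
        this.weights = weights;
        this.bias = bias;
    }

    // Score one record: dot(weights, features) + bias.
    double score(double[] features) {
        double y = bias;
        for (int i = 0; i < weights.length; i++) {
            y += weights[i] * features[i];
        }
        return y;
    }

    public static void main(String[] args) {
        // Illustrative weights, not a fitted model.
        LinearModel m = new LinearModel(new double[]{0.5, -0.25}, 1.0);
        System.out.println(m.score(new double[]{2.0, 4.0})); // 1.0
    }
}
```

Once this works end to end, Step 1 swaps `score` for the exported H2O POJO's predict call without changing the topology.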
41. Final Deliverable
• A report detailing your experience working with this technology
• What worked?
• What did not work?
• What was setup and usability like?
• What issues did you run into?
• How did you resolve these issues?
• Were you able to get the system operational?
• Were you able to get the results you wanted?