Apache Pulsar: The Next Generation Messaging and Queuing System

1. © 2019 SPLUNK INC.
Apache Pulsar
The Next Generation Messaging and Queuing System
2.
Intro
Matteo Merli, Senior Principal Engineer, Splunk; Co-creator of Apache Pulsar
Karthik Ramasamy, Senior Director of Engineering, Splunk
4.
Messaging
Message passing between components, applications, and services
5.
Streaming
Analyze events that just happened
7.
Use cases

Messaging
● OLTP, integration
● Main challenges:
○ Latency
○ Availability
○ Data durability
○ High-level features
■ Routing, DLQ, delays, individual acks

Streaming
● Real-time analytics
● Main challenges:
○ Throughput
○ Ordering
○ Stateful processing
○ Batch + real-time
9.
Apache Pulsar
Flexible pub-sub and compute backed by durable log storage

● Durability: data replicated and synced to disk
● Low latency: publish latency of 5 ms at the 99th percentile
● High throughput: can reach 1.8 M messages/s in a single partition
● High availability: the system is available if any 2 nodes are up
● Cloud native: takes advantage of dynamic cluster scaling in cloud environments
10.
Apache Pulsar
Flexible pub-sub and compute backed by durable log storage

● Unified messaging model: supports both topic and queue semantics in a single model
● Highly scalable: can support millions of topics
● Native compute: lightweight compute framework based on functions
● Multi-tenant: supports multiple users and workloads in a single cluster
● Geo-replication: out-of-the-box support for geographically distributed applications
11.
Apache Pulsar project in numbers
● 192 contributors
● 30 committers
● 100s of adopters
● 4.6K GitHub stars
14.
Pulsar client libraries
● Java — C++ — C — Python — Go — Node.js — WebSocket APIs
● Partitioned topics
● Apache Kafka compatibility wrapper API
● Transparent batching and compression
● TLS encryption and authentication
● End-to-end encryption
15.
Architectural view
Separate layers between brokers and bookies
● Brokers and bookies can be added independently
● Traffic can be shifted very quickly across brokers
● New bookies will quickly ramp up on traffic
16.
Apache BookKeeper
Replicated log storage
● Low-latency durable writes
● Simple, repeatable read consistency
● Highly available
● Stores many logs per node
● I/O isolation
17.
Inside BookKeeper
Storage optimized for sequential & immutable data
● I/O isolation between write and read operations
● Does not rely on the OS page cache
● Slow consumers won’t impact latency
● Very effective I/O patterns:
○ Journal: append-only, no reads
○ Storage device: bulk writes and sequential reads
● Number of files is independent of the number of topics
18.
Segment Centric Storage
In addition to partitioning, messages are stored in segments (based on time and size).
Segments are independent from each other and spread across all storage nodes.
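The effect of segment-centric placement can be illustrated with a small sketch. This is a toy model, not Pulsar code: the bookie names, ensemble size, and round-robin policy are all made up for illustration. The point is that each closed segment gets its own ensemble of storage nodes, so a topic's data spreads over the whole cluster instead of being pinned to one fixed replica set.

```python
# Toy model (not Pulsar code): each segment is assigned its own
# ensemble of storage nodes, round-robin over all available bookies.
from itertools import cycle

BOOKIES = ["bookie-1", "bookie-2", "bookie-3", "bookie-4", "bookie-5"]

def place_segments(num_segments, ensemble_size=3):
    """Assign every segment an independent ensemble of bookies."""
    ring = cycle(BOOKIES)
    placement = {}
    for seg_id in range(num_segments):
        placement[seg_id] = [next(ring) for _ in range(ensemble_size)]
    return placement

placement = place_segments(4)
# Every bookie ends up holding some segment of the topic, so a topic's
# capacity is not capped by the disks of any single node.
used = {b for ensemble in placement.values() for b in ensemble}
print(used == set(BOOKIES))  # True: segments spread across all nodes
```

Because segments are independent, adding a new bookie immediately makes it eligible for new-segment ensembles, which is why (as the architecture slide notes) new bookies ramp up on traffic quickly.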
20.
Tiered Storage
Unlimited topic storage capacity
Achieves true “stream storage”: keep the raw data forever in stream form
Extremely cost effective
21.
Schema Registry
Stores information on the data structure — stored in BookKeeper
Enforces data types on topics
Allows for compatible schema evolution
22.
Schema Registry
● Integrated schema in API
● End-to-end type safety — enforced in the Pulsar broker
Producer<MyClass> producer = client
.newProducer(Schema.JSON(MyClass.class))
.topic("my-topic")
.create();
producer.send(new MyClass(1, 2));
Consumer<MyClass> consumer = client
.newConsumer(Schema.JSON(MyClass.class))
.topic("my-topic")
.subscriptionName("my-subscription")
.subscribe();
Message<MyClass> msg = consumer.receive();
Type Safe API
23.
Geo-Replication
Scalable asynchronous replication
Integrated into the broker message flow
Simple configuration to add/remove regions
24.
Replicated Subscriptions
Migrate subscriptions across geo-replicated clusters
● Consumption will restart close to where a consumer left off, with a small number of duplicates
● Implementation:
○ Markers are injected into the data flow
○ A consistent snapshot of message ids is created across clusters
○ A relationship is established: if MA-1 was consumed in Cluster-A, then MB-2 must have been consumed in Cluster-B
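The snapshot idea above can be sketched as a toy model. This is not the actual Pulsar implementation; the message ids are plain integers and the snapshot list is hand-written, purely to show how failover picks a restart position: rewind to the newest snapshot fully covered by what was already acknowledged, accepting a few duplicates.

```python
# Toy sketch of replicated-subscription snapshots (not Pulsar internals).
# Each snapshot records: "a consumer that reached id_a in Cluster-A has
# certainly also consumed up to id_b in Cluster-B".
snapshots = [
    {"A": 10, "B": 8},
    {"A": 20, "B": 17},
    {"A": 30, "B": 29},
]

def failover_position(snapshots, last_acked_in_a):
    """Restart position in Cluster-B after failing over from Cluster-A."""
    best = 0  # beginning of the topic if no snapshot applies
    for snap in snapshots:
        if snap["A"] <= last_acked_in_a:
            best = max(best, snap["B"])
    return best

# The consumer acked up to 25 in Cluster-A before the failover, so it
# restarts at 17 in Cluster-B; work between the snapshot and the last
# ack may be re-delivered -- the "small number of duplicates" above.
print(failover_position(snapshots, 25))  # 17
```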
25.
Multi-Tenancy
A single Pulsar cluster supports multiple users and mixed workloads
● Authentication / authorization / namespaces / admin APIs
● I/O isolation between writes and reads
○ Provided by BookKeeper
○ Ensures readers draining a backlog won’t affect publishers
● Soft isolation
○ Storage quotas, flow control, back-pressure, rate limiting
● Hardware isolation
○ Constrain some tenants to a subset of brokers or bookies
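One of the soft-isolation mechanisms listed above, rate limiting, is commonly built as a token bucket. The sketch below is illustrative only (it is not Pulsar's internal rate limiter; the class name and parameters are made up): each tenant gets a bucket that refills at a fixed rate, so a bursty tenant is throttled without affecting others.

```python
# Illustrative per-tenant token-bucket rate limiter (a toy model of the
# "soft isolation" idea, not Pulsar's actual implementation).
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        """Return True if one message is admitted at time `now` (seconds)."""
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A tenant limited to 2 msg/s with a burst of 2: the third message sent
# in the same instant is rejected, but is admitted once tokens refill.
bucket = TokenBucket(rate=2, capacity=2)
print([bucket.allow(0.0) for _ in range(3)])  # [True, True, False]
print(bucket.allow(1.0))                      # True (tokens refilled)
```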
28.
Pulsar Functions
● User-supplied compute against a consumed message
○ ETL, data enrichment, filtering, routing
● Simplest possible API
○ Use the language-specific “function” notation
○ No SDK required
○ SDK available for more advanced features (state, metrics, logging, …)
● Language agnostic
○ Java, Python, and Go
○ Easy to support more languages
● Pluggable runtime
○ Managed or manual deployment
○ Run as threads, processes, or containers in Kubernetes
29.
Pulsar Functions
Examples

Python:

def process(input):
    return input + '!'

Java:

import java.util.function.Function;

public class ExclamationFunction
        implements Function<String, String> {
    @Override
    public String apply(String input) {
        return input + "!";
    }
}
30.
Pulsar Functions
State management
● Functions can store state in stream storage
● State is global and replicated
● Multiple instances of the same function can access the same state
● The functions framework provides a simple abstraction over state
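The "global, replicated state" model above can be sketched with a toy example. This is not the Pulsar Functions SDK: the plain dictionary below stands in for the replicated store (which in real Pulsar is backed by BookKeeper's table service), and the function signature is simplified for illustration.

```python
# Toy model of shared function state: several instances of the same
# function update one logical key/value store. In real Pulsar this
# store is replicated and durable; here a dict stands in for it.
from collections import defaultdict

shared_state = defaultdict(int)  # stand-in for the replicated store

def counter_function(input_str, state):
    """Mimics a function instance incrementing per-word counters."""
    for word in input_str.split():
        state[word] += 1

# Two "instances" process different messages but see the same state.
counter_function("pulsar functions", shared_state)  # instance 1
counter_function("pulsar state", shared_state)      # instance 2
print(dict(shared_state))  # {'pulsar': 2, 'functions': 1, 'state': 1}
```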
31.
Pulsar Functions
State management
● Implemented on top of Apache BookKeeper’s “table service”
● BookKeeper provides a sharded key/value store based on:
○ Log & snapshot, stored as BookKeeper ledgers
○ Warm replicas that can be quickly promoted to leader
● In case of leader failure, there is no downtime and no huge log to replay
32.
Pulsar Functions
State example

import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

public class CounterFunction
        implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        // split("\\s+") splits on whitespace; split(".") would treat
        // "." as a regex matching any character and yield no words
        for (String word : input.split("\\s+")) {
            context.incrCounter(word, 1);
        }
        return null;
    }
}
33.
Pulsar IO
A connector framework based on Pulsar Functions
36.
Pulsar SQL
● Uses Presto for interactive SQL queries over data stored in Pulsar
● Query historic and real-time data
● Integrated with the schema registry
● Can join with data from other sources
37.
Pulsar SQL
● Reads data directly from BookKeeper into Presto, bypassing the Pulsar broker
● Many-to-many data reads
○ Data is split even within a single partition: multiple workers can read in parallel from one Pulsar partition
● Time-based indexing: use “publishTime” in predicates to reduce the data read from disk
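How a "publishTime" predicate reduces the data read can be shown with a small pruning sketch. This is illustrative only, not the Presto connector's code: the segment metadata layout and field names are invented, but the principle matches the slide, since each segment knows its publish-time range, whole segments outside the predicate are skipped.

```python
# Illustrative time-based pruning (not Presto connector code): every
# segment carries its min/max publish time, so a predicate like
# "publishTime >= bound" lets the reader skip whole segments.
segments = [
    {"id": 0, "min_ts": 0,   "max_ts": 99},
    {"id": 1, "min_ts": 100, "max_ts": 199},
    {"id": 2, "min_ts": 200, "max_ts": 299},
]

def segments_to_read(segments, publish_time_ge):
    """Keep only segments that may hold rows with publishTime >= bound."""
    return [s["id"] for s in segments if s["max_ts"] >= publish_time_ge]

print(segments_to_read(segments, 150))  # [1, 2] -- segment 0 is skipped
```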
38.
Pulsar Storage API
● Work in progress to allow direct access to data stored in Pulsar
● A generalization of the work done for the Presto connector
● The most efficient way to retrieve and process data from “batch” execution engines