The business systems of an organization are a continuous source of events. Each system also needs to know about events happening in the other systems. Exchanging these events through direct API calls creates a web of inter-dependencies that is fragile and fails to scale. We examine how this problem can be solved through the use of the right integration patterns, implemented as a lightweight event hub that leverages the power of Kafka and Confluent to operate at enterprise scale. We demonstrate how JavaScript, with its event-driven programming model, can be a good fit for implementing an event hub that ensures guaranteed message delivery in the face of failures within the individual subscriber systems.
Many organizations have large engineering teams skilled in NodeJS and a multitude of NodeJS applications. We show how these teams can easily leverage the power of Kafka and scale their applications with the right architectural building blocks. We also offer insights from our own experience of building NodeJS-based Kafka applications.
8. Tenant Management
• User Management
• Licensing
• Billing
• Onboarding
• Support
• API keys and Access Management
• Many Automations
9. Polling and Webhooks
• With polling, data is almost always stale.
• Most polling requests return no change.
• IO overhead, even if data is cached.
• No retry implementation needed.
• Webhooks are more optimal:
• More real time, less chatty.
• Subscription, delivery, and retry implementation required.
10. Node JS
• Data: APIs that interact with the database.
• Events: to build event-driven APIs/microservices.
• Non-blocking: async processing.
• IO: API calls.
12. Why Node JS
• Web Servers
• Integration APIs
• Frontend server
• APIs with Database
• Command line Apps
• Webhooks
• Real time IO, Web sockets
14. Advantages of Kafka for Node JS
1. Enables frontend services to directly access data.
2. Both Node and Kafka follow a similar scaling methodology.
3. API authentication.
4. Click streaming.
21. Design considerations
• How do we deliver events?
• at-most-once
• at-least-once
• exactly-once
✓ Onboarding instructions
✓ Public key exchange
✓ List of events
27. Design considerations
✓ How do we deliver events?
✓ at-most-once
✓ at-least-once
✓ exactly-once
✓ Onboarding instructions
✓ Public key exchange
✓ List of events
• Easy subscription interface
30. Design considerations
✓ How do we deliver events?
✓ at-most-once
✓ at-least-once
✓ exactly-once
✓ Onboarding instructions
✓ Public key exchange
✓ List of events
✓ Easy subscription interface
• Produce event
37. Produce to Kafka
Endpoint
POST http://kafka-host:8082/topics/{topic-name}
Headers:
"Accept": "application/vnd.kafka.json.v2+json, application/vnd.kafka+json, application/json"
"Content-Type": "application/vnd.kafka.json.v2+json"
Payload:
{
"records": [{
"key": "key-1",
"value": {...}
}]
}
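From NodeJS this REST Proxy call can be made with any HTTP client. A sketch assuming Node 18+ (global `fetch`); `buildProduceRequest` is an illustrative helper that assembles the endpoint, headers, and payload shown above:

```javascript
// Build the Confluent REST Proxy produce request for a topic.
function buildProduceRequest(topic, records) {
  return {
    url: `http://kafka-host:8082/topics/${topic}`,
    method: 'POST',
    headers: {
      'Accept':
        'application/vnd.kafka.json.v2+json, application/vnd.kafka+json, application/json',
      'Content-Type': 'application/vnd.kafka.json.v2+json',
    },
    body: JSON.stringify({ records }),
  };
}

// Sending it requires a reachable REST Proxy, so shown but not run here:
// const req = buildProduceRequest('user-events', [{ key: 'key-1', value: { name: 'alice' } }]);
// const res = await fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
```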
39. Design considerations
✓ How do we deliver events?
✓ at-most-once
✓ at-least-once
✓ exactly-once
✓ Onboarding instructions
✓ Public key exchange
✓ List of events
✓ Easy subscription interface
✓ Produce event
• Consume and process near real time
42. consumer.on('data', function(data) {
  startProcessing(data.value);
});
// structure of data
{
  value: Buffer.from('hi'),   // message contents as a Buffer
  size: 2,                    // size of the message, in bytes
  topic: 'topic',             // topic the message comes from
  offset: 10,                 // offset the message was read from
  partition: 1,               // partition the message was on
  key: 'key-1',               // key of the message if present
  timestamp: 1628388840187    // timestamp of message creation
}
47. Design considerations
✓ How do we deliver events?
✓ at-most-once
✓ at-least-once
✓ exactly-once
✓ Onboarding instructions
✓ Public key exchange
✓ List of events
✓ Easy subscription interface
✓ Produce event
✓ Consume and process near real time
• Ordering and Retry
48. Per subscriber queue
• Notify topic is partitioned by the number of total subscribers.
• Each partition will keep relevant subscribed events.
• After successful delivery, the consumer commits the offset.
• Partitions are limited, which is a scaling issue.
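Routing each subscriber's events to a fixed partition is what preserves per-subscriber ordering. A sketch of that idea; the hash function and names are illustrative (Kafka's default partitioner actually uses murmur2 on the message key):

```javascript
// Map a subscriber id to a stable partition so all of that
// subscriber's events land on the same partition, in order.
function partitionFor(subscriberId, numPartitions) {
  let hash = 0;
  for (const ch of subscriberId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep it an unsigned 32-bit int
  }
  return hash % numPartitions;
}
```

Because the mapping is deterministic, every event for `subscriber-a` goes to the same partition, and one consumer per partition delivers them in order. The downside noted above remains: the partition count bounds how many subscribers can be isolated this way.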
49. Multiple consumer groups
• Dispatcher does the ordering and produces to the delivery topic.
• Each notifier subscribes to the topic with a unique consumer group id.
• Manual commit after successful delivery.
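The manual-commit flow can be sketched independently of a live broker by injecting the delivery and commit steps. With node-rdkafka these would be an HTTP POST to the subscriber and `consumer.commitMessage(message)` with `enable.auto.commit: false`; the names here are illustrative, and delivery is shown synchronously for brevity:

```javascript
// Commit the offset only after the webhook delivery succeeds, so a
// crashed notifier re-reads undelivered events (at-least-once delivery).
function handleMessage(message, deliver, commit) {
  if (deliver(message.value)) {
    commit(message);  // offset moves forward only on success
    return true;
  }
  return false;       // stays uncommitted and will be redelivered
}
```

Committing before delivery would instead give at-most-once behavior: a crash between commit and delivery silently drops the event.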
50. • Each producer has its own topic.
• Events are keyed by username.
• Manual commit after successful delivery.
51. • We get a DNS error, TCP error, or timeout.
• Queue failed events for retry.
52. • Non 2xx response.
• Queue failed events for retry.
• Subscribers re-subscribe every time they have a known error or downtime.
• On subscription trigger retry.
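The retry decision and pacing from the two failure slides above can be sketched as small helpers. Status `0` stands in for transport-level failures (DNS/TCP errors, timeouts); the names and thresholds are illustrative, not from the talk:

```javascript
// Decide whether a failed delivery should be queued for retry.
function shouldRetry(statusCode) {
  if (statusCode === 0) return true;            // DNS / TCP error / timeout
  return statusCode < 200 || statusCode > 299;  // any non-2xx response
}

// Space retry attempts with capped exponential backoff.
function backoffMs(attempt, baseMs = 1000, maxMs = 60000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```

On each failed delivery the notifier would enqueue the event with its attempt count, then redeliver after `backoffMs(attempt)`; a re-subscription can trigger the retry queue immediately.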
55. Key Takeaways!
✓ How Kafka enables Node JS.
✓ Introducing Kafka decouples services.
✓ Webhooks are useful for delivering events, and webhooks can use Kafka.
✓ Qualities of Node JS: IO, Data, Events, and Async.
✓ Webhooks with at-least-once delivery.
✓ Strategies for parallel processing and optimizing consumers.
✓ Sequencing and retry.
✓ Allow subscribers to resubscribe and trigger retry.