The document provides instructions for demonstrating message queues using Apache Kafka and RabbitMQ. It explains how to start the required servers, create topics and producers, and process streaming data using Kafka Streams. It also covers starting RabbitMQ, sending and receiving messages, exchange types for routing, and different client types.
9. Kill a broker
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic demo
Then kill a leader broker…
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic demo
Check available messages…
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic demo
10. Kafka Streams for data processing
Let’s create a file…
echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka summit" > file-input.txt
...and then create a topic…
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-file-input
...and publish data to this topic…
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input < file-input.txt
11. Kafka Streams for data processing
Let’s run an analytics job…
bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo
And see the results in the output topic:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic streams-wordcount-output --from-beginning --property print.key=true --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
13. Kafka Streams for data processing
WordCountDemo:
KTable<String, Long> wordCounts = textLines
    // Split each text line, by whitespace, into words.
    .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
    // Ensure the words are available as record keys for the next aggregate operation.
    .map((key, value) -> new KeyValue<>(value, value))
    // Count the occurrences of each word (record key) and store the results into a table named "Counts".
    .countByKey("Counts");
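As a rough illustration that needs no running cluster, the same transformation can be sketched in plain Java over the sample input published above; the class and method names here are stand-ins, not the Kafka Streams API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WordCountSketch {
    // Mirrors the WordCountDemo topology: lowercase each line,
    // split on non-word characters, count occurrences per word.
    public static Map<String, Long> count(String... lines) {
        Map<String, Long> counts = new LinkedHashMap<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\W+")) {
                counts.merge(word, 1L, Long::sum); // same role as countByKey("Counts")
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Long> counts = count(
            "all streams lead to kafka",
            "hello kafka streams",
            "join kafka summit");
        counts.forEach((word, n) -> System.out.println(word + "\t" + n));
    }
}
```

Running this against the three lines from file-input.txt should count "kafka" three times and "streams" twice, matching what the console consumer eventually shows for the output topic.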
20. Exchange types
Exchanges are the entities to which messages are sent.
An exchange takes a message and routes it to zero or more queues. The routing algorithm depends on
the exchange type and on rules called bindings.
Types:
Direct
Fanout
Topic
Headers
Start two consumers and show that each message is delivered to only one of them.
Explain the default exchange used in the previous example.
The default exchange is a direct exchange with no name (empty string) pre-declared by the broker. It has one special property that makes it very useful for simple applications: every queue that is created is automatically bound to it with a routing key which is the same as the queue name.
Direct exchanges are often used to distribute tasks between multiple workers (instances of the same application) in a round-robin manner. Messages are load-balanced between consumers, not between queues.
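The round-robin delivery described above can be sketched without a broker; the queue and consumer names below are illustrative stand-ins, not the RabbitMQ client API:

```java
import java.util.List;

public class RoundRobinQueue {
    private final List<String> consumers;
    private int next = 0;

    public RoundRobinQueue(List<String> consumers) {
        this.consumers = consumers;
    }

    // Each message goes to exactly one consumer, in rotation:
    // load is balanced between consumers, never duplicated to all of them.
    public String deliver(String message) {
        String consumer = consumers.get(next);
        next = (next + 1) % consumers.size();
        return consumer;
    }

    public static void main(String[] args) {
        RoundRobinQueue q = new RoundRobinQueue(List.of("worker-1", "worker-2"));
        for (String msg : List.of("task-a", "task-b", "task-c")) {
            System.out.println(msg + " -> " + q.deliver(msg));
        }
    }
}
```

This is the behavior the two-consumer demo above makes visible: with two workers bound to one queue, consecutive messages alternate between them.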
In a fanout exchange, the routing key is ignored. If N queues are bound to a fanout exchange, then when a new message is published to that exchange, a copy of the message is delivered to all N queues.
A headers exchange is designed for routing on multiple attributes that are more easily expressed as message headers than a routing key. Headers exchanges ignore the routing key attribute. Instead, the attributes used for routing are taken from the headers attribute.
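To make the difference between exchange types concrete, here is a minimal in-memory sketch of direct vs. fanout routing; the types and bindings are illustrative assumptions, not the RabbitMQ API:

```java
import java.util.ArrayList;
import java.util.List;

public class ExchangeSketch {
    record Binding(String queue, String routingKey) {}

    // Direct: deliver to queues whose binding key equals the message routing key.
    static List<String> routeDirect(List<Binding> bindings, String routingKey) {
        List<String> queues = new ArrayList<>();
        for (Binding b : bindings) {
            if (b.routingKey().equals(routingKey)) queues.add(b.queue());
        }
        return queues;
    }

    // Fanout: the routing key is ignored; every bound queue gets a copy.
    static List<String> routeFanout(List<Binding> bindings) {
        List<String> queues = new ArrayList<>();
        for (Binding b : bindings) queues.add(b.queue());
        return queues;
    }

    public static void main(String[] args) {
        List<Binding> bindings = List.of(
            new Binding("errors", "error"),
            new Binding("all-logs", "info"));
        System.out.println(routeDirect(bindings, "error")); // only the matching queue
        System.out.println(routeFanout(bindings));          // every bound queue
    }
}
```

A topic exchange would sit between the two: it matches the routing key against binding patterns ("*", "#") instead of requiring exact equality.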