
Can multiple Kafka consumers read the same message from a partition?

We are planning to write a Kafka consumer (Java) which reads a Kafka topic and performs the action specified in each message.

Since the consumers run independently, will each message be processed by only one consumer, or will all the consumers process the same message because each of them has its own offset in the partition?

Please help me understand.

It looks like Kafka doesn't have queues; it only has topics.
All Kafka topics are ordered sets - in other words, they are queues.
Kafka topics are not queues: once a message is consumed from a topic, it stays there (unless its retention period has expired) and only the consumer's offset moves forward, whereas in a queue a consumed message is removed. Ordering is also guaranteed per partition only.
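
A minimal Java sketch of that last point, assuming a hypothetical topic "events" on a broker at localhost:9092 that already contains some consumed records: the consumer can seek back and read them again, because consuming only advances the offset and removes nothing from the log.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ReReadExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "reread-demo");                // assumed group id
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        TopicPartition tp = new TopicPartition("events", 0); // assumed topic and partition
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, 0); // jump back to the first offset of the partition
            // Records consumed earlier are still in the log and are returned again here.
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(2))) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```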

Lukáš Havrlant

It depends on the Group Id. Suppose you have a topic with 12 partitions. If you have 2 Kafka consumers with the same Group Id, each of them will read 6 partitions, meaning they will read different sets of partitions and therefore different sets of messages. If you have 4 Kafka consumers with the same Group Id, each of them will read 3 different partitions, and so on.

But when you use different Group Ids, the situation changes. If you have two Kafka consumers with different Group Ids, each of them will read all 12 partitions without any interference from the other, meaning both consumers will independently read the exact same set of messages. If you have four Kafka consumers with different Group Ids, they will all read all partitions, and so on.
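
A minimal Java sketch of the first case, assuming a hypothetical topic "orders" and a broker at localhost:9092: run two copies of this program with the same group.id and Kafka will split the 12 partitions between them.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SameGroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "order-processors");          // same Group Id in every instance
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // assumed topic name
            while (true) {
                // Each instance only receives records from the partitions assigned to it.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                                      record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```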


You cannot inform the other consumers that one message hasn't been processed correctly. But if one consumer fails, the others take over its work. Meaning: if you have 12 partitions and 3 consumers with the same Group Id, each consumer reads 4 partitions. If one consumer fails, rebalancing occurs and the two surviving consumers will each read 6 partitions. Be aware that if you don't commit the offset after every message, you can read some messages more than once (see the manual-commit sketch after these comments).
The offset is defined by topic, partition and group id. The surviving consumers with the same group id can retrieve the committed offset because they read the same topic and have the same group id.
@FaizHalde: In our case, we first consume each message for real-time processing, and later we consume the same set of messages a second time when we transfer them from Kafka to HDFS for further analysis. In general, if you have multiple microservices, each of them can read the same messages and do different things with them.
What happens when there are more consumers in the same group, let's say 14, and only 12 partitions? Can the redundant consumers still connect to Kafka?
@BiancaTesila The two extra consumers would stay connected, but they would read nothing. Basically they would be idle.
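
A minimal Java sketch of the offset-commit point above, assuming a hypothetical topic "orders" and a broker at localhost:9092: with auto-commit disabled, the consumer commits offsets explicitly after processing a batch, so a crash before commitSync() means those records are delivered again after rebalancing.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "order-processors");          // assumed group id
        props.put("enable.auto.commit", "false");           // we commit offsets ourselves
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // assumed topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // hypothetical business logic
                }
                // Committing only after the batch is processed narrows the window in which
                // a crash would cause these records to be re-read after a rebalance.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("processing offset=%d value=%s%n", record.offset(), record.value());
    }
}
```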
SynergyChen

I found this image from O'Reilly helpful:

https://www.oreilly.com/library/view/kafka-the-definitive/9781491936153/assets/ktdg_04in05.png

Within same group: NO

Two consumers (Consumer 1 and Consumer 2) within the same group (Group 1) CANNOT consume the same message from a partition (Partition 0).

Across different groups: YES

Two consumers in two different groups (Consumer 1 from Group 1 and Consumer 1 from Group 2) CAN consume the same message from the same partition (Partition 0).


Karan Khanna

Kafka will deliver each message in the subscribed topics to one process in each consumer group. This is achieved by balancing the partitions between all members in the consumer group so that each partition is assigned to exactly one consumer in the group. Conceptually you can think of a consumer group as being a single logical subscriber that happens to be made up of multiple processes.

In simpler words, a Kafka message/record is processed by only one consumer process per consumer group. So if you want multiple consumers to process the same message/record, you can put them in different consumer groups, as in the sketch below.
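
A hedged Java sketch of that idea, assuming a hypothetical topic "orders", a broker at localhost:9092, and made-up group names "realtime-processing" and "hdfs-archiver": because the two consumers use different group ids, each of them independently receives every record published to the topic.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TwoGroupsExample {

    // Builds a consumer subscribed to "orders" under the given consumer group.
    static KafkaConsumer<String, String> newConsumer(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", groupId);
        props.put("auto.offset.reset", "earliest");         // read the topic from the beginning
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("orders")); // assumed topic name
        return consumer;
    }

    public static void main(String[] args) {
        // Different group ids, so each consumer gets its own copy of every record.
        try (KafkaConsumer<String, String> realtime = newConsumer("realtime-processing");
             KafkaConsumer<String, String> archiver = newConsumer("hdfs-archiver")) {
            for (ConsumerRecord<String, String> r : realtime.poll(Duration.ofSeconds(2))) {
                System.out.println("realtime saw offset " + r.offset());
            }
            for (ConsumerRecord<String, String> r : archiver.poll(Duration.ofSeconds(2))) {
                System.out.println("archiver saw offset " + r.offset());
            }
        }
    }
}
```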


Thank you very much. This helped me understand the real purpose behind consumer groups.
