This repository has been archived by the owner on Nov 20, 2024. It is now read-only.
I found the following log entry; could it be related to the problem?
Member consumer-8-a776d75e-cdab-47b6-bd2a-b5f68117a7b5 sending LeaveGroup request to coordinator 127.0.0.1:9092 (id: 2147483646 rack: null) due to consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
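If slow message processing is the cause, the two settings the log mentions can be tuned through the Kafka binder's consumer properties. A minimal sketch of the `application.yml` changes (the values shown are illustrative, not recommendations):

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          consumer-properties:
            # Allow up to 10 minutes between poll() calls (default is 5 minutes).
            max.poll.interval.ms: 600000
            # Fetch fewer records per poll so each batch finishes sooner (default is 500).
            max.poll.records: 100
```

These binder-level properties apply to all consumer bindings; per-binding overrides are also possible via each binding's consumer configuration.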
Can you please share the code you are using to consume the records? The issue is most likely with max.poll.records. Also keep an eye on the heartbeats the consumer sends to the coordinator, because missed heartbeats can also cause consumers to be treated as dead and removed from the group.
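For reference, a typical functional-style handler that the binder would invoke looks like the sketch below. All names are placeholders (in a real application the `process()` method would be exposed as a `@Bean` whose name matches the binding, e.g. `process-in-0`):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of the handler Spring Cloud Stream binds to a topic.
class ProcessSketch {
    final List<String> handled = new ArrayList<>();

    // The binder calls the returned Consumer once per record from poll().
    // If the body blocks for too long (e.g. slow downstream calls), the
    // container misses its max.poll.interval.ms deadline and the consumer
    // leaves the group, producing exactly the LeaveGroup log shown above.
    Consumer<String> process() {
        return message -> handled.add(message);
    }
}
```

If your handler does heavy per-record work, that is where the poll interval gets exhausted.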
Spring Boot version: 2.5.0
Spring Cloud version: 2020.0.3
I use spring-cloud-stream-binder-kafka and spring-cloud-stream-binder-kafka-streams for Kafka production and consumption in this project.
In one project, I subscribed to N topics.
Two service nodes were started behind a load balancer.
At run time, we suddenly discovered that one of the topics no longer had any subscribed consumers.
As a result, its messages were backlogged and lost.
I had to restart the service nodes before that topic was consumed again.
What could cause this, and is there any way to find clues?
Also, is there a way to detect at run time that a topic has lost its subscribers, so that it can be re-subscribed?
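Until the root cause is found, one way to spot a topic that has lost its subscribers is to inspect the consumer group with the Kafka CLI. This assumes a broker reachable at 127.0.0.1:9092 and a placeholder group name `my-group`, and it requires a running cluster:

```shell
# Show each partition's assigned consumer; an empty CONSUMER-ID column
# (or a topic missing entirely) means no member is subscribed to it.
kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 \
  --describe --group my-group
```

If the Spring Boot actuator is enabled, the `bindings` actuator endpoint can also stop and restart an individual binding at run time, which may avoid restarting the whole node.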