Kafka pods in a failed state as they are unable to get the KRaft quorum state #11181
shreyasarani23 started this conversation in General
- Maybe that is related to the ephemeral storage used in your controllers, which can be lost at any time? That would essentially truncate the metadata log.
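For example (the cluster name, pool name, replica count and volume size below are illustrative assumptions, not taken from the original post), a controller pool backed by persistent-claim storage keeps the __cluster_metadata log across pod restarts and rescheduling, whereas ephemeral storage is an emptyDir volume that is wiped whenever a pod is deleted or moved to another node:

```yaml
# Hypothetical controller KafkaNodePool; all names and sizes are illustrative.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controller
  labels:
    strimzi.io/cluster: my-cluster   # must match the name of the Kafka resource
spec:
  replicas: 3
  roles:
    - controller
  storage:
    # persistent-claim provisions a PVC per controller, so the KRaft metadata
    # log survives restarts and node replacements; "type: ephemeral" would
    # lose it every time the pod is rescheduled.
    type: persistent-claim
    size: 20Gi
    deleteClaim: false
```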
- Below is my KafkaNodePool spec, and here is my Kafka spec.
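Roughly, the layout in question is the usual Strimzi 0.45.0 KRaft arrangement of a Kafka resource plus broker/controller KafkaNodePools; the manifest below is an illustrative sketch (names, versions and sizes are assumptions, not the exact spec from this post):

```yaml
# Illustrative broker pool and Kafka resource for a KRaft cluster on Strimzi 0.45.0.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled   # replicas and storage come from the node pools
    strimzi.io/kraft: enabled        # KRaft mode: no spec.zookeeper section
spec:
  kafka:
    version: 3.9.0
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 3
      default.replication.factor: 3
      min.insync.replicas: 2
  entityOperator:
    topicOperator: {}
    userOperator: {}
```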
With the above specs the Kafka cluster was running fine, but since yesterday we have suddenly been encountering this issue in all of the broker pods, while the controller pods keep running fine. Even after restarting the Kafka pods we are facing the same issue. Any idea on this? I guess it has something to do with __cluster_metadata and the KRaft quorum. Let me know if you need more info on this.
Strimzi version: 0.45.0 (installed through the Helm chart)
Environment: EKS, Kubernetes 1.31